The Veterans' Health Care Eligibility Reform Act of 1996 authorized VA to provide certain medical services not previously available to veterans with non-service-connected conditions. The Balanced Budget Act of 1997 authorized VA to use third-party health insurance payments to supplement its medical care appropriations. As part of VA's 1997 strategic plan, VA expected that collections from third-party payments and co-payments would cover the majority of costs of care for these veterans, some of whom VA has determined have higher incomes. For fiscal year 2002, about a quarter of VA's user population consisted of higher-income veterans. In September 1999, VA adopted a new fee schedule, called "reasonable charges," consisting of itemized fees based on diagnoses and procedures. This schedule allows VA to bill more accurately for the care provided. By linking charges to the care provided, VA created new bill-processing demands, particularly in the areas of documenting care, coding that care, and processing bills per episode of care. First, VA must be prepared to provide the insurance company with supporting medical documentation for itemized charges. Second, VA must accurately assign medical diagnosis and procedure codes to set appropriate charges, a task that requires coders to search through medical documentation and various databases to identify all billable care. Third, VA must prepare a separate bill for each health care provider involved in the patient's care and an additional bill when a hospital facility charge applies. To collect from health insurance companies, VA uses a four-function process to manage the information needed to bill and collect third-party payments, also known as the Medical Care Collection Fund (MCCF) Revenue Cycle (see fig. 1). First, the patient intake function involves gathering insurance information and verifying that information with the insurance company, as well as collecting demographic data on the veteran. 
Second, utilization review involves precertification of care in compliance with the veteran's insurance policy, including continued-stay reviews to determine medical necessity. Third, billing functions involve properly documenting the health care provided to patients by physicians and other health care providers. Based on the physician documentation, the diagnoses and medical procedures performed are coded. VA then creates and sends bills to insurance companies based on the insurance and coding information obtained. Fourth, the collections or accounts receivable function includes processing payments from insurance companies and following up on outstanding or denied bills. As discussed in prior OIG and GAO reports, reasons for untimely third-party billings included heavy caseloads and backlogs of cases awaiting coding. VA was initially unprepared to bill under reasonable charges in fiscal year 2000, particularly because of its lack of proficiency in developing the medical documentation and coding needed to appropriately support a bill. As a result, VA reported that many of its medical centers developed billing backlogs. In January 2003, we reported that after initially being unprepared in fiscal year 2000 to bill reasonable charges, VA began improving its implementation of the processes necessary to increase its third-party billings and collections. In fiscal year 2002, VA submitted over 8 million third-party insurance bills, a 54 percent increase over the number submitted in fiscal year 2001. VA officials attributed increased third-party billings to, among other reasons, reductions in billing backlogs and an increasing number of patients with billable insurance. We also reported that collections could be increased by addressing operational problems such as unpaid accounts receivable and missed billing opportunities due to insufficient identification of insured patients, inadequate documentation to support billings, coding problems, and billing backlogs. 
To address these issues and further increase collections, VA has several initiatives under way and is continuing to develop additional ones. In September 2001, VA introduced its Veterans Health Administration Revenue Cycle Improvement Plan. This plan initially included 24 actions to improve revenue performance. After the establishment of the Chief Business Office (CBO) in May 2002, VA issued the Revenue Action Plan (Plan), which superseded the 2001 plan and includes 16 objectives. With the implementation of several actions in the Plan, VA has reported increases in the number of billings. For example, in fiscal year 2003, VA submitted 10 million bills, a 25 percent increase over the number of bills in fiscal year 2002 and a 160 percent increase over fiscal year 2000. VA also reported that its collections of third-party payments over the past few years continue to increase, as shown in figure 2. For fiscal year 2003, VA reported that it collected third-party payments of $804 million, a 6 percent increase over the $760 million collected in fiscal year 2002 and a 49 percent increase over the $540 million collected in fiscal year 2001. To gain an understanding of VHA's policies and procedures and the related internal controls for billings and collections, to identify key control activities, and to assess the design effectiveness of those controls, we obtained and reviewed VA and VHA directives, handbooks, and other policy guidance, as well as previous reports issued by VA's OIG. We also conducted interviews and walkthroughs with VHA personnel and reviewed previous GAO reports. To assess whether key control activities for the two areas of operation were effectively implemented, we used a case study approach, reviewing transaction documentation at three VA medical centers. We selected medical centers with varying success in meeting established performance goals and other factors. 
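The percentage increases cited above reduce to simple percent-change arithmetic. The helper function below is illustrative, not part of any VA system; the bill counts and dollar amounts are taken from the report text.

```python
# Hypothetical helper to check the percent-increase figures cited in the
# report; the underlying figures come from the report itself.

def pct_increase(new: float, old: float) -> int:
    """Percentage increase from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# Third-party bills submitted (millions): FY2003 vs. FY2002
print(pct_increase(10.0, 8.0))   # 25 percent increase

# Third-party collections (millions of dollars)
print(pct_increase(804, 760))    # 6 percent increase over FY2002
print(pct_increase(804, 540))    # 49 percent increase over FY2001
```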
Because we used a case study approach, the results of our study cannot be projected beyond the transactions we reviewed. To determine whether key internal controls for billings were effectively implemented, we discussed billing requirements and procedures with VHA headquarters and medical center personnel. Because billing records were not in a usable format and time constraints did not permit us to put them in a usable format, we could not select a statistical sample. Instead, we made a non-statistical selection of 30 patients from each of the three medical centers' inpatient and outpatient billing records to perform tests to assess compliance with policies and procedures and to determine the number of days to bill third-party insurance companies. To determine whether key internal controls for collections were effectively implemented, we discussed requirements and procedures with VHA headquarters and medical center personnel. At each medical center we visited, we used the same 30 patients chosen for our billing tests to also assess compliance with accounts receivable policies and procedures, including VA Handbook 4800.14, Medical Care Debts (Handbook), and the Accounts Receivable Third-Party Guidebook. We reviewed and used as guides the Standards for Internal Control in the Federal Government and the Internal Control Management and Evaluation Tool. The Comptroller General issued these internal control standards to provide the overall framework for establishing and maintaining internal control. According to these standards, internal control, also referred to as management control, comprises the plans, methods, and procedures used to meet the missions, goals, and objectives of an organization. Internal control also serves as the first line of defense in safeguarding assets and preventing and detecting errors and fraud. 
We performed our work at VA medical centers in Cincinnati, Ohio; Tampa, Florida; and Washington, D.C., and at the VHA’s Chief Business Office in Washington, D.C. We conducted our review from March 2004 through June 2004 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Veterans Affairs or his designee. Written comments were received from the Secretary of Veterans Affairs and are reprinted in appendix I. Although VA has decreased the number of days it takes to bill for patient services and has increased its collections from third-party insurance companies since 2000, problems remain. At the three medical centers we visited, we found continuing weaknesses in the billings and collections processes that impair VA’s ability to maximize the amount of dollars paid by third-party insurance companies. For example, medical centers did not always bill insurance companies in a timely manner. According to medical center officials, timeliness of billing is affected by, among other things, (1) VA’s ability to verify and update a patient’s third-party insurance information, (2) whether physicians and other health care providers properly document the patient’s treatment so a bill can be coded appropriately, (3) the extent of manual intervention to process the bill, and (4) workload. We believe that improvements could be made in each of these areas. Further, the three medical centers we visited did not always pursue collections of accounts receivable in a timely manner or follow up on certain partially paid claims. Weaknesses in VA’s collection activities hamper its ability to collect all monies due to the agency from third-party insurance companies to pay for veterans’ growing demand for care. VA’s current Plan to implement and sustain effective collections operations is in process. However, the Plan has not been fully implemented. 
Therefore, it is too early to determine the extent to which it will address operational problems and increase collections. While VA reported that it has decreased the average number of days it takes to bill for patient services, we found that medical centers could further improve billing timeliness by continuing to address operational problems that slow down the process. These operational problems include, among other things, delays in verifying and updating patient insurance information, incomplete or inaccurate documentation of patient care by health care providers, manual intervention, and workload. VA's billing process cuts across four functional areas, as shown in figure 3. Each phase of the billing process is dependent on the completeness and accuracy of information collected in the prior phases. Breakdowns occurring during any part of the process can affect the timeliness of billings. VA's policies and procedures do not specify the number of days within which a bill must be issued once health care services are rendered. In fiscal year 2003, VA's Business Oversight Board established performance goals that were incorporated into the network and medical directors' performance contracts. The goal for sending a bill within a set number of days was reduced periodically during fiscal year 2004. At the time of our review, the performance goal for billing third-party insurance companies was an average of 50 days from the date of patient discharge. As of the end of the first quarter of fiscal year 2004, the cumulative average days to bill third parties for Tampa, Washington, D.C., and Cincinnati were 73, 69, and 44 days, respectively. At each of the three medical centers visited, we made a non-representative selection of 30 patients billed during the first quarter of fiscal year 2004. In evaluating the timeliness of billing, we used the then-in-effect performance standard of 50 days after patient discharge. 
We recognize that the cumulative billing times for the 90 cases selected do not represent the average days to bill, which VHA uses to measure each medical center's performance. However, cases billed more than 50 days after patient discharge are illustrative of problematic issues that can delay billings. For the 90 cases selected, the number of days to bill at the three medical centers we visited ranged from 5 to 332 days, with almost 30 percent billed after 50 days. A summary of our results is shown in table 1. Promptly invoicing insurance companies for care provided is a sound business practice and should result in improved cash flow for VA. Officials at each of the three medical centers cited verifying and updating patients' third-party insurance information as a continuing impediment to billing third-party insurance companies in a timely manner. They told us that this occurs because, among other reasons, some patients are reluctant to provide insurance information for fear that their insurance premiums will increase, some delay providing insurance information until well after treatment begins, and some do not provide current insurance information. Thus, additional time is required to research and verify the patients' insurance coverage. Medical center officials also told us that incomplete or inaccurate documentation from health care providers continues to cause delays in billing third parties. If the coders do not have sufficient data from the provider to support a bill, the coding process can be delayed, thus hampering timely billing of third-party insurance companies. Further, without complete data on the actual health care services provided, the coders may also miscode the treatment, which could result in lost revenue. Another impediment to timely billing is that the billing process is not fully automated and manual intervention is required. 
For example, in certain cases, the medical diagnosis is transcribed onto a worksheet to be used for coding rather than being electronically transmitted. Additionally, before the coders can begin the coding process, they must first electronically download the listing of potentially billable patients. Then the coders review the electronic medical records and assign diagnostic and procedure codes before a bill is generated. Further, due to system limitations, bills that exceed a certain dollar amount or number of medical procedure codes must be printed and mailed rather than transmitted electronically. For example, in Cincinnati, bills greater than $100,000 or with six or more medical procedure codes must be processed in this manner. Another contributing factor may be the workload levels at the medical centers. During the second quarter of fiscal year 2004, Cincinnati submitted 45,883 bills and had a staff of 13 coders. Concurrently, Tampa submitted 192,407 bills with 16 coders, and Washington, D.C., issued 64,474 bills with 8 coders. VHA data indicated that Cincinnati's average billing time was under 50 days for the quarter and that it had the lowest bill-to-coder ratio. Conversely, Tampa and Washington, D.C., exceeded the 50-day performance goal and had much higher bill-to-coder ratios. Assuming 60 workdays per quarter, we calculated the ratio of bills issued per day to the number of coders, as shown in table 2. We recognize that other factors, such as the number of billable encounters per bill and coder productivity, may affect the billing workload. However, given the wide diversity of the bill-to-coder ratios, staffing may also be a contributing factor affecting days to code and issue bills. Weaknesses in collection activities hamper VA's ability to collect all monies due to the agency from third-party insurance companies for veterans' care. 
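The bill-to-coder calculation described above can be reproduced directly from the reported figures. This is a sketch assuming, as the report states, 60 workdays per quarter; the rounded results are ours, not the published table 2 values.

```python
# Reconstructing the bill-to-coder ratio: bills issued per workday, divided
# by the number of coders. Bill and coder counts come from the report; the
# 60-workday quarter is the report's stated assumption.

WORKDAYS_PER_QUARTER = 60

centers = {
    "Cincinnati": {"bills": 45_883, "coders": 13},
    "Tampa": {"bills": 192_407, "coders": 16},
    "Washington, D.C.": {"bills": 64_474, "coders": 8},
}

for name, c in centers.items():
    bills_per_day = c["bills"] / WORKDAYS_PER_QUARTER
    ratio = bills_per_day / c["coders"]
    print(f"{name}: {ratio:.0f} bills per day per coder")
```

Under these assumptions, Cincinnati's ratio works out to roughly 59, well below Tampa's roughly 200 and Washington, D.C.'s roughly 134, consistent with the report's observation that the center meeting the billing goal also had the lightest per-coder load.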
We found that the three medical centers we visited did not always pursue collections of accounts receivable in a timely manner or follow up on certain partially paid insurance claims. These two factors could negatively affect third-party collections. VA's Handbook sets forth the requirements for collection of third-party accounts receivable. Also, in 2003, VHA's Chief Business Office issued the Accounts Receivable Third-Party Guidebook, which lays out more detailed procedures. Both documents require that once a claim has been sent to the insurance company, staff should follow up on unpaid reimbursable insurance cases as follows:

- The first telephone follow-up is to be initiated within 30 days after the initial bill is generated. All telephone follow-ups are to be documented to include, at a minimum, the name, position, title, and telephone number of the person contacted; the date of contact; an appropriate second follow-up date if payment is not received; and a brief summary of the conversation.
- A second telephone follow-up on unresolved outstanding receivables is to be made on an appropriate (but unspecified) date and documented.
- A third follow-up call is to be made within 14 days of the second contact and documented with a summary of the conversation and an appropriate, but not specified, follow-up date.
- If no payment has been received by the next follow-up date, the case may be referred by the MCCF Coordinator to regional counsel for further action.

We tested compliance with these policies for the same 30 cases selected for our billing tests at each of the three medical centers we visited. Regarding the first follow-up procedure, initial follow-up calls were made within 30 days for only 14, or about 22 percent, of the 64 cases for which billings had not been collected within 30 days. Second follow-up phone calls were not made in a timely manner either. 
We considered 15 days after the initial 30-day follow-up to be an appropriate time frame, since the third follow-up is to be made within 14 days after the second follow-up and cases are to be referred to collection agencies after 60 days. Delays in making second follow-up calls increase the risk that payments will not be collected. Within our selected cases, four second follow-up calls were either made more than 15 days after the first follow-up call or not made at all. These bills had not been paid within 120 days after the bill was sent to the insurance company. Both the first and second follow-up calls require that staff document the contact's name, title, telephone number, and expected follow-up date in the official records. However, we found that staff did not consistently do so. For example, of the 14 cases where a follow-up call was made during the first 30 days after the initial billing, only 7 specified a follow-up date. Entering a follow-up date would serve as a reminder to make the second follow-up call. Further, we found that an unclear collection policy may have contributed to VA's untimely second follow-up efforts. Specifically, VA's Handbook requires that second follow-up telephone calls on unresolved outstanding receivables be made on an "appropriate date," but that date is not specified (i.e., the number of days elapsed since the first contact is not given). Specifying a follow-up date (e.g., 15 days after the first follow-up) or providing criteria for selecting an appropriate follow-up date would clarify this requirement and provide a benchmark against which compliance could be measured. Medical center officials at the three sites we visited told us that staff shortages and a heavy workload contributed to noncompliance with follow-up procedures. For example, Tampa officials told us that the accounts receivable staff typically have over 1,000 cases needing follow-up at any one time. 
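The follow-up timeline above can be expressed as a simple schedule calculator. This is an illustrative sketch: the 30-day first call, 14-day third call, and 60-day referral point reflect the Handbook and Guidebook requirements as described in this report, while the 15-day second-call interval is the benchmark we inferred, since the Handbook leaves that date unspecified.

```python
from datetime import date, timedelta

def followup_schedule(bill_date: date) -> dict:
    """Expected follow-up dates for an unpaid third-party claim.
    The 15-day second-call interval is an inferred benchmark, not a
    Handbook requirement."""
    first = bill_date + timedelta(days=30)      # first call: within 30 days of billing
    second = first + timedelta(days=15)         # second call: inferred 15-day interval
    third = second + timedelta(days=14)         # third call: within 14 days of second
    referral = bill_date + timedelta(days=60)   # unpaid cases referred after 60 days
    return {"first": first, "second": second, "third": third, "referral": referral}

schedule = followup_schedule(date(2004, 1, 5))
print(schedule["first"])     # 2004-02-04
print(schedule["referral"])  # 2004-03-05
```

Note that under these assumptions the third call falls at day 59, only one day before the 60-day referral point, which illustrates why an unspecified second-call date leaves little room for slippage.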
The Cincinnati Medical Care Collection Fund (MCCF) supervisor told us that if two additional staff were available, they would be dedicated to following up on delinquent payments. During our review of the 90 selected cases, we noted wide variances between the amounts billed and amounts received for patients who were eligible for Medicare benefits. For example, in one of our selected cases, VA billed the secondary insurance company for $60,994 but received only $5,205, or about 9 percent. In non-Medicare cases, when the patient has primary and secondary insurance, VHA bills the primary insurance company and, depending on the amount collected, bills the secondary insurer for the residual amount. For Medicare patients who have secondary insurance (i.e., Medigap or Medicare Supplemental insurance), VA is generally entitled to receive payment only from the secondary insurance company. Thus far, VA has not been able to provide post-Medicare payment information (i.e., deductible and co-insurance amounts) to other insurance companies because Medicare is generally not required to pay and thus does not pay VA. Lacking information on what Medicare would pay if required to do so, VA does not know what amount to bill the secondary insurance companies because it does not know the residual amount. In such cases, VA bills the secondary insurance company for the full amount associated with the care provided—the amount that would be reimbursable by Medicare as well as the amount not covered by Medicare. The secondary insurance companies have been using a variety of methodologies for reimbursing VA and some do not pay because they are unable to determine the proper amount of reimbursement. As a result, in certain cases, VA receives very little, if any, reimbursement from the secondary insurance companies for such billings. The Handbook describes procedures for following up on partial payments from insurance companies. 
It states that payment by a third-party insurance company of an amount that is claimed to be the full amount payable under the terms of the applicable insurance policy or other agreement will normally be accepted as payment in full, and the unpaid balance is to be written down to zero. However, if there is a considerable difference between the amount collected and the amount billed, the Handbook directs staff to take various actions to pursue potential additional revenue. At each of the three medical centers, we found that accounts receivable staff typically accepted partial payments from secondary insurance companies as payment in full and adjusted the unpaid balance to zero. Because the medical centers do not have the post-Medicare information needed to pursue collection of the unpaid amounts, VA may be failing to collect millions of dollars when partial payments are accepted as payment in full. VA reported that as of September 2003, the median age of all living veterans was 58 years, with the number of veterans 85 years of age and older totaling nearly 764,000. As these veterans age, the demand for care will increase, as will the number of veterans eligible for Medicare. To be able to offset the cost of care through third-party collections, it will become even more imperative in the coming years for VA to collect the maximum amount possible from secondary insurance companies. VA's current Revenue Action Plan includes 16 actions designed to increase collections by improving and standardizing the collections processes. Several of these actions are aimed at reducing billing times and backlogs, and many have already been implemented. Specifically, medical centers are updating and verifying patients' insurance information and improving health care provider documentation. In addition, hiring contractors to code and bill old cases is reducing backlogs. 
Further, the introduction of performance measures into managers’ performance contracts has provided an incentive for increased billings and collections. In addition to those actions already taken, VA has other initiatives under way such as automating the billing process by implementing the Patient Financial Services System (PFSS) and determining the amounts billable to Medicare secondary insurance companies through the use of an electronic Medicare Remittance Advice. To assist in updating and verifying patients’ insurance information, a problematic issue discussed earlier in our report, each site now has staff dedicated to (1) verify that insurance reported by the veteran is current, (2) determine insurance coverage if the patient does not declare any, (3) acquire pre-certifications of patient admissions, and (4) obtain authorization of procedures from the patient’s insurance company. Additionally, medical centers have taken actions to update demographic information on file, including insurance. These efforts help to reduce insurance denials, produce more accurate bills, and ensure that VA receives reimbursement for services provided. To assist in improving medical documentation, which we reported as a continuing operational issue, VA mandated physician use of the Computerized Patient Record System in December 2001 and reinforced its use through a VHA Directive in May 2003. The coders use the electronic medical records to determine what treatment each patient received and to document the diagnostic codes. In addition, the medical centers have been educating the physicians about the importance of completing the records. To reduce billing backlogs, VHA entered into an agreement with four vendors to code and assist with backlogs. The Washington, D.C. medical center hired a contractor to handle a backlog of 15,000 encounters. The contractor has certified staff for coding and billing and must meet 12 performance measures. 
The revenue officer told us that the backlog was eliminated in May 2004. In addition, in December 2003, VHA was given authority by the Office of Personnel Management to directly hire credentialed coders at industry-compatible salaries. In fiscal year 2003, VHA's Chief Business Officer implemented industry-based performance metrics and reporting capabilities to identify and compare overall VA revenue program performance. Metrics were introduced to measure collections, days to bill, gross days revenue outstanding, and accounts receivable over 90 days. For both network and medical center directors, the metrics and associated performance targets were incorporated into annual performance contracts effective fiscal year 2003. VHA officials attribute much of the decrease in days to bill and the increases in billings and collections to these performance measures. For example, VA reported that nationally the average days to bill insurance companies for the first half of fiscal year 2004 was about 74 days, an improvement from its fiscal year 2000 average of 117 days. However, VHA's average days to bill for that period exceeded the performance goals of 50 days and 47 days for the first and second quarters of fiscal year 2004, respectively. The industry standard is 10 days. In addition to actions already taken, VA's Plan has several other initiatives under way for improving billing times and increasing collections. For example, the PFSS is designed to integrate the health care billing and accounts receivable software systems to replace VA's current legacy system. The system is intended to increase staff efficiency through a streamlined, standardized, re-engineered process; create more accurate bills; and shorten bill lag times through automation. VA officials believe that this initiative, when implemented, will reduce the manual intervention noted earlier in our report as a reason for delayed billings. However, implementation is behind schedule. 
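The days-to-bill trend cited above reduces to straightforward arithmetic; the sketch below simply restates the reported national figures against the quarterly goal and the industry standard.

```python
# National average days to bill, from the report: FY2000 versus the first
# half of FY2004, compared against VHA's quarterly goal and the industry
# standard. The percentage calculation is ours.

fy2000_avg_days = 117
fy2004_avg_days = 74
first_quarter_goal = 50
industry_standard = 10

improvement = round((fy2000_avg_days - fy2004_avg_days) / fy2000_avg_days * 100)
print(improvement)                            # 37 percent reduction since FY2000
print(fy2004_avg_days - first_quarter_goal)   # 24 days over the 50-day goal
print(fy2004_avg_days - industry_standard)    # 64 days over the industry standard
```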
Another effort under way, the electronic Medicare Remittance Advice project, helps address obtaining allowable payments from secondary insurance companies, rather than accepting partial payments that are significantly lower than billed amounts as payment in full. This project involves the electronic submission of claims to a fiscal intermediary to receive remittance advice on how Medicare would have paid the claim if it were legally bound to pay VA for care. The remittance advice, which will be attached to VA health care claims, will enable secondary insurance companies to determine the correct amount to reimburse VA. Further, VA believes it will be able to more accurately reflect the amount of its outstanding receivables and be in a strengthened position to follow up on partial payments that it deems incorrect. The completion date for this project was November 2003, but it has been delayed due to software issues. VA officials told us they plan to roll out the new system beginning in August 2004. Although the Plan provides another step forward in potentially improving operations and increasing collections, it is still in progress, and many of the actions are not scheduled for implementation until at least fiscal year 2005. Therefore, it is too early to determine whether the Plan will successfully address operational problems and increase collections when fully implemented. The growing demands for veterans' health care increase VA's responsibility to supplement, as much as possible, its medical care appropriations with collections from insurance companies for treatment of non-service-connected conditions. VA is making progress in developing and implementing procedures to identify patients who can be billed for services, to bill for services correctly and in a timely manner, and to pursue collections. VA's Plan to further improve billing and collection operations, however, is still a work in progress and could benefit from the performance of a workload analysis. 
In the interim, strengthening internal controls, such as clarifying billing and claims follow-up procedures and consistently implementing policies and procedures, could help reduce billing times and increase collections. Even assuming that its Plan works as contemplated, these additional controls are needed to maximize VA revenues to enhance its medical care budget. We are making five recommendations to facilitate more timely billings and improve collection operations. The Secretary of Veterans Affairs should direct the Under Secretary for Health to:

- Perform a workload analysis of the medical centers' coding and billing.
- Based on the workload analysis, consider making the necessary resource adjustments.
- Reinforce to accounts receivable staff that they should perform the first follow-up on unpaid claims within 30 days of the billing date, as directed by VA Handbook 4800.14, Medical Care Debts, and establish procedures for monitoring compliance.
- Reinforce the requirement for accounts receivable staff to enter the insurance company contact's name, title, and phone number and the follow-up date when making follow-up phone calls.
- Augment VA Handbook 4800.14, Medical Care Debts, by either specifying a date or providing instructions for determining an appropriate date for conducting second follow-up calls to insurance companies.

VA provided written comments on a draft of this report. In its response, VA agreed with our conclusions and recommendations and reported that it is developing an action plan to implement them. Additionally, VA's response stated that VHA is pursuing a number of strategies to improve overall performance toward achieving industry benchmarks. VA believes that the development of the Patient Financial Services System will address current billing system limitations and manual intervention and that the Medicare Remittance Advice project will assist VHA in pursuing partially paid claims. 
Also, in its response letter, VA included some technical comments that we have addressed in finalizing our report where appropriate. VA’s written comments are presented in appendix I. As arranged with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Veterans Affairs, the Under Secretary for Health, interested congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-6906 or williamsm1@gao.gov; or Alana Stanfield, Assistant Director, at (202) 512-3197 or stanfielda@gao.gov. Major contributors to this report are acknowledged in appendix II. In addition to those named above, the following individuals made important contributions to this report: Teressa Broadie-Gardner, Lisa Crye, Jeffrey Isaacs, Sharon Loftin, Donell Ries, and Patricia Summers. The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. 
In the face of growing demand for veterans' health care, GAO and the Department of Veterans Affairs Office of Inspector General (OIG) have raised concerns about the Veterans Health Administration's (VHA) ability to maximize its third-party collections to supplement its medical care appropriation. GAO has testified that inadequate patient intake procedures, insufficient documentation by physicians, a shortage of qualified billing coders, and insufficient automation diminished VA's collections. In turn, the OIG reported that VA missed opportunities to bill, had billing backlogs, and did inadequate follow-up on bills. While VA has made improvements in these areas, GAO was asked to review internal control activities over third-party billings and collections at selected medical centers to assess whether they were designed and implemented effectively. VA has continued to take actions to reduce billing times and increase third-party collections. Collections of third-party payments have increased from $540 million in fiscal year 2001 to $804 million in fiscal year 2003. However, at the three medical centers visited, GAO found continuing weaknesses in the billings and collections processes that impair VA's ability to maximize the amount of dollars paid by third-party insurance companies. For example, the three medical centers did not always bill insurance companies in a timely manner. Medical center officials stated that the inability to verify and update patients' third-party insurance, inadequate documentation to support billings, manual processes, and workload continued to affect billing timeliness. The detailed audit work at the three facilities GAO visited also revealed inconsistent compliance with follow-up procedures for collections. For example, collections were not always pursued in a timely manner and partial payments were accepted as payments in full, particularly for Medicare secondary insurance companies, rather than pursuing additional collections.
VA's current Revenue Action Plan (Plan) includes 16 actions designed to increase collections by improving and standardizing collections processes. Several of these actions are aimed at reducing billing times and backlogs. Specifically, medical centers are updating and verifying patients' insurance information and improving health care provider documentation. Further, hiring contractors to code and bill old cases is reducing backlogs. In addition to actions taken, VA has several other initiatives underway. For example, VA is taking action to enable Medicare secondary insurance companies to determine the correct reimbursement amount, which will strengthen VA's position to follow up on partial payments that it deems incorrect. Although implementation of the Plan could improve VA's operations and increase collections, many of its actions will not be completed until at least fiscal year 2005. As a result, it is too early to determine the extent to which actions in the Plan will address operational problems and increase collections.
Investments in IT can enrich people’s lives and improve organizational performance. During the last two decades, the Internet has matured from being a means for academics and scientists to communicate with each other to a national resource where citizens can interact with their government in many ways, such as by receiving services, supplying and obtaining information, asking questions, and providing comments on proposed rules. However, while these investments have the potential to improve lives and organizations, some federally funded IT projects can—and have—become risky, costly, unproductive mistakes. We have previously testified that the federal government has spent billions of dollars on failed and troubled IT investments, such as the Office of Personnel Management’s Retirement Systems Modernization program, which was canceled in February 2011 after spending approximately $231 million on the agency’s third attempt to automate the processing of federal employee retirement claims; the tri-agency National Polar-orbiting Operational Environmental Satellite System, which was stopped in February 2010 by the White House’s Office of Science and Technology Policy after the program spent 16 years and almost $5 billion; the Department of Veterans Affairs’ Scheduling Replacement Project, which was terminated in September 2009 after spending an estimated $127 million over 9 years; and the Department of Health and Human Services’ (HHS) Healthcare.gov website and its supporting systems, which were to facilitate the establishment of a health insurance marketplace by January 2014 but encountered significant cost increases, schedule slips, and delayed functionality. In a series of reports we identified numerous planning, oversight, security, and system development challenges faced by this program and made recommendations to address them.
In light of these failures and other challenges, last year we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. 18F and USDS were formed in 2014 to help address the federal government’s troubled IT efforts. Both programs have similar missions of improving public-facing federal digital services. 18F was created in March 2014 by GSA with the mission of transforming the way the federal government builds and buys digital services. Agencies across the federal government have access to 18F services. Work is largely initiated by agencies seeking assistance from 18F, and the program then decides whether and how it will provide assistance. According to GSA, 18F seeks to accomplish its mission by providing a team of expert designers, developers, technologists, researchers, and product specialists to help rapidly deploy tools and online services that are reusable, less costly, and easier for people and businesses to use. In addition, 18F has several guiding principles, including the use of open source development, user-centered design, and agile software development. 18F is an office within GSA’s Technology Transformation Service, which was formed in May 2016. 18F is led by the Deputy Commissioner for the Technology Transformation Service, who reports to the service’s Commissioner. Prior to May 2016, 18F was located within the Office of Citizen Services and Innovative Technologies and reported to the Associate Administrator for Citizen Services and Innovative Technologies. In March 2016 GSA created a new organizational structure for 18F that centers on five business units. Custom Partner Solutions. Provides agencies with custom application solutions. Products and Platforms. Provides agencies with access to tools that address common government-wide needs. Transformation Services.
Aims to improve how agencies acquire and manage IT by providing them with consulting services, including new management models, modern software development practices, and hiring processes. Acquisition Services. Provides acquisition services and solutions to support digital service delivery, including access to vendors specializing in agile software development, and request for proposal development consultation. Learn. Provides agencies with education, workshops, outreach, and communication tools on developing and managing digital services. To provide the products and services offered by each business unit, 18F relied on 173 staff as of March 2016. The staff are assigned to different projects that are managed by the business units. According to 18F officials, the program used special hiring authorities for the vast majority of its staff: Schedule A excepted service authorities were used to hire 162 staff. These authorities permit the appointment of qualified personnel without the use of a competitive examination process. GSA has appointed its staff to terms that are not to exceed 2 years. According to the Director of the 18F Talent division, after the initial appointment has ended, GSA has the option of appointing staff to an additional term not to exceed 2 years. GSA funds 18F through the Acquisition Services Fund—a revolving fund, which operates on the revenue generated from its business units rather than an appropriation received from Congress. The Federal Acquisition Service is responsible for managing this fund and uses it to invest in the development of 18F products and services that will be used by other organizations. 18F is to recover costs through the Acquisition Services Fund reimbursement authority for work related to acquisitions and the Economy Act reimbursement authority for all other projects.
According to the memorandum of agreement between 18F and the Federal Acquisition Service, 18F, like all programs funded by the Acquisition Services Fund, is required to have a plan to achieve full cost recovery. In order to recover its costs, 18F is to establish interagency agreements with partner agencies and charge them for actual time and material costs, as well as a fixed overhead amount. Table 1 describes 18F’s revenue, expenses, and net revenue for fiscal years 2014 and 2015. Table 2 describes 18F’s projected revenue, expenses, and net revenue for fiscal years 2016 through 2019. As shown in table 2, according to its projections, 18F plans to generate revenue that meets or exceeds operating expenses and cost of goods sold beginning in fiscal year 2019. In May 2016, the GSA Inspector General reported on an information security weakness pertaining to 18F. Specifically, according to the report, 18F misconfigured a messaging and collaboration application, which resulted in the potential exposure of personally identifiable information (PII). 18F officials told us that, based on the preliminary results of their ongoing review, information such as individuals’ first names, last names, e-mail addresses, and phone numbers was made available on the messaging and collaboration platform’s databases, which are managed by that application’s vendor. Those officials also stated that, based on the preliminary results of their ongoing review, more sensitive PII, such as Social Security numbers and protected health information, was not exposed. They added that they are continuing a detailed review, in coordination with the GSA IT organization, to confirm that more sensitive PII was not made available. According to the Administration, in 2013 it initiated an effort that brought together a group of digital and technology experts from the private sector that helped fix Healthcare.gov.
In an effort to apply similar resources to additional projects, in August 2014 the Administration announced the launch of USDS, to be led by an Administrator and Deputy Federal CIO who reports to the Federal CIO. According to OMB, USDS’s mission is to transform the most important digital services for citizens. USDS selects which projects it will apply resources to and generally initiates its efforts with agencies. To accomplish its mission, USDS aims to recruit private sector experts (e.g., IT engineers and designers) and partner them with government agencies. With the help of these experts, OMB states that USDS applies best practices in product design and engineering to improve the usefulness, user experience, and reliability of the most important public-facing federal digital services. As of November 2015, USDS staff totaled about 98 individuals. Similar to 18F, USDS assigns individuals directly to projects aimed at achieving its mission. USDS has used special hiring authorities for the vast majority of its staff. Specifically: Schedule A excepted service. According to USDS, as of November 2015, 52 USDS staff members were hired using the Schedule A excepted service hiring authority. According to the USDS Administrator, appointments made using this authority are not to exceed 2 years. At the end of that period, staff can be appointed for an additional term of no more than 2 years. Intermittent consultants. According to USDS, as of November 2015, 39 USDS staff members were intermittent consultants—that is, individuals hired through a noncompetitive process to serve as consultants on an intermittent basis or without a regular tour of duty. The USDS Administrator explained that some of these staff are eventually converted to temporary appointments under the Schedule A authority.
According to its Administrator, USDS does not generally make permanent appointments for its staff because term appointments allow the program to continuously bring in new staff and ensure that its ideas continually evolve. USDS reported spending $318,778 during fiscal year 2014 and approximately $4.7 million during fiscal year 2015. For fiscal year 2016, USDS plans to spend approximately $14 million, and the President’s fiscal year 2017 budget estimated obligations of $18 million for USDS. In an effort to make improvements to critical IT services throughout the federal government, the President’s Budget for fiscal year 2016 proposed funding for the 24 Chief Financial Officers Act agencies, as well as the National Archives and Records Administration, to establish digital services teams. USDS policy calls for these agencies to, among other things, hire or designate an executive for managing their digital services teams. According to USDS policy, the digital service team leader is to report directly to the head of the agency or the deputy. Additionally, USDS has established a hiring pipeline for digital service experts—that is, a unified process managed by USDS for accepting and reviewing applications, performing initial interviews, and providing agencies with candidates for their digital service teams. According to OMB, before using this service, agencies must agree to a charter with the USDS Administrator. Over the last three decades, several laws have been enacted to assist federal agencies in managing IT investments. For example, the Paperwork Reduction Act of 1995 requires that OMB develop and oversee policies, principles, standards, and guidelines for federal agency IT functions, including periodic evaluations of major information systems. In addition, the Clinger-Cohen Act of 1996, among other things, requires agency heads to appoint CIOs and specifies many of their responsibilities.
With regard to IT management, CIOs are responsible for implementing and enforcing applicable government-wide and agency IT management principles, standards, and guidelines; assuming responsibility and accountability for IT investments; and monitoring the performance of IT programs and advising the agency head whether to continue, modify, or terminate such programs. Most recently, in December 2014, IT reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act, or FITARA) was enacted, which required most major executive branch agencies to ensure that the CIO had a significant role in the decision process for IT budgeting, as well as the management, governance, and oversight processes related to IT. The law also required that CIOs review and approve (1) all contracts for IT services prior to executing them and (2) the appointment of any other employee with the title of CIO, or who functions in the capacity of a CIO, for any component organization within the agency. OMB also released guidance in June 2015 that reinforces the importance of agency CIOs and describes how agencies are to implement the law. OMB plays a key role in helping federal agencies address these laws and manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. Within OMB, the Office of E-Government and Information Technology, headed by the Federal CIO, directs the policy and strategic planning of federal IT investments and is responsible for oversight of federal technology spending. As part of our ongoing work, we determined that 18F and USDS have provided a variety of development and consulting services to agencies to support their technology efforts. Specifically, between March 2014 and August 2015, 18F staff helped 18 agencies with 32 projects and generally provided six types of services to the agencies, the majority of which related to development work.
In addition, between August 2014 and August 2015, USDS provided assistance on 13 projects at 11 agencies and provided seven types of consulting services. Further, agencies were generally satisfied with the services they received from 18F and USDS. Specifically, of the 26 18F survey respondents, 23 were very satisfied or moderately satisfied and 3 were moderately dissatisfied. For USDS, all 9 survey respondents were very satisfied or moderately satisfied. Between March 2014 and August 2015, GSA’s 18F staff helped 18 agencies with 32 projects, and generally provided services relating to its five business units: Custom Partner Solutions, Products and Platforms, Transformation Services, Acquisition Services, and Learn. In addition, 18F also provided agency digital service team candidate qualification reviews in support of USDS. Custom Partner Solutions. 18F helped 11 agencies with a total of 19 projects relating to developing custom software solutions. Out of the 19 projects, 12 were related to website design and development. For example, regarding GSA’s Pulse project—a website that displays data about the extent to which federal websites are adopting best practices, such as hypertext transfer protocol over Secure Sockets Layer (SSL)/Transport Layer Security (TLS) (HTTPS)—18F designed, developed, and delivered the first iteration of Pulse within 6 weeks of the project kick-off. According to the GSA office responsible for managing the project, the first iteration has led to positive outcomes for government-wide adoption of best practices; for example, between June 2015 and January 2016, the percentage of federal websites using HTTPS increased from 27 percent to 38 percent.
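The adoption percentage reported above is the kind of aggregate a dashboard like Pulse derives from domain scan results. A minimal sketch of that aggregation, assuming hypothetical scan data (the domain names and results below are illustrative, not actual Pulse data or code):

```python
# Hypothetical sketch: given scan results mapping each domain to whether
# it enforces HTTPS, compute the adoption percentage that a dashboard
# such as Pulse would display.

def https_adoption_rate(scan_results):
    """Return the percentage of scanned domains that enforce HTTPS."""
    if not scan_results:
        return 0.0
    adopters = sum(1 for enforces in scan_results.values() if enforces)
    return 100.0 * adopters / len(scan_results)

# Illustrative scan of four fictional .gov domains.
scans = {
    "agency-a.gov": True,
    "agency-b.gov": False,
    "agency-c.gov": True,
    "agency-d.gov": False,
}
print(https_adoption_rate(scans))  # 50.0
```

Tracking this single number over repeated scans is what makes a jump such as 27 percent to 38 percent visible at a glance.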
As another example, officials from the Department of Education’s college choice project stated that 18F helped develop the College Scorecard website (https://collegescorecard.ed.gov/), which the public can use to search among colleges to find schools that meet their needs (e.g., degrees offered, location, size, graduation rate, average salary after graduation). 18F also helped two agencies, HHS and the Department of Defense, on two projects to develop application programming interfaces—sets of routines, protocols, and tools for building software applications that specify how software components should interact. Acquisition Services. 18F helped seven agencies on seven projects relating to acquisition services consulting. For example, 18F provided the Department of State’s Bureau of International Information Programs with cloud computing services offered under a GSA blanket purchase agreement (BPA)—specifically, cloud management services (e.g., developers, testing and quality assurance, cloud architect) and infrastructure-as-a-service. According to the Department of State, the department was able to deploy its instance of the infrastructure service only 1 month after it executed an interagency agreement with 18F. According to Social Security Administration officials, 18F helped the agency to incorporate agile software development practices into its requests for proposals for its Disability Case Processing System. Learn. 18F provided services to four agencies on four projects regarding training, such as educating agency officials on agile software development. For example, 18F conducted training workshops on agile software development techniques with the Social Security Administration and Small Business Administration. In addition, according to the Department of Labor’s Wage and Hour Division officials, 18F conducted a 3-day workshop on IT modernization. Transformation Services.
18F assisted two agencies on two projects to help acquire the people, processes, and technology needed to successfully deliver digital services. For example, 18F assisted the Environmental Protection Agency on an agency-wide technology transformation. According to an official within the office of the CIO, 18F assisted the agency with e-Manifest—a system used to track toxic waste shipments. The official noted that 18F provided user-centered design, agile coaching, prototype development services, and agile and modular acquisition services. Further, the official stated that 18F helped turn around the project and significantly decreased the time of delivery for e-Manifest. Products and Platforms. 18F helped two agencies on two projects related to developing software solutions that can potentially be reused at other federal agencies. For example, according to GSA officials responsible for managing GSA’s Communicart project, 18F provided the agency with an e-mail-based tool for approving office supply purchases. Agency digital service team candidate qualification review. 18F worked with USDS to recruit and hire team members for agency digital service teams. According to 18F officials, it provided USDS with subject matter experts to review qualifications of candidates for agency digital service teams. Of the 32 projects, 6 are associated with major IT investments. Cumulatively, the federal government plans to spend $853 million on these investments in fiscal year 2016. Additionally, risk evaluations performed by CIOs that were obtained from the IT Dashboard showed that three of these investments were rated as low or moderately low risk and three investments were rated medium risk. Table 3 describes the associated investments, including their primary functional areas, planned fiscal year 2016 spending, and CIO rating as of May 2016. 
18F is also developing products and services—including an agile delivery service blanket purchase agreement (BPA), cloud.gov, and a shared authentication platform: Agile delivery service BPA. 18F established this project in order to support its need for agile delivery services, including agile software development. In August and September 2015, GSA awarded BPAs to 17 vendors. The BPAs are for 5 years and allow GSA to place orders against them for up to 13 specific labor categories relating to agile software development (e.g., product manager, backend web developer, agile coach) at fixed unit prices. The BPAs do not obligate any funds; rather, they enable participating vendors to compete for follow-on task orders from GSA. In cases where 18F determines that it should use the agile BPA to provide services to partner agencies, GSA anticipates that 18F will work with that agency to develop a request for quotations and the other documents needed for a competition with agile BPA vendors. In March 2016 18F released its first request for quotations under the agile BPA for a task order relating to building a web-based dashboard that would describe the status of vendors in the certification process for FedRAMP—a government-wide program, managed by GSA, to provide joint authorizations and continuous security monitoring services for cloud computing services for all federal agencies. GSA anticipates that the process from releasing a request for quotations to task order issuance will typically take 4 to 8 weeks. The initial BPAs were established under the first of three anticipated award pools—all of which are part of the “alpha” component of the Agile BPA project. 18F officials stated that they planned to establish BPAs for the other two pools in June 2016. They also anticipate a future beta version of the project that could potentially allow federal agencies beyond 18F to issue task orders directly to vendors.
Officials stated that they expect to have a plan for the next steps of the beta version of this project by December 2017. 18F officials have also expressed interest in creating additional marketplaces, such as those relating to data management, developer productivity tools, cybersecurity, and health IT. As of March 2016, 18F did not have time frames for when it planned to develop these additional marketplaces. Cloud.gov. 18F also developed cloud.gov, an open source platform-as-a-service that agencies can use to manage and deploy applications. 18F initially built cloud.gov in order to enable the group to use applications it developed for partner agencies. In creating the service, 18F decided to offer it to other agencies because, according to 18F officials, cloud.gov offers a developer-friendly, secure platform, with tools that agencies can use to accelerate the process of assessing information security controls and authorizing systems to operate. According to 18F, the goal of cloud.gov is to provide government developers and their contractor partners the ability to easily deploy systems to a cloud infrastructure with better efficiency, effectiveness, and security than current alternatives. According to a roadmap for cloud.gov, 18F plans to receive full FedRAMP Joint Authorization Board approval for this service by August 2016. Once available, the group anticipates requiring agencies to pay for this service through an interagency agreement with 18F. Shared authentication platform. In May 2016, 18F announced that it was initiating an effort to create a platform for users who need to log into federal websites for government services. According to 18F, this system is designed to be each citizen’s “one account” with the government and allow the public to verify an identity, log into government websites, and if necessary, recover an account.
As of May 2016, 18F plans to conduct prototyping activities through September 2016 and did not have plans beyond that time frame. In addition to developing future products and services, 18F created a variety of guides and standards for use internally as well as by agency digital service teams. These guides address topics such as accessibility, application programming interfaces, and agile software development. From August 2014 through August 2015, USDS provided assistance on 13 projects across 11 agencies. The group generally provided seven types of consulting services: quality assurance, problem identification and recommendations, website consultation, system stabilization, information security assessment, software engineering, and data management. Quality assurance. Three of the 13 projects related to providing quality assurance services. For example, regarding the Social Security Administration’s Disability Case Processing System, USDS reviewed the quality of the software and made recommendations that, according to the agency, resulted in cost savings. Additionally, for the Departments of Veterans Affairs and Defense Service Treatment Record project, USDS provided engineers who identified and resolved errors in the process of exchanging records between the two departments, according to the Department of Veterans Affairs. Further, for the HHS Healthcare.gov system, the group performed services aimed at optimizing the reliability of the system, according to HHS. Problem identification and recommendations. USDS identified problems and made recommendations for three projects. For all three projects, it performed a discovery sprint—a quick (typically 2-week) review of an agency’s challenges, which is to culminate in a clear understanding of the problems and recommendations for how to address the issues.
For example, it performed a discovery sprint for the Department of the Treasury Internal Revenue Service that focused on three areas: authentication of taxpayers, modernizing systems through event-driven architecture, and redesigning the agency’s website. USDS delivered a report to the Internal Revenue Service with recommendations and also suggested that work initially focus on taxpayer authentication. Consistent with these recommendations, the group and the agency decided to initially focus on authentication, including reopening the online application Get Transcript. For the Department of Justice Federal Bureau of Investigation’s National Incident-Based Reporting System, according to USDS, the program performed a discovery sprint and made several recommendations for accelerating deployment of the system. Website consultation. USDS provided consultation services for three agency website projects. For example, for the Office of the U.S. Trade Representative’s Trans-Pacific Partnership Trade Agreements website, USDS provided website design advice and confirmed that the agency had the necessary scalability to support the number of anticipated visitors. Additionally, it consulted with the Office of Personnel Management (OPM) on the design, implementation, and development of a website for providing information on reported data breaches. System stabilization. For the Department of State’s Consular Consolidated Database, according to USDS, it helped stabilize the system and return it to operational service after a multi-week outage in June 2015. Information security assessment. USDS helped with an information security assessment regarding Electronic Questionnaires for Investigations Processing, which encompasses the electronic applications used to process federal background check investigations. Software engineering. For the Department of Homeland Security U.S.
Citizenship and Immigration Services Transformation project, USDS’s software engineering advisors provided guidance on private sector best practices in delivering modern digital services. According to the department, the group’s work has supported accomplishments such as increasing the frequency of software releases and improving adoption of agile development best practices. Data management. For the Department of Homeland Security Office of Immigration Statistics, USDS helped to develop monthly reports on immigration enforcement priority statistics. According to the department, USDS supported the development of processes for obtaining data from other offices within the department and generating the monthly reports. According to the department, after 7 weeks of working with USDS, it was able to develop a proof of concept that reduced the report generation process from a month to 1 day. Seven of the 13 projects are associated with major IT investments. Cumulatively, the federal government plans to spend over $1.24 billion on these investments in fiscal year 2016. Three investments were rated by their CIOs as low or moderately low risk and four investments were rated as being medium risk. Table 4 describes the associated investments, including their primary functional areas, planned fiscal year 2016 spending, and CIO rating as of May 2016. In addition to helping agencies improve IT services, USDS has developed guidance for agencies. For example, it developed the Digital Services Playbook to provide government-wide recommendations on practices for building digital services. The group also created the TechFAR Handbook to explain how agencies can use the Digital Services Playbook in ways that are consistent with the Federal Acquisition Regulation. Further, USDS, in collaboration with 18F, developed the draft version of the U.S. Web Design Standards, which includes a visual style guide and a collection of common user interface components.
With this guide, USDS aims to improve government website consistency and accessibility. In addition to developing guidance, USDS, in collaboration with OMB’s Office of Federal Procurement Policy, used challenge.gov to incentivize the public to create a digital service training program for federal contract professionals. The challenge winner received $250,000 to develop and pilot a training program. Additionally, the Deputy Administrator for USDS stated that 30 federal contract professionals from more than 10 agencies completed this pilot program in March 2016. According to OMB, the program is being revised and transitioned to the Federal Acquisition Institute, where it will be included as part of a certification for digital service contracting officers. In response to a satisfaction survey we administered to agency managers of selected 18F and USDS projects, a majority of managers were satisfied with the services they received from the groups. Specifically, the average score for services provided by 18F was 4.38 (on a 5-point satisfaction scale, where 1 is very dissatisfied and 5 is very satisfied) and the average score for the services provided by USDS was 4.67. Table 5 describes the survey results for 18F and USDS. In addition to providing scores, the survey respondents also provided written comments. Regarding 18F, five factors were cited by two or more respondents as contributing to their satisfaction with the services the program provided: delivering quality products and services, providing good customer service, completing tasks in a timely manner, employing staff with valuable knowledge and skills, and providing valuable education to agencies. For example, one respondent stated that 18F has an expert staff that helped the team understand agile software development and incorporate user-centered design into the agency’s development process. 
With respect to USDS, four factors were cited by two or more respondents as contributing to their satisfaction with its services: delivering quality services, providing good customer service, completing tasks in a timely manner, and employing staff with valuable knowledge and skills. For instance, one respondent stated that USDS responded to the agency's request in a matter of hours, quickly developed an understanding of the agency's IT system, and pushed to improve the system, even in areas beyond the scope of USDS's responsibility. Although the majority of respondents were satisfied, a minority provided written comments describing their dissatisfaction with services provided by 18F. For example, six respondents cited poor customer service, four respondents cited higher than expected costs, and one respondent stated that 18F's use of open source code may not meet the agency's information security requirements. In a written response to these comments, 18F stated that it has received a variety of feedback from its partners and has modified and updated its processes continuously over the past 2 years. For example, with respect to higher than expected costs, 18F stated that project costs sometimes needed to be adjusted mid-project to address, among other things, higher than expected infrastructure usage or unexpected delays. To address this issue, 18F stated that it uses the assistance of subject matter experts to estimate project costs and wrote a guide to assist with, among other things, better managing the budgets of ongoing projects. Regarding 18F's use of open source code, it stated that it has worked with its partners to discuss the use of open source software and information security practices. To assess actual results, prioritize limited resources, and ensure that the most critical projects receive attention, entities that provide IT services, such as USDS and 18F, should establish and implement the following key practices. 
Define outcome-oriented goals and measure performance. Our previous work and federal law stress the importance of focusing on outcome-oriented goals and performance measures to assess the actual results, effects, or impact of a program or activity compared to its intended purpose. Goals should be used to elaborate on a program's mission statement and should be aligned with performance measures. In turn, performance measures should be tied to program goals and demonstrate the degree to which the desired results were achieved. To do so, performance measures should have targets to help assess whether goals were achieved by comparing projected performance and actual results. Finally, goals and performance measures should be outcome-oriented—that is, they should address the results of products and services. Establish and implement procedures for prioritizing IT projects. We have reported that establishing and implementing procedures, including criteria, for prioritizing projects can help organizations consistently select projects based on their contributions to the strategic goals of the organization. Doing so will better position agencies to effectively prioritize projects and use the best mix of limited resources to move toward their goals. In our draft report, we determined that 18F has developed several outcome-oriented goals, performance measures, and procedures for prioritizing projects, which it has largely implemented. However, not all of its goals are outcome-oriented and it has not yet measured program performance. Define Outcome-Oriented Goals and Measure Performance At the conclusion of our review in May 2016, 18F provided 5 goals and 17 associated performance measures that the organization aims to achieve by September 2016 (see table 6). To 18F's credit, several of its goals and performance measures appear to be outcome-oriented. 
For example, the goal of delivering two government-wide platform services and the associated performance measures are outcome-oriented in that they address results—that is, delivering services to partner agencies. However, not all of the goals and performance measures appear to be outcome-oriented. For example, the goal of growing 18F to 215 staff while sustaining a healthy culture and its associated measure of hiring 47 staff do not focus on results of products or services. Further, not all of the performance measures have targets. For example, seven of the performance measures state that 18F will establish performance indicators, but 18F has yet to do so. Moreover, 18F does not have goals and associated measures that describe how it plans to achieve its mission after September 2016. In addition, although 18F is required to have a plan to achieve full cost recovery, it has yet to recover costs and its projections for when this will occur have slipped over time. Specifically, in June 2015, 18F projected that it would fully recover its costs for an entire fiscal year beginning in 2016; however, in May 2016, 18F provided revised projections indicating that it would recover costs beginning in fiscal year 2019. Those projections also indicated that, in the worst case, it would not do so through 2022, the final year of its projections. Establishing performance measures and targets that are tied to achieving full cost recovery would help management gauge whether the program is on track to meet its projections. However, 18F has not established such performance measures and targets. Finally, 18F has yet to fully assess the actual results of its activities. Specifically, the group has not assessed its performance in accordance with the 17 performance measures it developed. 18F’s then-parent organization assessed its own performance quarterly beginning in the 4th quarter of fiscal year 2015, including for measures that 18F was responsible for. 
However, this review process did not include or make reference to the 17 measures developed to gauge 18F's performance, and thus does not provide insight into how well 18F is achieving its own mission. In a written response, GSA stated that 18F performance is measured as part of the Technology Transformation Service's goals and measures and that these goals and measures should form the basis for our review. However, the Technology Transformation Service's goals and measures do not describe how GSA aims to achieve the specific mission of 18F. Until it establishes goals and performance measures beyond September 2016, ensures that all of its goals and performance measures are outcome-oriented, and ensures that its performance measures have targets, 18F will not have a clear definition of what it wants to accomplish. Additionally, without developing performance measures and targets tied to achieving full cost recovery, GSA will lack a fully defined approach to begin recovering all costs in fiscal year 2019. Further, until 18F fully measures actual results, it will not be positioned to assess the status of its activities and determine the areas that need improvement. Establish and Implement Procedures for Prioritizing IT Projects 18F has developed procedures, including criteria, for prioritizing projects and largely implemented its procedures. Specifically, according to the Director of Business Strategy, potential projects are discussed during weekly intake meetings. As part of these meetings, 18F discusses project decision documents, which outline the business, technical, and design elements, as well as the schedule, scope, and resources needed to fulfill the client's needs. 
Using these documents, 18F determines whether proposed projects meet, among other things, the following criteria: (1) the project is aligned with the products and services offered by 18F, (2) it can be completed in a time frame that meets the agency's needs and at a cost that fits the agency's budget, and (3) the project has government transformation potential (e.g., impact on the public, cost savings). These documents are used by the business unit leads to make a final decision about whether to accept the projects. 18F has largely implemented its procedures. To its credit, with respect to the 14 projects that 18F selected since establishing its prioritization and selection process, 18F developed a decision document for 12 of the 14 projects. However, 18F did not develop a decision document for the 2 remaining projects—the Nuclear Regulatory Commission Master Data Management project and GSA's labs.usa.gov project. With respect to the Nuclear Regulatory Commission Master Data Management project, 18F officials explained that this project only required staff from one division; as such, that division was able to independently prioritize and select this project. Additionally, regarding the GSA labs.usa.gov project, 18F officials said the Associate Administrator for the Office of Citizen Services and Innovative Technologies directed 18F to provide assistance. If 18F consistently follows its process for prioritizing projects, it will be better positioned to apply resources to IT projects with the greatest need of improvement. As part of our ongoing work, we determined that while USDS has developed a process for prioritizing projects and program goals, it has not fully implemented important program management practices. 
Define Outcome-Oriented Goals and Measure Performance In response to our inquiry, in November 2015 USDS developed four goals to be achieved by December 2017: (1) recruit and place over 200 digital service experts in strategic roles at agencies and cultivate a continually growing pipeline of quality technical talent through USDS, (2) measurably improve five to eight of the government's most important services, (3) begin the implementation of at least one outstanding common platform, and (4) increase the quality and quantity of technical vendors working with government and cultivate better buyers within government. Additionally, USDS established a performance measure with a target for one of its goals. Specifically, for its first goal, it plans to measure the extent to which it hires 200 digital service experts by December 2017. To its credit, several of the goals appear to be outcome-oriented. For example, improving five to eight services is outcome-oriented in that it addresses results. However, USDS has not established performance measures or targets for its other goals. In addition, the program's first goal—recruit and place over 200 digital service experts in strategic roles at agencies and cultivate a continually growing pipeline of quality technical talent through USDS—does not appear to be outcome-oriented. Further, USDS has only measured actual results for one of its goals. Specifically, for the goal of placing digital service experts at agencies, as of May 2016, USDS officials stated that they had placed 152 digital service experts. However, USDS has not measured actual results for the other three goals. USDS officials provided examples of how they informally measure performance for the other three goals. 
For example, for the goal of measurably improving five to eight of the government's most important services, the USDS Administrator stated that approximately 1 million visitors viewed the Department of Education's College Scorecard website in the initial days after it was deployed. However, USDS has not documented these measures or the associated results to date. Until USDS ensures that all of its goals are outcome-oriented and establishes performance measures and targets for each goal, it will be difficult to hold the program accountable for results. Additionally, without an assessment of actual results, it is unclear what impact USDS's actions are having relative to its mission and whether investments in agency digital service teams are justified. Establish and Implement Procedures for Prioritizing Projects USDS has developed procedures and criteria for prioritizing projects. To identify projects to be considered, USDS is to use, among other sources, a June 2015 OMB report to Congress that identifies the 10 highest-priority federal IT projects in development. To prioritize projects, USDS has the following three criteria, which are listed in their order of importance: (1) What will do the greatest good for the greatest number of people in the greatest need? (2) How cost-efficient will the USDS investment be? and (3) What potential exists to use or reuse a technological solution across the government? Using these criteria, USDS intends to create a list of all potential projects, to include their descriptions and information on resource needs. This list is to be used by USDS leadership to make decisions about which projects to pursue. To its credit, USDS created a list of all potential, ongoing, and completed projects, which included project descriptions and resource needs. 
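Because USDS lists its criteria in order of importance, one natural reading is a lexicographic comparison: candidates are compared on the first criterion, with ties broken by the second and then the third. The sketch below illustrates that idea only; the scoring approach, project names, and scores are hypothetical assumptions for the example, not USDS's actual method or data.

```python
# Illustrative sketch of lexicographic prioritization: projects are compared
# on the first criterion, ties broken by the second, then the third.
# Project names and scores are hypothetical, not USDS data.
from typing import NamedTuple

class Candidate(NamedTuple):
    name: str
    public_good: int      # criterion 1: greatest good for those in greatest need
    cost_efficiency: int  # criterion 2: cost-efficiency of the investment
    reuse_potential: int  # criterion 3: potential for government-wide reuse

def prioritize(candidates):
    # Sorting by a tuple enforces the stated order of importance;
    # reverse=True puts the highest-scoring project first.
    return sorted(
        candidates,
        key=lambda c: (c.public_good, c.cost_efficiency, c.reuse_potential),
        reverse=True,
    )

backlog = [
    Candidate("benefits-portal", 9, 6, 4),
    Candidate("records-search", 9, 8, 2),
    Candidate("forms-modernization", 7, 9, 9),
]
ranked = prioritize(backlog)  # "records-search" ranks first: it ties on
                              # criterion 1 but wins on criterion 2
```

A weighted-sum scheme would be an alternative design, but it would allow a strong showing on a lesser criterion to outweigh the most important one, which the stated ordering appears to rule out.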
Additionally, USDS has engaged with 6 of the 10 priority IT projects identified in the June 2015 report, including the Department of Health and Human Services' healthcare.gov project and the Department of Homeland Security's U.S. Citizenship and Immigration Services Transformation. Further, according to a USDS staff member, USDS considered the remaining 4 projects and decided not to engage with them to date. However, USDS has yet to develop a quarterly report on the 10 high-priority programs, which it was directed by Congress to develop. Specifically, in December 2015, Congress modified its direction for the Executive Office of the President to develop the reports regarding the top 10 high-priority programs and specifically called for USDS to do so on a quarterly basis. According to a USDS staff member, a second top 10 high-priority investment report has been drafted and will be finalized prior to the issuance of our report. However, the second top 10 report will address the former congressional direction for the Executive Office of the President to develop reports, and OMB did not have a time frame for when USDS would begin to develop reports that address the modified congressional direction. Until USDS develops a time frame for the report on the top 10 programs, develops the report within that time frame and on a quarterly basis thereafter, and considers the programs identified in these reports as part of its prioritization process, USDS has less assurance that it will apply resources to the IT projects with the greatest need of improvement. To help agencies effectively deliver digital services, the President's Budget for fiscal year 2016 proposed funding for digital service teams at 25 agencies—the 24 Chief Financial Officers Act agencies, as well as the National Archives and Records Administration. According to USDS policy, agencies are to, among other things, hire or designate an executive for managing their digital services teams. 
In addition, USDS has called for the deputy head of these agencies (or equivalent) to, among other things, agree to a charter with the USDS Administrator. After agreeing to a charter, according to USDS, agencies can use USDS's hiring pipeline for digital service experts. Of the 25 agencies that requested funding to establish teams, OMB has established charters with 6 agencies for their digital service teams—the Departments of Defense, Health and Human Services, Homeland Security, the Treasury, State, and Veterans Affairs. The charters establish the executives for managing digital service teams and describe the reporting relationships between the team leaders and agency leadership. In addition, according to the Deputy USDS Administrator, USDS plans to establish charters with an additional 3 agencies by the end of the fiscal year—the Department of Education, the Social Security Administration, and the Small Business Administration. For the remaining 16 agencies, as of April 2016, 8 agencies reported that they plan to establish digital service teams but have yet to establish charters with USDS—the Department of Housing and Urban Development, Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, National Archives and Records Administration, National Science Foundation, Nuclear Regulatory Commission, and Office of Personnel Management. The other 8 agencies reported that they do not plan to establish digital service teams by September 2016 because they did not receive requested funding—the Departments of Agriculture, Commerce, Energy, the Interior, Justice, Labor, and Transportation; and the U.S. Agency for International Development. Table 7 summarizes agency and OMB efforts to establish digital service teams. Congress has recognized the importance of having a strong agency CIO. 
In 1996, the Clinger-Cohen Act established the position of agency CIO and, among other things, gave these officials responsibility for IT investments, including IT acquisitions, monitoring the performance of IT programs, and advising the agency head whether to continue, modify, or terminate such programs. More recently, in December 2014, FITARA was enacted into law. It required most major executive branch agencies to ensure that the CIO has a significant role in the decision process for IT budgeting, as well as the management, governance, and oversight processes related to IT. The law also required that CIOs review and approve (1) all contracts for IT services associated with major IT investments prior to executing them and (2) the appointment of CIOs for any component within the agency. OMB also released guidance in June 2015 that reinforces the importance of agency CIOs and describes how agencies are to implement FITARA. Further, according to our prior work, leading organizations clearly define responsibilities and authorities governing the relationships between the CIO and other agency components that use IT. Only one of the four agencies we selected for review—the Department of Homeland Security—defined the relationship between the executive for managing the digital services team and the agency CIO. Specifically, the Department of Homeland Security established a charter for its digital services team, signed by both the Administrator of USDS and the Deputy Secretary, which outlines the reporting structure and authorities for the digital services executive, including the relationship with the CIO. For example, according to the charter, the digital services executive will report on a day-to-day basis to the CIO, but will also report directly to the Deputy Secretary. However, the other three agencies we reviewed—the Departments of Defense, State, and Veterans Affairs—have not defined the role of agency CIOs with regard to these teams. 
Although they have established charters for these teams, which describe the reporting structure between the digital services executive and senior agency leadership, the charters do not describe the role of the agencies' CIOs, nor have the agencies documented this information elsewhere. The Department of Defense CIO and the Department of Veterans Affairs Principal Deputy Assistant Secretary for the Office of Information and Technology told us that they work closely with their agency digital service teams. However, while these officials have coordinated with the agency digital service teams, the roles and responsibilities governing these relationships should be described to ensure that CIOs can carry out their statutory responsibilities. In contrast to the Departments of Defense and Veterans Affairs, the State CIO told us that he has had limited involvement in the department's digital services team. He added that he believes it will be important for CIOs to be involved in agency digital services teams in order to sustain their efforts. In written comments, OMB acknowledged that the Department of State's charter does not describe the role of the CIO, but stated that the digital service team charters of the Departments of Defense and Veterans Affairs at least partially address the relationship between digital service teams and agency CIOs. Specifically, with respect to the Department of Defense, OMB stated that the charter calls for senior leadership, including the department's CIO, to ensure that digital service team projects proceed without delay. Additionally, according to OMB, the charter for the Veterans Affairs digital service team calls for the team to be located in and supported by VA's CIO organization. However, these requirements do not address the specific responsibilities or authorities of the Department of Veterans Affairs CIO with regard to the digital service team. 
The lack of defined relationships is due, in large part, to the fact that USDS policy on digital service teams does not describe the expected relationship between agency CIOs and these teams. As previously mentioned, USDS policy calls for the digital service team leader to report directly to the head of the agency or its deputy; however, it does not describe the expected responsibilities and authorities governing the relationship with the CIO. Until OMB updates the USDS policy to clearly define the responsibilities and authorities governing the relationships between CIOs and digital services teams and ensures that existing agency digital service team charters or other documentation reflect this policy, agency CIOs may not be effectively involved in the digital service teams. This is inconsistent with long-standing law, as well as the recently enacted FITARA and OMB's guidance on CIO responsibilities, and may hinder the ability of CIOs to carry out their responsibilities for IT management of the projects undertaken by the digital service teams. In summary, by hiring technology and software development experts and using leading software development practices, both 18F and USDS have provided a variety of useful services to federal agencies. Most surveyed agency project managers that partnered with 18F and USDS were satisfied with the services provided. It is important for USDS and 18F to establish outcome-oriented goals, measure performance, and prioritize projects, particularly since these are valuable management tools that could aid in the transfer of knowledge when critical temporary staff leave these organizations and are replaced. To their credit, both 18F and USDS have developed several outcome-oriented goals and procedures for prioritizing projects. However, the goals and associated performance measures and targets were not always outcome-oriented. Additionally, they have not fully measured program performance. 
As a result, it will be difficult to hold the programs accountable for results. Moreover, without documented measures and results for USDS, it is unclear whether investments in agency digital service teams are justified. Further, by delaying the date by which it projects to fully recover its costs and not having associated performance measures, 18F is at risk of not having the information necessary for GSA leadership to determine whether to continue using the Acquisition Services Fund for 18F operations. Finally, USDS has yet to develop a quarterly report on the 10 high-priority programs, meaning that it may be applying resources to investments that are not most in need of its assistance. Although OMB has called for agencies to establish digital service teams, USDS policy does not require agencies to define the expected responsibilities and authorities governing the relationships between CIOs and digital service teams. To fulfill their statutory responsibilities, including as most recently enacted in FITARA and reinforced in OMB guidance, and ensure that CIOs have a significant role in the decision-making process for projects undertaken by the digital service teams, such defined relationships are essential. Accordingly, our draft report contains two planned recommendations to GSA and four to OMB. Specifically, the report recommends that GSA: ensure that goals and associated performance measures are outcome-oriented and that performance measures have targets, including performance measures and targets tied to fully recovering its costs, as well as goals, performance measures, and targets for how the program will achieve its mission after September 2016; and assess actual results for each performance measure. 
The draft report also includes recommendations for OMB to: ensure that all goals and associated performance measures are outcome-oriented and that performance measures have targets; assess actual results for each performance measure; establish a time frame for developing the report identifying the highest priority projects, develop the report within that established time frame and on a quarterly basis thereafter, and consider the highest priority IT projects as part of the established process for prioritizing projects; and update USDS policy to clearly define the responsibilities and authorities governing the relationships between CIOs and the digital services teams and require existing agency digital service teams to address this policy. In doing so, the Federal Chief Information Officer should ensure that this policy is aligned with relevant federal law and OMB guidance on CIO responsibilities and authorities. If GSA implements our recommendations, it will be better positioned to effectively measure performance. Additionally, OMB’s implementation of our recommendations will position it to effectively measure performance, prioritize USDS resources, and ensure that CIOs play an integral role in agency digital service teams. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Committees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Other key contributors include Nick Marinos (Assistant Director), Kavita Daitnarayan, Rebecca Eyler, Kaelin Kuhn, Jamelyn Payan, and Tina Torabi. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In an effort to improve IT across the federal government, in March 2014 GSA established a team, known as 18F, that provides IT services to agencies. In addition, in August 2014 the Administration established USDS, which aims to improve the federal IT services provided to citizens. OMB also required agencies to establish their own digital service teams. GAO was asked to summarize its draft report that (1) describes 18F and USDS efforts to address problems with IT projects and agencies' views of services provided, (2) assesses these programs' efforts against practices for performance measurement and project prioritization, and (3) assesses agency plans to establish their own digital service teams. In preparing the draft report on which this testimony is based, GAO reviewed 32 18F projects and 13 USDS projects that were underway or completed as of August 2015 and surveyed agencies about these projects; evaluated 18F and USDS against key performance measurement and project prioritization practices; reviewed 25 agencies' efforts to establish digital service teams; and reviewed documentation from four agencies, which were chosen based on their progress made in establishing digital service teams. In a draft report, GAO determined that the General Services Administration's (GSA) 18F and Office of Management and Budget's (OMB) U.S. Digital Service (USDS) have provided a variety of services to agencies supporting their information technology (IT) efforts. Specifically, 18F staff helped 18 agencies with 32 projects and generally provided development and consulting services, including software development solutions and acquisition consulting. In addition, USDS provided assistance on 13 projects across 11 agencies and generally provided consulting services, including quality assurance, problem identification and recommendations, and software engineering. 
Further, according to GAO's survey, managers were generally satisfied with the services they received from 18F and USDS on these projects (see table). Both 18F and USDS have partially implemented practices to identify and help agencies address problems with IT projects. Specifically, 18F has developed several outcome-oriented goals and related performance measures, as well as procedures for prioritizing projects; however, not all of its goals are outcome-oriented and it has not yet fully measured program performance. Similarly, USDS has developed goals, but they are not all outcome-oriented and it has established performance measures for only one of its goals. USDS has also measured progress for just one goal. Further, it has not fully implemented its procedures for prioritizing projects. Until 18F and USDS fully implement these practices, it will be difficult to hold the programs accountable for results. Agencies are beginning to establish digital service teams. Of the 25 agencies that requested funding for these teams, OMB has established charters with 6 agencies for their digital service teams. In addition, according to the USDS Deputy Administrator, USDS plans to establish charters with an additional 3 agencies by the end of the fiscal year—the Department of Education, as well as the Social Security Administration and the Small Business Administration. For the remaining 16 agencies, as of April 2016, 8 agencies reported that they plan to establish digital service teams but have yet to establish charters with USDS. The other 8 agencies reported that they do not plan to establish digital service teams by September 2016 because they did not receive requested funding. Further, of the four agencies GAO selected to review, only one has defined the relationship between its digital service team and the agency Chief Information Officer (CIO). This is due, in part, to the fact that USDS policy does not describe the expected relationship between CIOs and these teams. 
Until OMB updates its policy and ensures that the responsibilities between the CIOs and digital services teams are clearly defined, it is unclear whether CIOs will be able to fulfill their statutory responsibilities with respect to IT management of the projects undertaken by the digital service teams. GAO's draft report includes two recommendations to GSA and three recommendations to OMB to improve goals and performance measurement. In addition, GAO's draft report is recommending that OMB update USDS policy to define the relationships between CIOs and digital services teams.
Our work has shown that DHS and its component agencies—particularly the Coast Guard and CBP—have made substantial progress in implementing various programs that, collectively, have improved maritime security. In general, our maritime security-related work has addressed four areas: (1) national and port-level security planning, (2) port facility and vessel security, (3) maritime domain awareness and information sharing, and (4) international supply chain security. Detailed examples of progress in each of these four areas are discussed below. The federal government has made progress in national and port-level security planning by, for example, developing various maritime security strategies and plans, and conducting exercises to test these plans. Developing national-level security strategies: The federal government has made progress developing national maritime security plans. For example, the President and the Secretaries of Homeland Security, Defense, and State approved the National Strategy for Maritime Security and its supporting plans in 2005. The strategy has eight supporting plans that are intended to address the specific threats and challenges of the maritime environment, such as maritime commerce security. We reported in June 2008 that these plans were generally well developed and, collectively, included desirable characteristics, such as (1) purpose, scope, and methodology; (2) problem definition and risk assessment; (3) organizational roles, responsibilities, and coordination; and (4) integration and implementation. Including these characteristics in the strategy and its supporting plans can help the federal government enhance maritime security. For example, better problem definition and risk assessment provide greater latitude to responsible parties for developing approaches that are tailored to the needs of their specific regions or sectors. 
In addition, in April 2008 DHS released its Small Vessel Security Strategy, which identified the gravest risk scenarios involving the use of small vessels for launching terrorist attacks, as well as specific goals where efforts can achieve the greatest risk reduction across the maritime domain. Developing port-level security plans: The Coast Guard has developed Area Maritime Security Plans (AMSP) around the country to enhance the security of domestic ports. AMSPs, which are developed by the Coast Guard with input from applicable governmental and private entities, serve as the primary means to identify and coordinate Coast Guard procedures related to prevention, protection, and security response. Implementing regulations for MTSA specified that these plans include, among other things, (1) operational and physical security measures that can be intensified if security threats warrant it; (2) procedures for responding to security threats, including provisions for maintaining operations at domestic ports; and (3) procedures to facilitate the recovery of the maritime transportation system after a security incident. We reported in October 2007 that to assist domestic ports in implementing the AMSPs, the Coast Guard provided a common template that specified the responsibilities of port stakeholders. Further, the Coast Guard has established Area Maritime Security Committees—forums that involve federal and nonfederal officials who identify and address risks in a port—to, among other things, provide advice to the Coast Guard for developing the associated AMSPs. These plans provide a framework for communication and coordination among port stakeholders and law enforcement officials and identify and reduce vulnerabilities to security threats throughout the port area. Exercising security plans: DHS has taken a number of steps to exercise its security plans. 
The Coast Guard and the Area Maritime Security Committee are required to conduct or participate in exercises to test the effectiveness of AMSPs at least once each calendar year, with no more than 18 months between exercises. These exercises are designed to continually improve preparedness by validating information and procedures in the AMSPs, identifying strengths and weaknesses, and practicing command and control within an incident command/unified command framework. To aid in this effort, the Coast Guard initiated the Area Maritime Security Training and Exercise Program in October 2005. This program is designed to involve all port stakeholders in the implementation of the AMSPs. Our prior work has shown that the Coast Guard has exercised these plans and that, since development of the AMSPs, all Area Maritime Security Committees have participated in a port security exercise. Lessons learned from the exercises are incorporated into plans, which Coast Guard officials said lead to planning process improvements and better plans. In addition to developing security plans, DHS has taken a number of actions to identify and address the risks to port facilities and vessels by conducting facility inspections and screening and boarding vessels, among other things. Requiring facility security plans and conducting inspections: To enhance the security of port facilities, the Coast Guard has implemented programs to require port facility security plans and to conduct annual inspections of the facilities. Owners and operators of certain maritime facilities are required to conduct assessments of security vulnerabilities, develop security plans to mitigate these vulnerabilities, and implement measures called for in their security plans. Coast Guard guidance calls for at least one announced and one unannounced inspection each year to ensure that security plans are being followed. 
We reported in February 2008 that, on the basis of these inspections, the Coast Guard had identified and corrected port facility deficiencies. For example, the Coast Guard identified deficiencies in about one-third of the port facilities inspected from 2004 through 2006, with deficiencies concentrated in certain categories, such as failing to follow facility security plans for port access control. In addition to inspecting port facilities, the Coast Guard also conducts inspections at offshore facilities, such as oil rigs. Requiring the development of these security plans and inspecting facilities to correct deficiencies helps the Coast Guard mitigate vulnerabilities that could be exploited by those with the intent to kill people, cause environmental damage, or disrupt transportation systems and the economy. Issuing facility access cards: DHS and its component agencies have made less progress in controlling access to secure areas of port facilities and vessels. To control access to these areas, DHS was required by MTSA to, among other things, issue a transportation worker identification credential that uses biometrics, such as fingerprints. When MTSA was enacted, TSA had already initiated a program to create an identification credential that could be used by workers in all modes of transportation. This program, called the Transportation Worker Identification Credential (TWIC) program, is designed to collect personal and biometric information to validate workers’ identities and to conduct background checks on transportation workers to ensure they do not pose a threat to security. We reported in November 2009 that TSA, the Coast Guard, and the maritime industry took a number of steps to enroll 1,121,461 workers in the TWIC program, or over 93 percent of the estimated 1.2 million potential users, by the April 15, 2009, national compliance deadline.
However, as discussed later in this statement, internal control weaknesses governing the enrollment, background check process, and use of these credentials potentially limit the program’s ability to provide reasonable assurance that access to secure areas of MTSA-regulated facilities is restricted to qualified individuals. Administering the Port Security Grant Program: DHS has taken steps to improve the security of port facilities by administering the Port Security Grant Program. To help defray some of the costs of implementing security at ports around the United States, this program was established in January 2002, when TSA was appropriated $93.3 million to award grants to critical national seaports. MTSA codified the program when it was enacted in November 2002. The Port Security Grant Program awards funds to states, localities, and private port operators to strengthen the nation’s ports against risks associated with potential terrorist attacks. We reported in November 2011 that, for fiscal years 2010 and 2011, allocations of these funds were based on DHS’s risk model and implementation decisions, and were made largely in accordance with risk. For example, we found that allocations of funds to port areas were highly positively correlated to port risk, as calculated by DHS’s risk model. Reviewing vessel plans and conducting inspections: To enhance vessel security, the Coast Guard has taken steps to help vessel owners and operators develop security plans, and it regularly inspects these vessels for compliance with those plans. MTSA requires certain vessel owners and operators to develop security plans, and the Coast Guard is to approve these plans.
Vessel security plans are to designate security officers; include information on procedures for establishing and maintaining physical security, passenger and cargo security, and personnel security; describe training and drills; and identify the availability of appropriate security measures necessary to deter transportation security incidents, among other things. The Coast Guard took several steps to help vessel owners and operators understand and comply with these requirements. In particular, the Coast Guard (1) issued updated guidance and established a “help desk” to provide stakeholders with a single point of contact, both through the Internet and over the telephone; (2) hired contractors to provide expertise in reviewing vessel security plans; and (3) conducts regular inspections of vessels. For example, we reported in December 2010 that, according to Coast Guard officials, the Coast Guard is to inspect ferries four times per year. The annual security inspection, which may be combined with a safety inspection, typically occurs when the ferry is out of service, while the quarterly inspections, which are shorter in duration, generally take place while the ferry remains in service. During calendar years 2006 through 2009, the most recent years for which we have data, the Coast Guard reports that it conducted over 1,500 ferry inspections. These security plan reviews and inspections have enhanced vessel security. Conducting vessel crew screenings: To enhance the security of port facilities, both CBP and the Coast Guard receive and screen advance information on commercial vessels and their crew before they arrive at U.S. ports and assess risks based on this information.
Among the risk factors considered in assessing each vessel and crew member are whether the vessel operator has had past instances of invalid or incorrect crew manifest lists, whether the vessel has a history of seafarers unlawfully landing in the United States, or whether the vessel is making its first arrival at a U.S. seaport within the past year. The Coast Guard may also conduct armed security boardings of arriving commercial vessels, based on various factors including the intelligence it has received, to examine crew passports and visas, among other things, and to ensure that the submitted crew lists are accurate. Conducting vessel escorts and boardings: The Coast Guard escorts and boards certain vessels to help ensure their security. The Coast Guard escorts a certain percentage of high capacity passenger vessels—cruise ships, ferries, and excursion vessels—to protect against an external threat, such as a waterborne improvised explosive device. The Coast Guard has provided escorts for cruise ships to help prevent waterside attacks and has also provided a security presence on passenger ferries during their transit. Further, the Coast Guard has conducted energy commodity tanker security activities, such as security boardings, escorts, and patrols. Such actions enhance the security of these vessels. DHS has worked with its component agencies to increase maritime domain awareness and taken steps to (1) conduct risk assessments, (2) establish area security committees, (3) implement a vessel tracking system, and (4) better share information with other law enforcement agencies through interagency operations centers. Conducting risk assessments: Recognizing the shortcomings of its existing risk-based models, in 2005 the Coast Guard developed and implemented the Maritime Security Risk Assessment Model (MSRAM) to better assess risks in the maritime domain.
We reported in November 2011 that MSRAM provides the Coast Guard with a standardized way of assessing risk to maritime infrastructure, such as chemical facilities, oil refineries, and ferry and cruise ship terminals, among others. Coast Guard units throughout the country use this model to improve maritime domain awareness and better assess security risks to key maritime infrastructure. Establishing Area Maritime Security Committees: To facilitate information sharing with port partners and in response to MTSA, the Coast Guard has established Area Maritime Security Committees. These committees are typically composed of members from federal, state, and local law enforcement agencies; maritime industry and labor organizations; and other port stakeholders that may be affected by security policies. An Area Maritime Security Committee is responsible for, among other things, identifying critical infrastructure and operations, identifying risks, and providing advice to the Coast Guard for developing the associated AMSP. These committees provide a structure that improves information sharing among port stakeholders. Developing vessel tracking systems: The Coast Guard relies on a diverse array of systems operated by various entities to track vessels and provide maritime domain awareness. For tracking vessels at sea, the Coast Guard uses a long-range identification and tracking system and a commercially provided long-range automatic identification system. For tracking vessels in U.S. coastal areas, inland waterways, and ports, the Coast Guard operates a land-based automatic identification system and also obtains information from radar and cameras in some ports. In addition, in July 2011, CBP developed the Small Vessel Reporting System to better track small boats arriving from foreign locations and deployed this system to eight field locations. Among other things, this system is to allow CBP to identify potential high-risk small boats to better determine which need to be boarded.
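Scenario-based risk models of the kind described above generally combine estimates of threat, vulnerability, and consequence into a single comparable score that can be used to rank infrastructure or attack scenarios. A minimal sketch of that general approach follows; the function name, scenario labels, and all numeric values are hypothetical illustrations and are not drawn from MSRAM or any DHS model.

```python
# A minimal sketch of a threat x vulnerability x consequence risk score.
# All names, scenario labels, and values are hypothetical illustrations,
# not taken from MSRAM or any DHS model.

def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Combine three factors, each scaled 0-1, into one relative risk score."""
    for name, value in (("threat", threat),
                        ("vulnerability", vulnerability),
                        ("consequence", consequence)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1, got {value}")
    return threat * vulnerability * consequence

# Score a few hypothetical attack scenarios and rank them so that the
# highest-risk scenario surfaces first for planning attention.
scenarios = {
    "ferry terminal / small-boat attack": risk_score(0.6, 0.7, 0.8),
    "oil refinery / truck bomb": risk_score(0.3, 0.5, 0.9),
    "cruise terminal / stowaway": risk_score(0.4, 0.6, 0.3),
}
ranked = sorted(scenarios.items(), key=lambda kv: kv[1], reverse=True)
```

One consequence of the multiplicative form is that a scenario with a negligible value for any single factor receives a near-zero overall score, which is one common way such models concentrate attention on scenarios where all three factors are elevated.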
Establishing interagency operations centers: DHS and its component agencies have made limited progress in establishing interagency operations centers. The Coast Guard—in coordination with other federal, state, and local law enforcement agencies (port partners)—is working to establish interagency operations centers at its sectors throughout the country. These interagency operations centers are designed to, among other things, improve maritime domain awareness and the sharing of information among port partners. In October 2007, we reported that the Coast Guard was piloting various aspects of future interagency operations centers at its 35 existing command centers and working with multiple interagency partners to further their development. We further reported in February 2012 that DHS had also begun to support efforts to increase port partner participation and further interagency operations center implementation, such as facilitating the review of an interagency operations center management directive. However, as discussed later in this statement, despite the DHS assistance, the Coast Guard has experienced coordination challenges that have limited implementation of interagency operations centers. DHS and its component agencies have implemented a number of programs and activities intended to improve the security of the international supply chain, including: enhancing cargo screening and inspections, deploying new cargo screening technologies to better detect contraband, implementing programs to inspect U.S.-bound cargo at foreign ports, partnering with the trade industry, and engaging with international partners. Enhancing cargo screening and inspections: DHS has implemented several programs to enhance the screening of cargo containers in advance of their arrival in the United States. In particular, DHS developed a system for screening incoming cargo, called the Automated Targeting System. 
The Automated Targeting System is a computerized system that assesses information on each cargo shipment that is to arrive in the United States to assign a risk score. CBP officers then use this risk score, along with other information, such as the shipment’s contents, to determine which shipments to examine. In February 2003, CBP began enforcing new regulations governing cargo manifests—called the 24-hour rule—which require the submission of complete and accurate manifest information 24 hours before a container is loaded onto a U.S.-bound vessel at a foreign port. To enhance CBP’s ability to target high-risk shipments, the SAFE Port Act required CBP to collect additional information related to the movement of cargo to better identify high-risk cargo for inspection. In response to this requirement, in 2009, CBP implemented the Importer Security Filing and Additional Carrier Requirements, collectively known as the 10+2 rule. The cargo information required by the 10+2 rule comprises 10 data elements from importers, such as country of origin, and 2 data elements from vessel carriers, such as the position of each container transported on a vessel (or stow plan), that are to be provided to CBP in advance of arrival of a shipment at a U.S. port. These additional data elements can enhance maritime security. For example, during our review of CBP’s supply chain security efforts in 2010, CBP officials stated that access to vessel stow plans has enhanced their ability to identify containers that are not correctly listed on manifests and that could potentially pose a security risk in that no information is known about their origin or contents. Deploying technologies: DHS technological improvements have been focused on developing and deploying equipment to scan cargo containers for nuclear materials and other contraband to better secure the supply chain. Specifically, to detect nuclear materials, CBP, in coordination with DNDO, has deployed over 1,400 radiation portal monitors at U.S.
ports of entry. Most of the radiation portal monitors are installed in primary inspection lanes through which nearly all traffic and shipping containers must pass. These monitors alarm when they detect radiation coming from a package, vehicle, or shipping container. CBP then conducts further inspections at its secondary inspection locations to identify the cause of the alarm and determine whether there is a reason for concern. Establishing the Container Security Initiative: CBP has enhanced the security of U.S.-bound cargo containers through its Container Security Initiative (CSI). CBP launched CSI in January 2002, and the initiative involves partnerships between CBP and foreign customs agencies in select countries to allow for the targeting and examination of U.S.-bound cargo containers before they reach U.S. ports. As part of this initiative, CBP officers use intelligence and automated risk assessment information to identify those U.S.-bound cargo shipments at risk of containing weapons of mass destruction or other terrorist contraband. We reported in January 2008 that through CSI, CBP has placed staff at 58 foreign seaports that, collectively, account for about 86 percent of the container shipments to the United States. According to CBP officials, the overseas presence of CBP officials has led to more effective information sharing between CBP and host government officials regarding targeting of U.S.-bound shipments. Partnering with the trade industry: CBP efforts to improve supply chain security include partnering with members of the trade industry. In an effort to balance the need to secure the international supply chain with the need to facilitate the flow of legitimate commerce, CBP developed and administers the Customs-Trade Partnership Against Terrorism program.
The program is voluntary and enables CBP officials to work in partnership with private companies to review the security of their international supply chains and improve the security of their shipments to the United States. For example, participating companies develop security measures and agree to allow CBP to verify, among other things, that their security measures (1) meet or exceed CBP’s minimum security requirements and (2) are actually in place and effective. In return for their participation, members receive benefits, such as a reduced number of inspections or shorter wait times for their cargo shipments. CBP initiated the Customs-Trade Partnership Against Terrorism program in November 2001, and as of November 2010, the most recent date for which we had data, CBP had awarded initial certification—or acceptance of the company’s agreement to voluntarily participate in the program—to over 10,000 companies. During the course of a company’s membership, CBP security specialists observe and validate the company’s security practices. Thus, CBP is in a position to identify security changes and improvements that could enhance supply chain security. Achieving mutual recognition arrangements: CBP has actively engaged with international partners to define and achieve mutual recognition of customs security practices. For example, in June 2007, CBP signed a mutual recognition arrangement with New Zealand— the first such arrangement in the world—to recognize each other’s customs-to-business partnership programs, such as CBP’s Customs- Trade Partnership Against Terrorism. As of July 2012, CBP had signed six mutual recognition arrangements. Implementing the International Port Security Program: Pursuant to MTSA, the Coast Guard implemented the International Port Security Program in April 2004. 
Under this program, the Coast Guard and host nations jointly review the security measures in place at host nations’ ports to compare their practices against established security standards, such as the International Maritime Organization’s International Ship and Port Facility Security Code. Coast Guard teams have been established to conduct country visits, discuss security measures implemented, and collect and share best practices to help ensure a comprehensive and consistent approach to maritime security at ports worldwide. If a country is not in compliance, vessels from that country may be subject to delays before being allowed into the United States. According to Coast Guard documentation, the Coast Guard has visited almost all of the countries with vessel traffic to the United States and attempts to visit countries at least annually to maintain a cooperative relationship. DHS and its component agencies have encountered a number of challenges in implementing programs and activities to enhance maritime security since the enactment of MTSA in 2002. In general, these challenges are related to (1) program management and implementation; (2) partnerships and collaboration; (3) resources, funding, and sustainability; and (4) performance measures. Many of our testimonies and reports in the last 10 years have cited these challenges, and appendix I summarizes some of the key findings from those products. Examples of challenges in each of these four areas are detailed below. DHS and its component agencies have faced program management and implementation challenges in developing MTSA-related security programs, including a lack of adequate planning and internal controls, as well as problems with acquisition programs.
Lack of planning: Given the urgency to take steps to protect the country against terrorism after the September 11, 2001, attacks, some of the actions taken by DHS and its component agencies used an “implement and amend” approach, which has negatively affected the management of some programs. For example, CBP quickly designed and rolled out CSI in January 2002. However, as we reported in July 2003, CBP initially did not have a strategic plan or workforce plan for this security program, which are essential to long-term success and accountability. As a result, CBP subsequently had to take actions to address these risks by, for example, developing CSI goals. The Customs-Trade Partnership Against Terrorism (C-TPAT) program experienced similar problems. For example, when the program was first implemented, CBP lacked a human capital plan. CBP has since taken steps to address C-TPAT management and staffing challenges, including implementing a human capital plan. Lack of adequate internal controls: Several maritime security programs implemented by DHS and its component agencies did not have adequate internal controls. For example, we reported in May 2011 that internal controls over the TWIC program were not designed to provide reasonable assurance that only qualified applicants could acquire the credentials. During covert tests at several selected ports, our investigators were successful in accessing ports using counterfeit credentials and authentic credentials acquired through fraudulent means. As a result of our findings, DHS is in the process of assessing internal controls to identify needed corrective actions. In another example, we found that the Coast Guard did not have procedures in place to ensure that its field units conducted security inspections of offshore energy facilities annually in accordance with its guidance. In response to this finding, the Coast Guard has taken steps to update its inspections database to ensure inspections of offshore facilities are completed.
Inadequate acquisitions management: DHS has also experienced challenges managing some of its acquisition programs. As discussed earlier, CBP coordinated with DNDO to deploy radiation detection monitors at U.S. ports of entry. However, we reported in June 2009 that DHS’s cost analysis of one type of device—the advanced spectroscopic portal radiation detection monitors—did not provide a sound analytical basis for DHS’s decision to deploy the devices. DNDO officials stated that they planned to update the cost-benefit analysis; however, after spending more than $200 million on the program, DHS announced, in February 2010, that it was scaling back its plans for development and use of the devices, and subsequently announced, in July 2011, that it was ending the program. DNDO was also involved in developing more advanced nonintrusive inspection equipment—the cargo advanced automated radiography system—in order to better detect nuclear materials that might be heavily shielded. In September 2010, we reported that DNDO was engaged in the research and development phase while simultaneously planning for the acquisition phase, and that it pursued the acquisition and deployment of the radiography machines without fully understanding that the machines would not fit within existing inspection lanes at CBP ports of entry because it had not sufficiently coordinated the operating requirements with CBP. As a result, DNDO ended up canceling the acquisition and deployment phase of the program in 2007. DHS has improved how it collaborates with maritime security partners, but challenges in this area remain that stem from issues such as the launch of programs without adequate stakeholder coordination and problems inherent in working with a wide variety of stakeholders.
Lack of port partner coordination: The Coast Guard experienced coordination challenges in developing its information-management and sharing system, called WatchKeeper, which is designed to enhance information sharing with law enforcement agencies and other partners. In particular, we found in February 2012 that the Coast Guard did not systematically solicit input from key federal, state, and local law enforcement agencies that are its port partners at the interagency operations centers, and that port partner involvement in the development of WatchKeeper requirements and the interagency operations center concept was primarily limited to CBP. As a result, this lack of port partner input has jeopardized the centers’ ability to meet their intended purpose of improving information sharing and enhancing maritime domain awareness. We reported that the Coast Guard had begun to better coordinate with its port partners to solicit their input on WatchKeeper requirements, but noted that the Coast Guard still faced challenges in getting other port partners to use WatchKeeper as an information sharing tool. We further found that DHS did not initially assist the Coast Guard in encouraging other DHS components to use WatchKeeper to enhance information sharing. However, DHS had since increased its involvement in the program, so we did not make any recommendations relative to this issue. We did, however, recommend that the Coast Guard implement a more systematic process to solicit and incorporate port partner input to WatchKeeper, and the Coast Guard has begun to take actions to address this recommendation. We believe, though, that it is too soon to tell if such efforts will be successful in ensuring that the interagency operations centers serve as more than Coast Guard–centric command and control centers.
Challenges in coordinating with multiple levels of stakeholders: One example of challenges that DHS and its component agencies have faced with state, local, and tribal stakeholders concerns Coast Guard planning for Arctic operations. The Coast Guard’s success in implementing an Arctic plan rests in part on how successfully it communicates with key stakeholders—including the more than 200 Alaska native tribal governments and interest groups—but we found in September 2010 that the Coast Guard did not initially share plans with them. Coast Guard officials told us that they had been focused on communication with congressional and federal stakeholders and intended to share Arctic plans with other stakeholders once plans were determined. DHS agrees that it needs to communicate with additional stakeholders and has taken steps to do so. Difficulties in coordinating with other federal agencies: DHS has at times experienced challenges coordinating with other federal agencies to enhance maritime security. For example, we reported in September 2010 that federal agencies, including DHS, had collaborated with international and industry partners to counter piracy, but they had not implemented some key practices for enhancing and sustaining collaboration. Somali pirates have attacked hundreds of ships and taken thousands of hostages since 2007. As Somalia lacks a functioning government and is unable to repress piracy in its waters, the National Security Council—the President’s principal arm for coordinating national security policy among government agencies— developed the interagency Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan) in December 2008 to prevent, disrupt, and prosecute piracy off the Horn of Africa in collaboration with international and industry partners. According to U.S. and international stakeholders, the U.S. government has shared information with partners for military coordination. 
However, agencies have made less progress on several key efforts that involve multiple agencies—such as those to address piracy through strategic communications, disrupt pirate finances, and hold pirates accountable—in part because the Action Plan does not designate which agencies should lead or carry out 13 of the 14 tasks. We recommended that the National Security Council bolster interagency collaboration and the U.S. contribution to counterpiracy efforts by clarifying agency roles and responsibilities and encouraging the agencies to develop joint guidance to implement their efforts. In March 2011, a National Security Staff official stated that an interagency policy review will examine roles and responsibilities and implementation actions to focus U.S. efforts for the next several years. Difficulties in coordinating with private sector stakeholders: In some cases progress has been hindered because of difficulties in coordination with private sector stakeholders. For example, CBP program officials reported in 2010 that having access to Passenger Name Record data for cruise line passengers—such as a passenger’s full itinerary, reservation booking date, phone number, and billing information—could offer security benefits similar to those derived from screening airline passengers. However, CBP does not require this information from all cruise lines on a systematic basis because CBP officials stated that they would need further knowledge about the cruise lines’ connectivity capabilities to estimate the cost to both CBP and the cruise lines to obtain such passenger data. In April 2010, we recommended that CBP conduct a study to determine whether requiring cruise lines to provide automated Passenger Name Record data to CBP on a systematic basis would benefit homeland security. 
In July 2011, CBP reported that it had conducted site surveys at three ports of entry to assess the advantage of having cruise line booking data considered in a national targeting process, and had initial discussions with a cruise line association on the feasibility of CBP gaining national access to cruise line booking data. Limitations in working with international stakeholders: DHS and its component agencies face inherent challenges and limitations working with international partners because of sovereignty issues. For example, we reported in July 2010 that sovereignty concerns have limited the Coast Guard’s ability to assess the security of foreign ports. In particular, reluctance by some countries to allow the Coast Guard to visit their ports because of concerns over sovereignty was a challenge cited by Coast Guard officials who were trying to complete port visits under the International Port Security Program. According to the Coast Guard officials, before permitting Coast Guard officials to visit their ports, some countries insisted on visiting and assessing a sample of U.S. ports. Similarly, we reported in April 2005 that CBP had developed a staffing model for CSI to determine staffing needs at foreign ports to implement the program, but was unable to fully staff some ports because of the need for host government permission, among other diplomatic and practical considerations. Economic constraints, such as declining revenues and increased security costs, have required DHS to make choices about how to allocate its resources to most effectively address human capital issues and sustain the programs and activities it has implemented to enhance maritime security. Human capital shortfalls: Human capital issues continue to pose a challenge to maritime security.
For example, we reported in November 2011 that Coast Guard officials from 21 of its 35 sectors (60 percent) told us that limited staff time posed a challenge to incorporating MSRAM into strategic, operational, and tactical planning efforts. Similarly, Coast Guard officials responsible for conducting maritime facility inspections in 4 of the 7 sectors we visited to support our 2008 report on inspections said meeting all mission requirements for which they were responsible was or could be a challenge because of more stringent inspection requirements and a lack of inspectors, among other things. Officials in another sector said available staffing could adequately cover only part of the sector’s area of responsibility. Budget and funding constraints: Budget and funding decisions also affect the implementation of maritime security programs. For example, within the constrained fiscal environment in which the federal government is operating, the Coast Guard has had to prioritize its activities, and Coast Guard data indicate that some units are not able to meet self-imposed standards related to certain security activities—including boarding and escorting vessels. We reported in October 2007 that this prioritization of activities had also led to a decrease in resources the Coast Guard had available to provide technical assistance to foreign countries to improve their port security. To overcome this, Coast Guard officials have worked with other agencies, such as the Departments of Defense and State, and international organizations, such as the Organization of American States, to secure funding for training and assistance. Further, in the fiscal year 2013 budget, the Coast Guard will have less funding to sustain current assets needed for security missions so that more funds will be available for its top priority—long-term recapitalization of vessels. 
Another challenge that DHS and its component agencies have faced in implementing maritime security-related programs has been the lack of adequate performance measures. In particular, DHS has not always implemented standard practices in performance management. These practices include, among other things, collecting reliable and accurate data, using data to support missions, and developing outcome measures. Lack of reliable and accurate data: DHS and its component agencies have experienced challenges collecting complete, accurate, and reliable data. For example, in January 2011 we reported that both CBP and the Coast Guard tracked the frequency of illegal seafarer incidents at U.S. seaports, but the records of these incidents varied considerably between the two component agencies and between the agencies’ field and headquarters units. As a result, the data DHS used to inform its strategic and tactical plans were of undetermined reliability. We recommended that CBP and the Coast Guard determine why their data varied and jointly establish a process for sharing and reconciling records of illegal seafarer entries at U.S. seaports. DHS concurred and has made progress in addressing the recommendation. Another example of a lack of reliable or accurate data pertains to the Marine Information for Safety and Law Enforcement (MISLE) database. The MISLE database is the Coast Guard’s primary data system for documenting facility inspections and other activities, but flaws in this database have limited the Coast Guard’s ability to accurately assess these activities. For example, during the course of our 2011 review of security inspections of offshore energy infrastructure, we found inconsistencies in how offshore facility inspection results and other data were recorded in MISLE. 
In July 2011, and partly in response to our review, the Coast Guard issued new guidance on documenting the annual security inspections of offshore facilities in MISLE and distributed this guidance to all relevant field units. While this action should improve accountability, the updated guidance does not address all of the limitations we noted with the MISLE database. Not using data to manage programs: DHS and its component agencies have not always had or used performance information to manage their missions. For example, work we completed in 2008 showed that Coast Guard officials used MISLE to review the results of inspectors’ data entries for individual maritime facilities, but the officials did not use the data to evaluate the facility inspection program overall. We found that a more thorough evaluation of the facility compliance program could provide information on, for example, the variations we identified between Coast Guard units in oversight approaches, the advantages and disadvantages of each approach, and whether some approaches work better than others. Lack of outcome-based performance measures: DHS and its component agencies have also experienced difficulties developing and using performance measures that focus on outcomes. Outcome-based performance measures describe the intended result of carrying out a program or activity. For example, although CBP had performance measures in place for its Customs-Trade Partnership Against Terrorism program, these measures focused on program participation and facilitating trade and travel and not on improving supply chain security, which is the program’s purpose. We recommended in July 2003, March 2005, and April 2008 that CBP develop outcome-based performance measures for this program. In response to our recommendations, CBP has identified measures to quantify actions required and to gauge Customs-Trade Partnership Against Terrorism’s impact on supply chain security. 
The Coast Guard has faced similar issues with developing and using outcome-based performance measures. For example, we reported in November 2011 that the Coast Guard developed a measure to report its performance in reducing maritime risk, but faced challenges using this measure to inform decisions. The Coast Guard has improved the measure to make it more valid and reliable and believes it is a useful proxy measure of performance, but notes that developing outcome-based performance measures is challenging because of limited historical data on maritime terrorist attacks. Given the uncertainties in estimating risk reduction, though, it is unclear if the measure will provide meaningful performance information with which to track progress over time. Similarly, FEMA has experienced difficulties developing outcome-based performance measures. For example, in November 2011 we reported that FEMA was developing performance measures to assess its administration of the Port Security Grant Program, but had not implemented measures to assess the effectiveness of the program’s grants. FEMA has taken initial steps to develop measures to assess the effectiveness of its grant programs, but it does not have a plan and related milestones for implementing measures specifically for the Port Security Grant Program. Without such performance measures, it could be difficult for FEMA to assess whether the program is achieving its stated purpose of strengthening critical maritime infrastructure against risks associated with potential terrorist attacks. We recommended that DHS develop a plan with milestones for implementing performance measures for the Port Security Grant Program. DHS concurred with the recommendation and stated that FEMA is taking actions to implement it. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. 
This appendix provides information on select programs and activities that have been implemented in maritime security since enactment of the Maritime Transportation Security Act (MTSA) in 2002. The information includes an overview of each program or activity; obligations information, where available; a summary of key findings and recommendations from prior GAO work, if applicable; and a list of relevant GAO products. The Department of Homeland Security (DHS) is the lead federal agency responsible for implementing MTSA requirements and related maritime security programs. DHS relies on a number of its component agencies that have responsibilities related to maritime security, including the following: U.S. Coast Guard: The Coast Guard has primary responsibility for ensuring the safety and security of U.S. maritime interests and leading homeland security efforts in the maritime domain. U.S. Customs and Border Protection (CBP): CBP is responsible for the maritime screening of incoming commercial cargo for the presence of contraband, such as weapons of mass destruction, illicit drugs, or explosives, while facilitating the flow of legitimate trade and passengers. Transportation Security Administration (TSA): TSA has responsibility for managing the Transportation Worker Identification Credential (TWIC) program, which is designed to control the access of maritime workers to regulated maritime facilities. Domestic Nuclear Detection Office (DNDO): DNDO is responsible for acquiring and supporting the deployment of radiation detection equipment, including radiation portal monitors at U.S. ports of entry. Federal Emergency Management Agency (FEMA): FEMA is responsible for administering grants to improve the security of the nation’s highest risk port areas. 
This appendix is based primarily on GAO reports and testimonies issued from August 2002 through July 2012 related to maritime, port, vessel, and cargo security efforts of the federal government, and other aspects of implementing MTSA-related security requirements. The appendix also includes selected updates—conducted in August 2012—to the information provided in these previously issued products on the actions DHS and its component agencies have taken to address recommendations made in these products and the obligations for key programs and activities through May 2012. The obligations information provided in this appendix represents obligations for certain maritime security programs and activities that we were able to identify from available agency sources, such as agency congressional budget justifications, budget in brief documents, and prior GAO products. It does not represent the total amount obligated for maritime security. In some cases, information was not available because of agency reporting practices. For example, we were not able to determine obligations for many of the MTSA-related Coast Guard programs and activities because they are funded at the account level (i.e., operating expenses) rather than as specific line items. While we were not able to identify obligations for every maritime security program and activity, many of the Coast Guard’s programs and activities in maritime security fall under its ports, waterways, and coastal security mission. Table 1 shows the reported budget authority for the Coast Guard’s ports, waterways, and coastal security mission for fiscal years 2004 through 2013. The remainder of the budget-related information contained in this appendix generally pertains to obligations. In several instances we obtained appropriations information when obligations information was not available. We were unable to obtain funding information for this strategy. 
The National Strategy for Maritime Security, published in September 2005, aimed to align all federal government maritime security programs and activities into a comprehensive and cohesive national effort involving appropriate federal, state, local, and private sector entities. Homeland Security Presidential Directive 13 (HSPD-13) directed the Secretaries of Defense and Homeland Security to lead a joint effort to draft a National Strategy for Maritime Security. In June 2008, we reported that the National Strategy for Maritime Security and the supporting plans that implement the strategy show that, collectively, the plans address four of the six desirable characteristics of an effective national strategy that we identified in 2004 and partially address the remaining two. The four characteristics that are addressed are: (1) purpose, scope, and methodology; (2) problem definition and risk assessment; (3) organizational roles, responsibilities, and coordination; and (4) integration and implementation. The two characteristics that are partially addressed are: (1) goals, objectives, activities, and performance measures and (2) resources, investments, and risk management. These characteristics are partially addressed primarily because the strategy and its plans did not contain information on the performance measures and the resources and investments elements of these characteristics. Specifically, only one of the supporting plans mentions performance measures, and many of these measures are presented as possible or potential performance measures. However, in other work reported on in August 2007, we noted the existence of performance measures for individual maritime security programs. The resources, investments, and risk management characteristic is also partially addressed. 
While the strategic actions and recommendations discussed in the maritime security strategy and supporting implementation plans constitute an approach to minimizing risk and investing resources, the strategy and seven of its supporting implementation plans did not include information on the sources and types of resources needed for their implementation. In addition, the national strategy and three of the supporting plans also lack investment strategies to direct resources to necessary actions. To address this, the working group tasked with monitoring implementation of the plans recommended that the Maritime Security Policy Coordination Committee—the primary forum for coordinating U.S. national maritime strategy—examine the feasibility of creating an interagency investment strategy for the supporting plans. We recognized that other documents were used for allocating resources and, accordingly, we did not make any recommendations. Maritime Security: Coast Guard Efforts to Address Port Recovery and Salvage Response. GAO-12-494R. Washington, D.C.: April 6, 2012. See page 4. Maritime Security: National Strategy and Supporting Plans Were Generally Well-Developed and Are Being Implemented. GAO-08-672. Washington, D.C.: June 20, 2008. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. See pages 108-109. Activities related to AMSPs are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. Our work on AMSPs showed progress and an evolution toward plans that were focused on preventing terrorism and included discussion regarding natural disasters with detailed information on plans for recovery after an incident. 
We reported in October 2007 that the Coast Guard developed guidance and a template to help ensure that all major ports had an original AMSP that was to be updated every 5 years. Our 2007 reports stated that there was a wide variance in ports’ natural disaster planning efforts and that AMSPs—limited to security incidents—could benefit from unified planning to include an all-hazards approach. In our March 2007 report on this issue, we recommended that DHS encourage port stakeholders to use existing forums for discussing all-hazards planning. The Coast Guard’s early attempts to set out the general priorities for recovery operations in its guidelines for the development of AMSPs offered limited instruction and assistance for developing procedures to address recovery situations. Our April 2012 report stated that each of the seven Coast Guard AMSPs that we reviewed had incorporated key recovery and salvage response planning elements as called for by legislation and Coast Guard guidance. Specifically, the plans included the roles and responsibilities of special recovery units, instructions for gathering key information on the status of maritime assets (such as bridges), identification of recovery priorities, and plans for salvage of assets following an incident. Maritime Security: Coast Guard Efforts to Address Port Recovery and Salvage Response. GAO-12-494R. Washington, D.C.: April 6, 2012. The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. See pages 12-14. Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007. Activities related to port security exercises are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. 
In January 2005, we reported that the Coast Guard had conducted many exercises and was successful in identifying areas for improvement—which is the purpose of such exercises. For example, Coast Guard port security exercises identified opportunities to improve incident response in the areas of communication, resources, coordination, and decision-making authority. Further, we reported that after-action reports were not being completed in a timely manner. We recommended that the Coast Guard review its actions for ensuring the timely submission of after-action reports on terrorism-related exercises and determine if further actions are needed. To address the issue of timeliness, the Coast Guard reduced the timeframe allowed for submitting an after-action report. All reports are now required to be reviewed, validated, and entered into the applicable database within 21 days of the end of an exercise or operation. In addition, our analysis of 26 after-action reports for calendar year 2006 showed an improvement in the quality of these reports in that each report listed specific exercise objectives and lessons learned. As a result of these improvements in meeting requirements for after-action reports, the Coast Guard is in a better position to identify and correct barriers to a successful response to a terrorist threat. Our October 2011 report on offshore energy infrastructure stated that the Coast Guard had conducted exercises and taken corrective actions, as appropriate, to strengthen its ability to prevent a terrorist attack on an offshore facility. This included a national-level exercise that focused on, among other things, protecting offshore facilities in the Gulf of Mexico. The exercise resulted in more than 100 after-action items and, according to Coast Guard documentation, the Coast Guard had taken steps to resolve the majority of them and was working on the others. In August 2005, the Coast Guard and TSA initiated the Port Security Training Exercise Program. 
Additionally, the Coast Guard initiated its own Area Maritime Security Training and Exercise Program in October 2005. Both programs were designed to involve the entire port community in exercises. In 2006, the SAFE Port Act included several new requirements related to security exercises, such as establishing a Port Security Exercise Program and an improvement plan process that would identify, disseminate, and monitor the implementation of lessons learned and best practices from port security exercises (6 U.S.C. § 912). Maritime Security: Coast Guard Should Conduct Required Inspections of Offshore Energy Infrastructure. GAO-12-37. Washington, D.C.: October 28, 2011. See pages 17-18 and 48-49. The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. See pages 14-15. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Activities related to maritime facility security plans are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. Our work on this issue found that the Coast Guard has made progress by generally requiring maritime facilities to develop security plans and conducting required annual inspections. We also reported that the Coast Guard’s inspections were identifying and correcting facility deficiencies. For example, in February 2008, we reported that the Coast Guard identified deficiencies in about one-third of the facilities inspected from 2004 through 2006, with deficiencies concentrated in certain categories, such as failing to follow facility security plans for access control. Our work also found areas for improvement. 
For example, in February 2008 we made recommendations to help ensure effective implementation of MTSA-required facility inspections. Among other things, we recommended that the Coast Guard reassess the number of inspection staff needed. In response, the Coast Guard took action to implement these recommendations. In our October 2011 report on inspections of offshore energy facilities, we noted that the Coast Guard had taken actions to help ensure the security of offshore energy facilities, such as developing and reviewing security plans, but faced difficulties ensuring that all facilities complied with requirements. We recommended that the Coast Guard develop policies or guidance to ensure that annual security inspections are conducted and information entered into databases is more useful for management. The Coast Guard concurred with these recommendations and stated that it plans to update its guidance and improve its inspection database in 2013. Maritime Security: Coast Guard Should Conduct Required Inspections of Offshore Energy Infrastructure. GAO-12-37. Washington, D.C.: October 28, 2011. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. See pages 19-21. Maritime Security: Coast Guard Inspections Identify and Correct Facility Deficiencies, but More Analysis Needed of Program's Staffing, Practices, and Data. GAO-08-12. Washington, D.C.: February 14, 2008. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. See page 110. Maritime Security: Substantial Work Remains to Translate New Planning Requirements to Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004. Activities related to vessel security plans are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. 
See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. According to the Coast Guard, as of June 2004 there were almost 10,000 vessels operating in more than 300 domestic ports that were required to comply with these MTSA requirements. These maritime vessels, ranging from oil tankers and freighters to tugboats and passenger ferries, can be vulnerable on many security-related fronts and, therefore, must be able to restrict access to areas on board, such as the pilot house or other control stations critical to the vessels’ operation. We reported in June 2004 that the Coast Guard had identified and corrected deficiencies in vessel security plans, though the extent of review and approval of such plans varied widely. Our more recent vessel security work has focused on specific types of vessels—including ferries, cruise ships, and energy commodity tankers—and found that the Coast Guard has taken a number of steps to improve their security, such as screening vehicles and passengers on ferries. Our September 2010 report on piracy found that the Coast Guard had ensured that the security plans for U.S.-flagged vessels have been updated with piracy annexes if they transited high risk areas. Our work has also identified additional opportunities to enhance vessel security. For example, in 2010 we reported that the Coast Guard had not implemented recommendations from five agency-contracted studies on ferry security and that the Coast Guard faced challenges protecting energy tankers. We made recommendations aimed at increasing security aboard vessels. In general, DHS has concurred with these recommendations and is in the process of implementing them. Maritime Security: Ferry Security Measures Have Been Implemented, but Evaluating Existing Studies Could Further Enhance Security. GAO-11-207. Washington, D.C.: December 3, 2010. The effect of the Coast Guard’s oversight of vessel security plans extends far beyond U.S. 
waters to high risk areas—such as the Horn of Africa—where piracy has surged in the last few years. For example, the Coast Guard ensures that the more than 100 U.S.-flagged vessels that travel through that region have updated security plans, and the Coast Guard checks for compliance when these vessels are at certain ports. Maritime Security: Actions Needed to Assess and Update Plan and Enhance Collaboration Among Partners Involved in Countering Piracy off the Horn of Africa. GAO-10-856. Washington, D.C.: September 30, 2010. See pages 57-59. Maritime Security: Varied Actions Taken to Enhance Cruise Ship Security, but Some Concerns Remain. GAO-10-400. Washington, D.C.: April 9, 2010. Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: December 10, 2007. Maritime Security: Substantial Work Remains to Translate New Planning Requirements to Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004. Activities related to small vessel security activities are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. We reported in October 2010 that DHS—including the Coast Guard and CBP—and other entities are taking actions to reduce the risk from small vessel attacks. These actions include the development of the Small Vessel Security Strategy, community outreach, the establishment of security zones in U.S. ports and waterways, escorts of vessels that could be targeted for attack, and port-level vessel tracking with radars and cameras, since other vessel tracking systems—such as the Automatic Identification System—are required only on larger vessels. 
Our October 2010 work indicates, however, that the expansion of vessel tracking to all small vessels may be of limited utility because of, among other things, the large number of small vessels, the difficulty identifying threatening actions, and the challenges associated with getting resources on scene in time to prevent an attack once it has been identified. To enhance actions to address the small vessel threat, DNDO has worked with the Coast Guard and local ports to develop and test equipment for detecting nuclear material on small maritime vessels. As part of our broader work on DNDO’s nuclear detection architecture, in January 2009 we recommended that DNDO develop a comprehensive plan for installing radiation detection equipment that would define how DNDO would achieve and monitor its goal of detecting the movement of radiological and nuclear materials through potential smuggling routes, such as small maritime vessels. DHS generally concurred with the recommendation and is in the process of implementing it. Within DHS, the Coast Guard, CBP, and DNDO have roles in protecting against threats posed by small vessels. The Coast Guard is responsible for protecting the maritime region; CBP is responsible for keeping terrorists and their weapons out of the United States, securing and facilitating trade, and cargo container security; and DNDO is responsible for developing, acquiring, and deploying radiation detection equipment to support the efforts of DHS and other federal agencies. MTSA, and other legislation and directives, require that these component agencies protect the nation’s ports and waterways from terrorist attacks through a wide range of security improvements. Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010. See pages 7-10. Maritime Security: Vessel Tracking Systems Provide Key Information, but the Need for Duplicate Data Should Be Reviewed. GAO-09-337. Washington, D.C.: March 17, 2009. See pages 30-37. 
Nuclear Detection: Domestic Nuclear Detection Office Should Improve Planning to Better Address Gaps and Vulnerabilities. GAO-09-257. Washington, D.C.: January 29, 2009. See pages 18-23. Nuclear Detection: Preliminary Observations on the Domestic Nuclear Detection Office’s Efforts to Develop a Global Nuclear Detection Architecture. GAO-08-999T. Washington, D.C.: July 16, 2008. We reported in January 2011 that the federal government uses a multi-faceted strategy to address foreign seafarer risks. The State Department starts the process by reviewing seafarer applications for U.S. visas. As part of this process, consular officers review applications, interview applicants, screen applicant information against federal databases, and review supporting documents to assess whether the applicants pose a potential threat to national security, among other things. In addition, DHS and its component agencies conduct advance-screening inspections, assess risks, and screen seafarers. However, our work noted opportunities to enhance seafarer inspection methods. For example, in January 2011, we reported that CBP inspected all seafarers entering the United States, but noted that CBP did not have the technology to electronically verify the identity and immigration status of crews on board cargo vessels, thus limiting CBP’s ability to ensure it could identify fraudulent documents presented by foreign seafarers. We made several recommendations to, among other things, facilitate better understanding of the potential need and feasibility of expanding electronic verification of seafarers on board vessels and to improve data collection and sharing. In that same report we also noted discrepancies between CBP and Coast Guard data on illegal seafarer entries at domestic ports, and we recommended that the two agencies jointly establish a process for sharing and reconciling such records. DHS concurred with our recommendations and is in the process of taking actions to implement them. 
For example, CBP met with the DHS Screening Coordination Office to determine risks associated with not electronically verifying foreign seafarers for admissibility. Further, DHS reported in July 2011 that CBP and the Coast Guard were working to assess the costs associated with deploying equipment to provide biometric reading capabilities on board vessels. A few countries account for a large share of arriving foreign seafarers, with the Philippines, India, and Russia supplying the most. According to the Coast Guard, approximately 80 percent of seafarers arriving by commercial vessel did so aboard passenger vessels, such as cruise ships. Maritime Security: Federal Agencies Have Taken Actions to Address Risks Posed by Seafarers, but Efforts Can Be Strengthened. GAO-11-195. Washington, D.C.: January 14, 2011. Activities related to MSRAM are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. MSRAM provides the Coast Guard with a standardized way of assessing risk to maritime infrastructure, such as chemical facilities, oil refineries, hazardous cargo vessels, passenger ferries, and cruise ship terminals, among others. MSRAM calculates the risk of a terrorist attack based on scenarios—a combination of target and attack modes—in terms of threats, vulnerabilities, and consequences to more than 28,000 maritime targets. The model focuses on individual facilities and cannot model system impacts or more complex scenarios involving adaptive or intelligent adversaries. The Coast Guard also uses MSRAM as input into other DHS maritime security programs, such as FEMA’s Port Security Grant Program. 
Our work on MSRAM found that the Coast Guard’s risk management and risk assessment efforts have developed and evolved and that the Coast Guard has made progress in assessing maritime security risks using MSRAM. For example, our work in this area in 2005 found that the Coast Guard was ahead of other DHS components in establishing a foundation for using risk management. After the September 11, 2001, terrorist attacks, the Coast Guard greatly expanded the scope of its risk assessment activities. It conducted three major security assessments at ports, which collectively resulted in progress in understanding and prioritizing risks within a port. We also reported in July 2010 that by developing MSRAM, the Coast Guard had begun to address the limitations of its previous port security risk model. In our more recent work, we reported that MSRAM generally aligns with DHS risk assessment criteria, but noted that additional documentation and training could benefit MSRAM users. We made recommendations to the Coast Guard to strengthen MSRAM, better align it with risk management guidance, and facilitate its increased use across the agency. In general, the Coast Guard has concurred with our recommendations and has implemented some and taken actions to implement others. For example, the Coast Guard uses risk management to drive resource allocations across its missions and is in the process of making MSRAM available for external peer review. The Coast Guard expects to complete these actions later this year. Coast Guard: Security Risk Model Meets DHS Criteria, but More Training Could Enhance Its Use for Managing Programs and Operations. GAO-12-14. Washington, D.C.: November 17, 2011. The Coast Guard Authorization Act of 2010 required the Coast Guard to make MSRAM available, in an unclassified version, on a limited basis to regulated vessels and facilities to conduct risk assessments of their own facilities and vessels (Pub. L. No. 111-281, § 827, 124 Stat. 2905, 3004-05). 
Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010. See pages 3-6. Risk Management: Further Refinements Needed To Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. See pages 30-48. Activities related to AMSCs are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. Our work in this area has noted that the Coast Guard has established AMSCs in major U.S. ports. We also reported in April 2005 that the AMSCs improved information sharing among port stakeholders, and made improvements in the timeliness, completeness, and usefulness of such information. The types of information shared included threats, vulnerabilities, suspicious activities, and Coast Guard strategies to protect port infrastructure. The AMSCs also served as a forum for developing Area Maritime Security Plans. While establishing AMSCs has increased information sharing among port stakeholders, our earlier work noted that the lack of federal security clearances for non-federal members of committees hindered some information sharing. To address this issue, we made recommendations to ensure that non-federal officials received needed security clearances in a timely manner. The Coast Guard agreed with our recommendations and has since taken actions to address them, including (1) distributing memos to field office officials clarifying their role in granting security clearances to AMSC members, (2) developing a database to track the recipients of security clearances, and (3) distributing an informational brochure outlining the security clearance process. 
According to the Coast Guard, it has organized 43 area maritime security committees, covering the nation’s 361 ports. Recommended members of AMSCs are a diverse array of port stakeholders, including federal, state, and local agencies, as well as private sector entities such as terminal operators, yacht clubs, shipyards, marine exchanges, commercial fishermen, trucking and railroad companies, organized labor, and trade associations. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. See pages 8-11. Maritime Security: Information-Sharing Efforts are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. The Coast Guard received $60 million in appropriations in fiscal year 2008 that Congress directed the Coast Guard to use to begin the process of establishing IOCs. The Coast Guard received an additional $14 million in congressionally-directed appropriations from fiscal years 2009 through 2012 to fund IOC implementation, for a total of $74 million in IOC funding since fiscal year 2008. The SAFE Port Act required the establishment of certain IOCs, and the Coast Guard Authorization Act of 2010 further specified that IOCs should provide, where practicable, for the physical collocation of the Coast Guard with its port partners, and that IOCs should include information-management systems (46 U.S.C. § 70107A). Our work on IOCs found that they showed promise in improving maritime domain awareness and information sharing. The Departments of Homeland Security, Defense, and Justice all participated to some extent in three early prototype IOCs. These IOCs improved information sharing through the collection of real-time operational information. 
Thus, IOCs can provide continuous information about maritime activities and directly involve participating agencies in operational decisions using this information. For example, agencies have collaborated in vessel boardings, cargo examinations, and enforcement of port security zones. In February 2012, however, we reported that the Coast Guard did not meet the SAFE Port Act’s deadline to establish IOCs at all high-risk ports within 3 years of enactment. This was due, in part, to the Coast Guard not being appropriated funds to establish the IOCs in a timely manner and to the evolving definition of a fully operational IOC during this period. As of October 2010—the most recent date for which we had data available—32 of the Coast Guard’s 35 sectors had made progress in implementing IOCs, but none of the IOCs had achieved full operating capability. In our February 2012 report, we made several recommendations to the Coast Guard to help ensure effective implementation and management of its WatchKeeper information sharing system, such as revising the integrated master schedule. DHS concurred with the recommendations, subject to the availability of funds. To facilitate IOC implementation and the sharing of information across IOC participants, the Coast Guard began implementing a web-based information management and sharing system called WatchKeeper in 2005. Maritime Security: Coast Guard Needs to Improve Use and Management of Interagency Operations Centers. GAO-12-202. Washington, D.C.: February 13, 2012. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. See pages 8-11. Maritime Security: Information-Sharing Efforts are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Maritime Security: New Structures have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. 
Funding for vessel tracking is not specifically identified in the DHS budget and so we were not able to determine costs allocated for the program. In March 2009, however, we reported that the Coast Guard expected its long-range identification and tracking system, one element of vessel tracking, to cost $5.3 million in fiscal year 2009 and approximately $4.2 million per year after that. We also noted in that report that long-range automatic identification system technology, another vessel tracking effort, was not far enough along to know how much it would cost. MTSA included the first federal vessel tracking requirements to improve the nation’s security by mandating that certain vessels operate an automatic identification system—a tracking system used for identifying and locating vessels—while in U.S. waters (46 U.S.C. § 70114). MTSA also allowed for the development of a long-range automated vessel tracking system that would track vessels at sea based on existing onboard radio equipment and data communication systems that can transmit the vessel’s identity and position to rescue forces in the case of an emergency. Later, the Coast Guard and Maritime Transportation Act of 2004 amended MTSA to require the development of a long-range tracking system (46 U.S.C. § 70115). Our work on vessel tracking found that the Coast Guard has developed a variety of vessel tracking systems that provide information key to identifying high-risk vessels and developing a system of security measures to reduce risks associated with them. We reported on the Coast Guard’s early efforts to develop a vessel information system, as well as more recent efforts to develop an automatic identification system to track vessels at sea. Our work in the vessel tracking area showed opportunities for the Coast Guard to reduce costs and eliminate duplication. 
For example, in July 2004 we reported that some local port entities were willing to assume the expense and responsibility for automatic identification system tracking if they were able to use the data, along with the Coast Guard, for their own purposes. Further, in March 2009, we reported that the Coast Guard was using three different means to track large vessels at sea, resulting in potential duplication in the information provided. As a result, we made several recommendations to reduce costs, including that the Coast Guard partner with local ports and analyze the extent to which duplicate information is needed to track large vessels. In general, the Coast Guard concurred with our recommendations and has taken steps to partner with local port entities and analyze the performance of vessel tracking systems. Maritime Security: Vessel Tracking Systems Provide Key Information, but the Need for Duplicate Data Should Be Reviewed. GAO-09-337. Washington, D.C.: March 17, 2009. Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004. Coast Guard: Vessel Identification System Development Needs to Be Reassessed. GAO-02-477. Washington, D.C.: May 24, 2002. Overall, DHS spent more than $280 million developing and testing the ASP program. The advanced spectroscopic portal (ASP) program was designed to develop and deploy a more advanced radiation portal monitor to detect and identify radioactivity coming from containers and trucks at seaports and land border crossings. From 2005 to 2011, DNDO was developing and testing the ASP and planned to use these machines to replace some of the currently deployed radiation portal monitors used by CBP at ports of entry for primary screening, as well as the handheld identification devices currently used by CBP for secondary screening. 
If they performed well, DNDO expected that the ASP could (1) better detect key threat material and (2) increase the flow of commerce by reducing the number of referrals for secondary inspections. However, ASPs cost significantly more than currently deployed portal monitors. We estimated in September 2008 that the lifecycle cost of each ASP (including deployment costs) was about $822,000, compared with about $308,000 for radiation portal monitors, and that the total program cost for DNDO’s latest plan for deploying radiation portal monitors—including ASPs—would be about $2 billion. In September 2007, we found that DNDO’s initial testing of the ASP was not an objective and rigorous assessment of the ASP’s capabilities. For example, DNDO used biased test methods that enhanced the performance of the ASP during testing. At the same time, DNDO did not use a critical CBP standard operating procedure for testing deployed equipment. We made several recommendations about improving the testing of ASPs, which DNDO subsequently implemented. In May 2009, we reported that DNDO improved the rigor of its testing; however, this improved testing revealed that the ASPs had a limited ability to detect certain nuclear materials at anything more than light shielding levels. In particular, we reported that ASPs performed better than currently deployed radiation portal monitors in detecting nuclear materials concealed by light shielding, but differences in sensitivity were less notable when shielding was slightly below or above that level. In addition, further testing in CBP ports revealed too many false alarms for the detection of certain high-risk nuclear materials. 
According to CBP officials, these false alarms are very disruptive in a port environment in that any alarm for this type of nuclear material would cause CBP to take enhanced security precautions because such materials (1) could be used in producing an improvised nuclear device and (2) are rarely part of legitimate or routine cargo. In 2012, we reported that once ASP testing became more rigorous, these machines did not perform well enough to warrant deployment. Accordingly, DHS scaled back the program in 2010 and later cancelled the program in July 2012. Combating Nuclear Smuggling: DHS has Developed Plans for Its Global Nuclear Detection Architecture, but Challenges Remain in Deploying Equipment. GAO-12-941T. Washington, D.C.: July 26, 2012. Combating Nuclear Smuggling: DHS Improved Testing of Advanced Radiation Detection Portal Monitors, but Preliminary Results Show Limits of the New Technology. GAO-09-655. Washington, D.C.: May 21, 2009. Combating Nuclear Smuggling: DHS’s Program to Procure and Deploy Advanced Radiation Detection Portal Monitors Is Likely to Exceed the Department’s Previous Cost Estimates. GAO-08-1108R. Washington, D.C.: September 22, 2008. Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: September 18, 2007. Obligations for this initiative are included with obligations for the Container Security Initiative, as shown in table 5 above. We reported in October 2009 that CBP and DOE have been successful in integrating images and radiological signatures of scanned containers onto a computer screen that can be reviewed remotely from the United States. They have also been able to use SFI as a test bed for new applications of existing technology, such as mobile radiation scanners. 
However, we reported in June 2008 that CBP had faced difficulties in implementing SFI due to challenges in host nation examination practices, performance measures, resource constraints, logistics, and technology limitations. We recommended in October 2009 that DHS, in consultation with the Secretaries of Energy and State, conduct cost-benefit and feasibility analyses and provide the results to Congress. CBP stated that it does not plan to develop comprehensive cost estimates because SFI has been reduced to one port and it has no funds to develop such estimates. DHS and CBP have not performed a feasibility assessment of 100 percent scanning to inform Congress as to what cargo scanning they can do, so this recommendation has not yet been addressed. We will continue to monitor DHS and CBP actions that could address this recommendation. SFI was created, in part, due to statutory requirements. The SAFE Port Act requires that pilot projects be established at three ports to test the feasibility of scanning 100 percent of U.S.-bound containers at foreign ports (6 U.S.C. § 981). In August 2007, 2 months before the pilot began operations, the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Act) was enacted, which requires, among other things, that by July 2012, 100 percent of all U.S.-bound cargo containers be scanned before being placed on a vessel at a foreign port, with possible extensions for ports under certain conditions (6 U.S.C. § 982(b)). Ultimately, CBP implemented SFI at six ports. Supply Chain Security: Container Security Programs Have Matured, but Uncertainty Persists over the Future of 100 Percent Scanning. GAO-12-422T. Washington, D.C.: February 7, 2012. See pages 15-19. Maritime Security: Responses to Questions for the Record. GAO-11-140R. Washington, D.C.: October 22, 2010. See pages 17-21. 
Supply Chain Security: Feasibility and Cost-Benefit Analysis Would Assist DHS and Congress in Assessing and Implementing the Requirement to Scan 100 Percent of U.S.-Bound Containers. GAO-10-12. Washington, D.C.: October 30, 2009. Logistical, technological, and other challenges prevented the participating ports from achieving 100 percent scanning, and DHS and CBP have since reduced the scope of the SFI program from six ports to one. Further, in May 2012, the Secretary of Homeland Security issued a 2-year extension for all ports, thus delaying the implementation date for 100 percent scanning until July 2014. CBP Works with International Entities to Promote Global Customs Security Standards and Initiatives, but Challenges Remain. GAO-08-538. Washington, D.C.: August 15, 2008. See pages 31-34. Supply Chain Security: Challenges to Scanning 100 Percent of U.S.-Bound Cargo Containers. GAO-08-533T. Washington, D.C.: June 12, 2008. MRA activities are included in the Other International Programs budget line item, but there is no specific line item for these activities. As such, we were unable to determine obligations for MRA activities. Mutual recognition arrangements (MRAs) allow for the supply chain security-related practices and programs taken by the customs administration of one country to be recognized by the administration of another. As of July 2012, CBP has made such arrangements with five countries and an economic union as part of its efforts to partner with international organizations and develop supply chain security standards that can be implemented throughout the international community. In our work on international supply chain security we reported that CBP has recognized that the United States is no longer self-contained in security matters—either in its problems or its solutions. That is, the growing interdependence of nations necessitates that policymakers work in partnerships across national boundaries to improve supply chain security. 
We also reported that other countries are interested in developing customs-to-business partnership programs similar to CBP’s C-TPAT program. Other countries are also interested in bilateral or multilateral arrangements to mutually recognize each other’s supply chain container security programs. For example, officials within the European Union and elsewhere see the C-TPAT program as one potential model for enhancing global supply chain security. Thus, CBP has committed to promoting mutual recognition arrangements based on an international framework of standards governing customs and related business relationships in order to enhance global supply chain security. Our work on other programs indicated that CBP does not always have critical information on other countries’ customs examination procedures and practices, even at CSI ports where it has stationed officers. However, our reports to date have not made any specific recommendations related to mutual recognition arrangements. According to CBP, a network of mutual recognition could lead to greater efficiency in improving international supply chain security by, for example, reducing redundant examinations of cargo containers and avoiding the unnecessary burden of addressing different sets of requirements as a shipment moves throughout the global supply chain. CBP and other international customs officials see mutual recognition arrangements as providing a possible strategy for the CSI program (which includes stationing CBP officers abroad). As of July 2012, CBP had signed six mutual recognition arrangements. Supply Chain Security: Container Security Programs Have Matured, but Uncertainty Persists over the Future of 100 Percent Scanning. GAO-12-422T. Washington, D.C.: February 7, 2012. See pages 13-14. Supply Chain Security: CBP Works with International Entities to Promote Global Customs Security Standards and Initiatives, but Challenges Remain. GAO-08-538. Washington, D.C.: August 15, 2008. 
See pages 23-31. Supply Chain Security: Examinations of High-Risk Cargo at Foreign Seaports Have Increased, but Improved Data Collection and Performance Measures Are Needed. GAO-08-187. Washington, D.C.: January 25, 2008. See pages 33-40. Activities related to the International Port Security Program are not specifically identified in the Coast Guard budget. Such activities fall under the Coast Guard’s ports, waterways and coastal security mission. See table 1 for the reported budget authority for that mission for fiscal years 2004 through 2013. The International Port Security Program (IPSP) provides for the Coast Guard and other countries’ counterpart agencies to visit and assess the implementation of security measures in each other’s ports against established security standards. The underlying assumption for the program is that the security of domestic ports also depends upon security at foreign ports where vessels and cargoes bound for the United States originate. MTSA required the Coast Guard to develop such a program to assess security measures in foreign ports and, among other things, recommend steps necessary to improve security measures in those ports. To address this requirement, the Coast Guard established the International Port Security Program in April 2004. Subsequently, in October 2006, the SAFE Port Act required the Coast Guard to reassess security measures at such foreign ports at least once every 3 years (46 U.S.C. §§ 70108, 70109). Our work on the International Port Security Program found that the Coast Guard had made progress in visiting and assessing port security in foreign ports. We reported in October 2007 that the Coast Guard had visited more than 100 countries and found that most of the countries had substantially implemented the ISPS Code. 
The Coast Guard had also consulted with a contractor to develop a more risk-based approach to planning foreign country visits, such as incorporating information on levels of corruption and terrorist activity within a country. The Coast Guard has made progress despite a number of challenges. For example, the Coast Guard has been able to alleviate challenges related to sovereignty concerns of some countries by including a reciprocal visit feature in which the Coast Guard hosts foreign delegations to visit U.S. ports and observe ISPS Code implementation in the United States. Another challenge program officials overcame was the lack of resources to improve security in poorer countries. Specifically, Coast Guard officials worked with other federal agencies (e.g., the Departments of Defense and State) and international organizations (e.g., the Organization of American States) to secure funding for training and assistance to poorer countries that need to strengthen port security efforts. In implementing the program, the Coast Guard uses the International Maritime Organization’s International Ship and Port Facility Security (ISPS) Code. This code serves as the benchmark by which it measures the effectiveness of a country’s antiterrorism measures in a port. Coast Guard teams conduct country visits, discuss implemented security measures, and collect and share best practices to help ensure a comprehensive and consistent approach to maritime security in ports worldwide. Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010. See pages 10-11. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. See pages 15-19. Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007. For questions about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or caldwells@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Christopher Conrad (Assistant Director), Adam Anguiano, Aryn Ehlow, Allyson Goldstein, Paul Hobart, Amanda Kolling, Glen Levis, and Edwin Woodward. Additional contributors include Frances Cook, Tracey King, and Jessica Orr. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Ports, waterways, and vessels handle billions of dollars in cargo annually, and an attack on this maritime transportation system could impact the global economy. November 2012 marks the 10-year anniversary of MTSA, which required a wide range of security improvements. DHS is the lead federal department responsible for implementing MTSA, and it relies on its component agencies, such as the Coast Guard and CBP, to help implement the act. The Coast Guard is responsible for U.S. maritime security interests and CBP is responsible for screening arriving vessel crew and cargo. This testimony summarizes GAO's work on implementation of MTSA requirements over the last decade and addresses (1) progress the federal government has made in improving maritime security and (2) key challenges that DHS and its component agencies have encountered in implementing maritime security-related programs. GAO was unable to identify all related federal spending, but estimated funding for certain programs. For example, from 2004 through May 2012, CBP obligated over $390 million to fund its program to partner with companies to review the security of their supply chains. This statement is based on GAO products issued from August 2002 through July 2012, as well as updates on the status of recommendations made and budget data obtained in August 2012. GAO's work has shown that the Department of Homeland Security (DHS), through its component agencies, particularly the Coast Guard and U.S. Customs and Border Protection (CBP), has made substantial progress in implementing various programs that, collectively, have improved maritime security. In general, GAO's work on maritime security programs falls under four areas: (1) security planning, (2) port facility and vessel security, (3) maritime domain awareness and information sharing, and (4) international supply chain security. 
DHS has, among other things, developed various maritime security programs and strategies and has implemented and exercised security plans. For example, the Coast Guard has developed Area Maritime Security Plans around the country to identify and coordinate Coast Guard procedures related to prevention, protection, and security response at domestic ports. In addition, to enhance the security of U.S. ports, the Coast Guard has implemented programs to conduct annual inspections of port facilities. To enhance the security of vessels, both CBP and the Coast Guard receive and screen advance information on commercial vessels and their crews before they arrive at U.S. ports and prepare risk assessments based on this information. Further, DHS and its component agencies have increased maritime domain awareness and have taken steps to better share information by improving risk management and implementing a vessel tracking system, among other things. For example, in July 2011, CBP developed the Small Vessel Reporting System to better track small boats arriving from foreign locations and deployed this system to eight field locations. DHS and its component agencies have also taken actions to improve international supply chain security, including developing new technologies to detect contraband, implementing programs to inspect U.S.-bound cargo at foreign ports, and establishing partnerships with the trade industry community and foreign governments. Although DHS and its components have made substantial progress, they have encountered challenges in implementing initiatives and programs to enhance maritime security since the enactment of the Maritime Transportation Security Act (MTSA) in 2002 in four areas: (1) program management and implementation; (2) partnerships and collaboration; (3) resources, funding, and sustainability; and (4) performance measures. 
For example, CBP designed and implemented an initiative that placed CBP staff at foreign seaports to work with host nation customs officials to identify high-risk, U.S.-bound container cargo, but CBP initially did not have a strategic or workforce plan to guide its efforts. Further, the Coast Guard faced collaboration challenges when developing and implementing its information management system for enhancing information sharing with key federal, state, and local law enforcement agencies because it did not systematically solicit input from these stakeholders. Budget and funding decisions have also affected the implementation of maritime security programs. For example, Coast Guard data indicate that some of its units are not able to meet self-imposed standards related to certain security activities--including boarding and escorting vessels. In addition, DHS has experienced challenges in developing effective performance measures for assessing the progress of its maritime security programs. For example, the Coast Guard developed a performance measure to assess its performance in reducing maritime risk, but has faced challenges using this measure to inform decisions. GAO has made recommendations to DHS in prior reports and testimonies to strengthen its maritime security programs. DHS generally concurred and has implemented or is in the process of implementing them.
Following a yearlong study, the Commercial Activities Panel in April 2002 reported its findings on competitive sourcing in the federal government. The report lays out 10 sourcing principles and several recommendations, which provide a roadmap for improving sourcing decisions across the federal government. Overall, the new Circular is generally consistent with these principles and recommendations. The Commercial Activities Panel held 11 meetings, including three public hearings in Washington, D.C.; Indianapolis, Indiana; and San Antonio, Texas. In these hearings, the Panel heard repeatedly about the importance of competition and its central role in fostering economy, efficiency, and continuous performance improvement. Panel members heard first-hand about the current process—primarily the cost comparison process conducted under OMB Circular A-76—as well as alternatives to that process. Panel staff conducted extensive additional research, review, and analysis to supplement and evaluate the public comments. Recognizing that its mission was complex and controversial, the Panel agreed that a supermajority of two-thirds of the Panel members would have to vote for any finding or recommendation in order for it to be adopted. Importantly, the Panel unanimously agreed upon a set of 10 principles it believed should guide all administrative and legislative actions in competitive sourcing. The Panel itself used these principles to assess the government’s existing sourcing system and to develop additional recommendations. A supermajority of the Panel agreed on a package of additional recommendations. Chief among these was a recommendation that public-private competitions be conducted using the framework of the Federal Acquisition Regulation (FAR). 
Although a minority of the Panel did not support the package of additional recommendations, some of these Panel members indicated that they supported one or more elements of the package, such as the recommendation to encourage high-performing organizations (HPO) throughout the government. Importantly, there was a good faith effort to maximize agreement and minimize differences among Panel members. In fact, changes were made to the Panel’s report and recommendations even when it was clear that some Panel members seeking changes were highly unlikely to vote for the supplemental package of recommendations. As a result, on the basis of Panel meetings and my personal discussions with Panel members at the end of our deliberative process, I believe the major differences among Panel members were few in number and philosophical in nature. Specifically, disagreement centered primarily on (1) the recommendation related to the role of cost in the new FAR-type process, and (2) the number of times the Congress should be required to act on the new FAR-type process, including whether the Congress should authorize a pilot program to test that process for a specific time period. As I noted previously, the new A-76 Circular is generally consistent with the Commercial Activities Panel’s sourcing principles and recommendations and, as such, provides an improved foundation for competitive sourcing decisions in the federal government. In particular, the new Circular permits greater reliance on procedures contained in the FAR, which should result in a more transparent, simpler, and consistently applied competitive process, as well as source selection decisions based on tradeoffs between technical factors and cost. The new Circular also suggests potential use of alternatives to the competitive sourcing process, such as public-private and public-public partnerships and high-performing organizations. It is not, however, specific as to how and when these alternatives might be used. 
If effectively implemented, the new Circular should result in increased savings, improved performance, and greater accountability, regardless of the service provider selected. However, this competitive sourcing initiative is a major change in the way government agencies operate, and successful implementation of the Circular’s provisions will require that adequate support be made available to federal agencies and employees, especially if the time frames called for in the new Circular are to be achieved. Implementing the new Circular A-76 will likely be challenging for many agencies. GAO’s past work on the competitive sourcing program at the Department of Defense (DOD)—as well as agencies’ efforts governmentwide to improve acquisition, human capital, and information technology management—has identified practices that have either advanced these efforts or hindered them. The lessons learned from these experiences—especially those that demonstrate best competitive sourcing practices—could prove invaluable to agencies as they implement the provisions in the new Circular. A major challenge agencies will face will be meeting a 12-month limit for completing the standard competition process in the new Circular. This provision is intended to respond to complaints from all sides about the length of time taken to conduct A-76 cost comparisons—complaints that the Panel repeatedly heard in the course of its review. OMB’s new Circular states that standard competitions shall not exceed 12 months from public announcement (start date) to performance decision (end date). Under certain conditions, there may be extensions of no more than 6 months. The new Circular also states that agencies shall complete certain preliminary planning steps before a public announcement. We welcome efforts to reduce the time required to complete these studies. 
Even so, our studies of DOD competitive sourcing activities have found that competitions can take much longer than the time frames outlined in the new Circular. Specifically, DOD’s most recent data indicate that competitions take on average 25 months. It is not, however, clear how much of this time was needed for any planning that may now be outside the revised Circular’s time frame. In commenting on OMB’s November 2002 draft proposal, we recommended that the time frame be extended to perhaps 15 to 18 months overall, and that OMB ensure that agencies provide sufficient resources to comply with A-76. In any case, we believe additional financial and technical support and incentives will be needed for agencies as they attempt to meet these ambitious time frames. Another provision in the new Circular that may affect the timeliness of the process is the “phased evaluation” approach—one of four approaches for making sourcing selections. Under this approach, an agency evaluates technical merit and cost in two separate phases. In the first phase, offerors may propose alternate performance standards. If the agency decides that a proposed alternate standard is desirable, it incorporates the standard into the solicitation. All offerors may then submit revised proposals in response to the new standard. In the second phase, the agency selects the offeror who meets these new standards and offers the lowest cost. While not in conflict with the principles or recommendations of the Commercial Activities Panel, the approach, if used, may prove burdensome in implementation, given the additional step involved in the solicitation. DOD has been at the forefront of federal agencies in using the A-76 process. We have tracked DOD’s progress in implementing its A-76 program since the mid-to-late-1990s and have identified a number of challenges that hold important lessons that civilian agencies should consider as they implement their own competitive sourcing initiatives. 
Notably: competitions took longer than initially projected; costs and resources required for the competitions were underestimated; selecting and grouping functions to compete was problematic; and determining and maintaining reliable estimates of savings was difficult. DOD’s experience and our work identifying best practices suggest that several key areas will need sustained attention and communication by senior leadership as agencies plan and implement their competitive sourcing initiatives. Basing goals and decisions on sound analysis and integrating sourcing with other management initiatives. Sourcing goals and targets should contribute to mission requirements and improved performance and be based on considered research and sound analysis of past activities. Agencies should consider how competitive sourcing relates to strategic management of human capital, improved financial performance, expanded reliance on electronic government, and budget and performance integration, consistent with the President’s Management Agenda. Capturing and sharing knowledge. The competition process is ultimately about promoting innovation and creating more economical, efficient, and effective organizations. Capturing and disseminating information on lessons learned and providing sufficient guidance on how to implement policies will be essential if this is to occur. Without effectively sharing lessons learned and sufficient guidance, agencies will be challenged to implement certain A-76 requirements. For example, calculating savings that accrue from A-76 competitions, as required by the new Circular, will be difficult or may be done inconsistently across agencies without additional guidance, which will contribute to uncertainties over savings. Building and maintaining agency capacity. Conducting competitions as fairly, effectively, and efficiently as possible requires sufficient agency capacity—that is, a skilled workforce and adequate infrastructure and funding. 
Agencies will need to build and maintain capacity to manage competitions, to prepare the in-house most-effective organization (MEO), and to oversee the work—regardless of whether the private sector or the MEO is selected. Building this capacity will likely be a challenge, particularly for agencies that have not previously invested heavily in competitive sourcing. An additional challenge facing agencies in managing this effort will be doing so while addressing high-risk areas, such as human capital and contract management. In this regard, GAO has listed contract management at the National Aeronautics and Space Administration, the Department of Housing and Urban Development, and the Department of Energy as an area of high risk. With a likely increase in the number of public-private competitions and the requirement to hold accountable whichever sector wins, agencies will need to ensure that they have an acquisition workforce sufficient in numbers and abilities to administer and oversee these arrangements effectively. We recently initiated work to look at how agencies are implementing their competitive sourcing programs. Our prior work on acquisition, human capital, and information technology management—in particular, our work on DOD’s efforts to implement competitive sourcing—provides a strong knowledge base from which to assess agencies’ implementation of this initiative. Finally, an important issue for implementation of the new Circular A-76 is the right of in-house competitors to appeal sourcing decisions in favor of the private sector. The Panel heard frequent complaints from federal employees and their representatives about the inequality of protest rights. While both the public and the private sectors had the right under the earlier Circular to file appeals to agency appeal boards, only the private sector had the right, if dissatisfied with the ruling of the agency appeal board, to file a bid protest at GAO or in court. 
Under the previous version of the Circular, both GAO and the Court of Appeals for the Federal Circuit held that federal employees and their unions were not “interested parties” with the standing to challenge the results of A-76 cost comparisons. The Panel recommended that, in the context of improvements to the federal government’s process for making sourcing decisions, a way be found to level the playing field by allowing in-house entities to file a protest at GAO, as private-sector competitors have been allowed to do. The Panel also viewed the protest process as one method of ensuring accountability to assure federal workers, the private sector, and the taxpayer that the competition process is working properly. The new Circular provides a right to “contest” a standard A-76 competition decision using procedures contained in the FAR for protests within the contracting agencies. The new Circular thus abolishes the A-76 appeal board process and instead relies on the FAR-based agency-level protest process. An important legal question is whether the shift from the cost comparisons under the prior Circular to the FAR-like public-private competitions under the new one means that the in-house MEO should be eligible to file a bid protest at GAO. If the MEO is allowed to protest, there is a second question: Who will speak for the MEO and protest in its name? To ensure that our legal analysis of these questions benefits from input from everyone with a stake in this important area, GAO posted a notice in the Federal Register on June 13, seeking public comment on these and several related questions. Responses are due by July 16, and we intend to review them carefully before reaching our legal conclusion. While the new Circular provides an improved foundation for competitive sourcing decisions, implementing this initiative will undoubtedly be a significant challenge for many federal agencies. 
The success of the competitive sourcing program will ultimately be measured by the results achieved in terms of providing value to the taxpayer, not the size of the in-house or contractor workforce or the number of positions competed to meet arbitrary quotas. Successful implementation will require adequate technical and financial resources, as well as sustained commitment by senior leadership to establish fact-based goals, make effective decisions, achieve continuous improvement based on lessons learned, and provide ongoing communication to ensure federal workers know and believe that they will be viewed and treated as valuable assets. - - - - - Mr. Chairman, this concludes my statement. I will be happy to answer any questions you or other Members of the Committee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In May 2003, the Office of Management and Budget (OMB) issued a new Circular A-76--which sets forth the government's competitive sourcing process. Determining whether to obtain services in-house or through commercial contracts is an important economic and strategic decision for agencies, and the use of A-76 is expected to grow throughout the federal government. In the past, however, the A-76 process has been difficult to implement, and the impact on the morale of the federal workforce has been profound. Moreover, there have been concerns in both the public and private sectors about the timeliness and fairness of the process and the extent to which there is a "level playing field" for conducting public-private competitions. It was against this backdrop that the Congress enacted legislation mandating a study of the government's competitive sourcing process, which was carried out by the Commercial Activities Panel, chaired by the Comptroller General of the United States. This testimony focuses on how the new Circular addresses the Panel's recommendations with regard to providing a better foundation for competitive sourcing decisions, and the challenges agencies may face in implementing the new A-76. Overall, the new Circular is consistent with the principles and recommendations that the Commercial Activities Panel reported in April 2002, and should provide an improved foundation for competitive sourcing decisions in the federal government. In particular, the new Circular permits greater reliance on procedures in the Federal Acquisition Regulation--which should result in a more transparent and consistently applied competitive process--as well as source selection decisions based on tradeoffs between technical factors and cost. The new Circular also suggests potential use of alternatives to the competitive sourcing process, such as public-private and public-public partnerships and high-performing organizations. 
The new Circular should result in increased savings, improved performance, and greater accountability. However, this initiative is a major change in the way the government operates, and implementing the new Circular A-76 will likely be challenging for many agencies. A major challenge agencies will face will be meeting a 12-month limit for completing the standard competition process. This provision aims to respond to complaints about the length of time taken to conduct A-76 cost comparisons. However, GAO studies of competitive sourcing at the Department of Defense (DOD) have found that competitions can take much longer than 12 months. Other provisions in the new Circular may also prove burdensome in implementation. Lessons learned by DOD and other agencies as they initiate efforts to improve acquisition, human capital, and information technology management could prove invaluable as agencies implement the new A-76 provisions--especially those that demonstrate best competitive sourcing practices. Successful implementation of the Circular's provisions will also likely require additional financial and technical support and incentives.
This section describes crude oil export restrictions, the SPR, and recent trends in U.S. crude oil production and the petroleum refining industry. The export of domestically produced crude oil has generally been restricted since the 1970s. In particular, the Energy Policy and Conservation Act of 1975 (EPCA) led the Department of Commerce’s Bureau of Industry and Security (BIS) to promulgate regulations that require crude oil exporters to obtain a license. These regulations provide that BIS will issue licenses for the following crude oil exports: exports from Alaska’s Cook Inlet; exports to Canada for consumption or use therein; exports in connection with refining or exchange of SPR crude oil; exports of certain California crude oil up to 25,000 barrels per day; exports consistent with certain international energy supply agreements; exports consistent with findings made by the President under certain statutes; and exports of foreign origin crude oil that has not been commingled with crude oil of U.S. origin. Other than for these exceptions, BIS considers export license applications for exchanges involving crude oil on a case-by-case basis, and BIS can approve them if it determines that the proposed export is consistent with the national interest and purposes of EPCA. In addition to BIS’s export controls, other statutes control the export of domestically produced crude oil depending on where it was produced and how it is transported. In these cases, BIS can approve exports only if the President makes the necessary findings under applicable laws. Some of the authorized exceptions, outlined above, are the result of such Presidential findings. According to NERA, no other major oil producing country currently restricts crude oil exports. BIS approved about 30 to 40 licenses to export domestic crude oil per year from fiscal years 2008 through 2010. The number of BIS-approved licenses increased to 103 in fiscal year 2013. 
Meanwhile, crude oil exports increased from less than 30 thousand barrels per day in 2008 to 396 thousand barrels per day in June 2014—the highest level of exports since 1957. Nearly all domestic crude oil exports have gone to Canada. To help protect the U.S. economy from damage caused by crude oil supply disruptions, Congress authorized the SPR in 1975. The SPR is owned by the federal government and operated by the Department of Energy (DOE). The SPR is authorized to hold up to 1 billion barrels of crude oil and has the capacity to store 727 million barrels of crude oil in salt caverns located at sites in Texas and Louisiana. According to DOE, the SPR held crude oil valued at almost $73 billion as of May 2014. From fiscal year 2000 through 2013, the federal government spent about $0.5 billion to purchase crude oil, and spent $2.5 billion for operations and maintenance of the reserve. The United States is a member of the International Energy Agency (IEA) and has agreed, along with 28 other member nations, to maintain reserves of crude oil or petroleum products equaling 90 days of net imports and to release these reserves and reduce demand during oil supply disruptions. The 90-day reserve requirement can be made up of government reserves, such as the SPR, and inventory reserves held by private industry. Under conditions prescribed by the Energy Policy and Conservation Act, as amended, the President and the Secretary of Energy have discretion to authorize the release of crude oil from the SPR to minimize significant supply disruptions. In the event of a crude oil supply disruption, the SPR can supply the market by selling stored crude oil or trading crude oil in exchange for an equal quantity of crude oil plus an additional amount as a premium to be returned to the SPR in the future. 
When crude oil is released from the SPR, it flows through commercial pipelines or on waterborne vessels to refineries, where it is converted into gasoline and other petroleum products, and then transported to distribution centers for sale to the public. Reversing a decades-long decline, U.S. crude oil production has increased in recent years. According to Energy Information Administration (EIA) data, U.S. production of crude oil reached its highest level in 1970 and generally declined through 2008, reaching a level of almost one-half of its peak. During this time, the United States increasingly relied on imported crude oil to meet growing domestic energy needs. However, recent improvements in technologies have allowed producers to extract crude oil from shale formations that were previously considered to be inaccessible because traditional techniques did not yield sufficient amounts for economically viable production. In particular, the application of horizontal drilling techniques and hydraulic fracturing—a process that injects a combination of water, sand, and chemical additives under high pressure to create and maintain fractures in underground rock formations that allow crude oil and natural gas to flow—has increased U.S. crude oil and natural gas production. Monthly domestic crude oil production has increased from an average of about 5 million barrels per day in 2008 to about 8.4 million barrels per day in April 2014, an increase of almost 68 percent. As we previously found, the growth in U.S. crude oil production has lowered the cost of some domestic crude oils. For example, prices for West Texas Intermediate (WTI) crude oil—a domestic crude oil used as a benchmark for pricing—were historically about the same as prices for Brent, an international benchmark crude oil from the North Sea between Great Britain and the European continent. However, from 2011 through June 13, 2014, the price of WTI averaged $14 per barrel lower than Brent (see fig. 1). 
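The production growth figure above follows directly from the numbers cited. As a quick arithmetic sketch (using only the EIA figures quoted in this section):

```python
# Check of the production growth cited above (EIA figures):
# about 5 million barrels per day in 2008 vs. about 8.4 million in April 2014.
production_2008 = 5.0   # million barrels per day, 2008 monthly average
production_2014 = 8.4   # million barrels per day, April 2014

percent_increase = (production_2014 - production_2008) / production_2008 * 100
print(f"Increase: {percent_increase:.0f} percent")  # -> Increase: 68 percent
```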
In 2014, the difference between these benchmark crude oil prices narrowed somewhat, and WTI averaged $101 through June 13, 2014, while Brent averaged $109. The development of U.S. crude oil production has created some challenges for crude oil transportation infrastructure because some production has been in areas with limited linkages to refining centers. According to EIA, these infrastructure constraints have contributed to discounted prices for some domestic crude oils. Much of the crude oil currently produced in the United States has characteristics that differ from historic domestic production. Crude oil is generally classified according to two parameters: density and sulfur content. Less dense crude oils are known as “light,” while denser crude oils are known as “heavy.” Crude oils with relatively low sulfur content are known as “sweet,” while crude oils with higher sulfur content are known as “sour.” As shown in figure 1, according to EIA, production of new domestic crude oil has tended to consist of light oils. Specifically, according to EIA estimates, almost all of the 1.8 million barrels per day growth in production between 2011 and 2013 consisted of lighter sweet crude oils. EIA also forecasts that lighter crude oils will make up a significant portion of production growth in 2014 and 2015—about 60 percent. Light crude oil differs from the crude oil that many U.S. refineries are designed to process. Refineries are configured to produce transportation fuels and other products (e.g., gasoline, diesel, jet fuel, and kerosene) from specific types of crude oil. Refineries use a distillation process that separates crude oil into different fractions, or interim products, based on their boiling points, which can then be further processed into final products. Many refineries in the United States are configured to refine heavier crude oils, and have therefore been able to take advantage of historically lower prices of heavier crude oils. 
For example, in 2013, the average API gravity of crude oil used at domestic refineries was 30.8 degrees, while nearly all of the increase in production in recent years has been lighter crude oil with an API gravity of 35 degrees or above. According to EIA, additional production of light crude oil over the past several years has been absorbed into the market through several mechanisms, but the capacity of these mechanisms to absorb further increases in light crude oil production may be limited in the future as follows: Reduced imports of similar grade crude oils: According to EIA, additional production of light oil in the past several years has primarily been absorbed by reducing imports of similar grade crude oils. Light crude oil imports fell from 1.7 million barrels per day in 2011 to 1 million barrels per day in 2013. There may be dwindling amounts of light crude oil imports that can be reduced in the future, according to EIA. Increased crude oil exports: As discussed above, crude oil exports have increased recently, from less than 30 thousand barrels per day in 2008 to 396 thousand barrels per day in June 2014. Continued increases in crude oil exports will depend, in part, on the extent of any relaxation of current export restrictions, according to EIA. Increased use of light crude oils at domestic refineries: Domestic refineries have increased the average gravity of crude oils that they refine. The average API gravity of crude oil used in U.S. refineries increased from 30.2 degrees in 2008 to 30.8 degrees in 2013. Continued shifts to use additional lighter crude oils at domestic refineries can be enabled by investments to relieve constraints associated with refining lighter crude oils at refineries that were optimized to refine heavier crude oils. Increased use of domestic refineries: In recent years, domestic refineries have been run more intensively, allowing the use of more domestic crude oils. 
Utilization—a measure of how intensively refineries are used that is calculated by dividing total crude oil and other inputs used at refineries by the amount refineries can process under usual operating conditions—increased from 86 percent in 2011 to 88 percent in 2013. There may be limits to further increases in utilization of refineries that are already running at high rates. The studies we reviewed and stakeholders we interviewed generally suggest some domestic crude oil prices would increase if crude oil export restrictions were removed, while consumer fuel prices could decrease, although the extent of consumer fuel price changes is uncertain and may vary by region. Studies we reviewed and most of the stakeholders we interviewed suggest that some domestic crude oil prices would increase if crude oil export restrictions were removed. As discussed above, increasing domestic crude oil production has resulted in lower prices of some domestic crude oils compared with international benchmark crude oils. Three of the studies we reviewed also said that, absent changes in crude oil export restrictions, the expected growth in crude oil production may not be fully absorbed by domestic refineries or through exports (where allowed), contributing to even wider differences in prices between some domestic and international crude oils. If export restrictions were removed, these domestic crude oils could be sold at prices closer to international prices, reducing the price differential and aligning the price of domestic crude oil with international benchmarks. While the studies we reviewed and most of the stakeholders we interviewed agree that domestic crude oil prices would increase if crude oil export restrictions were removed, stakeholders highlighted several factors that could affect the extent of price increases. 
The studies we reviewed made assumptions about these factors, and actual price implications of removing crude oil export restrictions may differ from those estimated in the studies depending on how export restrictions and market conditions evolve. Specifically, stakeholders raised the following three key uncertainties: Extent of future increases in crude oil production. As we recently found, forecasts anticipate increases in domestic crude oil production in the future, but the projections are uncertain and vary widely. Two of the studies and two stakeholders told us that, in the absence of exports, higher production of domestic light sweet crude oil would tend to increase the mismatch between such crude oils and the refining industry. In turn, one study indicated that a greater increase in production would increase the price effects of removing crude oil export restrictions. On the other hand, lower than anticipated production of such crude oil would lower potential price effects as the additional crude oil could more easily be absorbed domestically. Extent to which crude oil production increases can be absorbed. The domestic refining industry and exports to Canada have absorbed the increases in domestic crude oil production thus far, and one stakeholder told us the domestic refining industry could provide sufficient capacity to absorb additional future crude oil production. This stakeholder said that refineries have the capacity to refine another 400,000 barrels a day of light crude oil, some of which is not being used because of infrastructure or logistics constraints. The industry is planning to develop or is in the process of developing the capacity to process an additional 500,000 barrels a day of light crude oil, according to this stakeholder. The current capacity that is not being utilized plus capacity that is planned or in development would constitute a total capacity to refine 900,000 barrels per day of light crude oil. 
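The stakeholder's capacity figures above combine as follows; a minimal sketch using only the numbers quoted in this section:

```python
# Light crude oil refining capacity cited by the stakeholder above
# (all figures in barrels per day).
unused_capacity = 400_000   # existing capacity idled by infrastructure or logistics constraints
planned_capacity = 500_000  # capacity planned or in development

total_capacity = unused_capacity + planned_capacity
print(f"Potential light crude capacity: {total_capacity:,} barrels per day")
# -> Potential light crude capacity: 900,000 barrels per day
```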
To the extent that light crude oil production increases by less than this amount, the gap in prices between WTI and Brent could close in the future as increased crude oil supplies are absorbed. This would reduce the extent to which domestic crude oil prices increase if crude oil export restrictions are removed. On the other hand, some stakeholders suggested that the U.S. refining industry will not be able to keep pace with increasing U.S. light crude oil production. For example, IHS stated that refinery investments to process additional light crude oil face significant risks in the form of potentially stranded investments if export restrictions were to change, and this could result in investments not being made as quickly as anticipated. Extent to which export restrictions change. Aspects of the export restrictions could be further defined or interpreted in ways that could change the pricing dynamics of domestic crude oil markets. Recently, two companies received clarification from the Department of Commerce that condensate—a type of light crude oil—that has been processed through a distillation tower is not considered crude oil and so is not subject to export restrictions. One stakeholder stated that this may lead to more condensate exports than expected. Within the context of these uncertainties, estimates of potential price effects vary in the four studies we reviewed, as shown in table 1. Specifically, estimates in these studies of the increase in domestic crude oil prices due to removing crude oil export restrictions range from about $2 to $8 per barrel. For comparison, at the beginning of June 2014, WTI was $103 per barrel, and these estimates represent 2 to 8 percent of that price. In addition, NERA found that removing export restrictions would have no measurable effect in a case that assumes a low future international oil price of $70 per barrel in 2015 rising to less than $75 by 2035. 
According to NERA, current production costs are close to these values, so that removing export restrictions would provide little incentive to produce more light crude oil. The studies we reviewed and most of the stakeholders we interviewed suggest that consumer fuel prices, such as gasoline, diesel, and jet fuel, could decrease as a result of removing crude oil export restrictions. A decrease in consumer fuel prices could occur because they tend to follow international crude oil prices rather than domestic crude oil prices, according to the studies and most of the stakeholders. If domestic crude oil exports caused international crude oil prices to decrease, consumer fuel prices could decrease as well. Table 2 shows that the estimates of the price effects on consumer fuels vary in the four studies we reviewed. Price estimates range from a decrease of 1.5 to 13 cents per gallon. These estimates represent 0.4 to 3.4 percent of the average U.S. retail gasoline price at the beginning of June 2014. In addition, NERA found that removing export restrictions would have no measurable effect on consumer fuel prices in a case that assumes a low future world crude oil price. The effect of removing crude oil export restrictions on domestic consumer fuel prices depends on several uncertain factors. First, it depends on the extent to which domestic versus international crude oil prices determine the domestic price of consumer fuels. Recent research examining the relationship between domestic crude oil and gasoline prices concluded that low domestic crude oil prices in the Midwest during 2011 did not result in lower gasoline prices in that region. This research supports the assumption made in all of the studies we reviewed that, to some extent, higher prices for some domestic crude oils resulting from the removal of export restrictions would not be passed on to consumer fuel prices. 
However, some stakeholders told us that this may not always be the case and that more recent or detailed data could show that lower prices for some domestic crude oils have influenced consumer fuel prices. Second, the extent to which domestic consumer fuel prices could decline also depends on how the global crude oil market responds to the domestic crude oil entering the market. In this regard, stakeholders highlighted several uncertainties. In particular, the response of the Organization of the Petroleum Exporting Countries (OPEC) could have a large influence on any international crude oil price changes. The projections in the RFF, IHS, and ICF International studies assumed that OPEC would not respond by attempting to counterbalance the effect of increased U.S. exports by reducing its countries’ exports. However, OPEC could seek to maintain international crude oil prices by pulling crude oil from the global market. In this case, the international crude oil price would not be affected by removing export restrictions, and consumer fuel prices would not decline. On the other hand, OPEC could increase production to maintain its large market share, which would push international crude oil prices and consumer fuel prices downward. NERA examined two alternative OPEC response cases, and found that gasoline prices would not generally be affected if OPEC reduces production, and that consumer fuel prices would decrease further if OPEC maintains its production in the face of lower global crude oil prices. In addition, one stakeholder questioned whether international crude oil prices would be affected by U.S. crude oil exports. Given the size of the global crude oil market, this stakeholder suggested that U.S. exports would have little to no effect on international crude oil prices. 
Third, two of the stakeholders we interviewed suggested that there could be important regional differences in consumer fuel price implications, and that prices could increase in some regions—particularly the Midwest and the Northeast—due to changing transportation costs and potential refinery closures. For example, two stakeholders told us that because of requirements to use more expensive U.S.-built, -owned, and -operated ships to move crude oil between U.S. ports, allowing exports could enable some domestic crude oil producers to ship U.S. crude oil at lower cost to refineries in foreign countries. Specifically, representatives of one refiner told us that, if export restrictions were removed, they could ship oil to their refineries in Europe at a lower cost than delivering the same oil to a refinery on the U.S. East Coast. According to another stakeholder, this could negatively affect the ability of some domestic refineries to compete with foreign refineries. Additionally, because refineries are currently benefiting from low domestic crude oil prices, some studies and stakeholders noted that refinery margins could be reduced if removing export restrictions increased domestic crude oil prices. As a result, some refineries could face an increased risk of closure, especially those located in the Northeast. As EIA reported in 2012, refinery closures in the Northeast could be associated with higher consumer fuel prices and possibly higher price volatility. However, according to one stakeholder, domestic refiners still have a significant cost advantage in the form of less expensive natural gas, which is an important energy source for many refineries. For this and other reasons, one stakeholder told us they did not anticipate refinery closures as a result of removing export restrictions. 
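The percent-of-price figures cited earlier for crude oil and consumer fuels can be reproduced from the per-barrel and per-gallon estimates given in the text. The sketch below is a quick sanity check; the retail gasoline price is back-solved from the stated percentages and is an inferred value, not one stated in the report.

```python
# Quick sanity check of the price-effect percentages cited in the studies,
# using the June 2014 benchmark figures stated in the report.

wti_june_2014 = 103.0  # WTI crude price at the beginning of June 2014, $/barrel

# Estimated increase in domestic crude prices if export restrictions are removed
crude_low, crude_high = 2.0, 8.0  # $/barrel, across the four studies

pct_low = 100 * crude_low / wti_june_2014    # ~1.9%, reported as "2 percent"
pct_high = 100 * crude_high / wti_june_2014  # ~7.8%, reported as "8 percent"

# Consumer fuel estimates: a decrease of 1.5 to 13 cents per gallon, said to be
# 0.4 to 3.4 percent of the average U.S. retail gasoline price. Back-solving
# implies a retail price near $3.75 per gallon (inferred, not stated in report).
implied_retail_cents = 1.5 / 0.004  # ~375 cents per gallon

print(round(pct_low, 1), round(pct_high, 1), round(implied_retail_cents))
```

The high end gives a similar implied retail price (13 / 0.034 ≈ 382 cents), so the two sets of figures are mutually consistent.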
The studies we reviewed and stakeholders we interviewed generally suggest that removing crude oil export restrictions would increase domestic crude oil production and may affect the environment and the economy. Studies we reviewed and stakeholders we interviewed generally agree that removing crude oil export restrictions would increase domestic crude oil production. Monthly domestic crude oil production has increased by almost 68 percent since 2008—from an average of about 5 million barrels per day in 2008 to 8.3 million barrels per day in April 2014. Even with current crude oil export restrictions, given various scenarios, EIA projects that domestic production will continue to increase and could reach 9.6 million barrels per day by 2019. If export restrictions were removed, according to the four studies we reviewed, the increased prices of domestic crude oil are projected to lead to further increases in crude oil production. Projections of this increase varied in the studies we reviewed—from a low of an additional 130,000 barrels per day on average between 2015 and 2035, according to the ICF International study, to a high of an additional 3.3 million barrels per day on average between 2015 and 2035 in NERA’s study. This is equivalent to 1.5 percent to almost 40 percent of production in April 2014. One stakeholder we spoke with told us that, although domestic demand for crude oil is not expected to change, production will rise as a result of increased international demand, primarily from Asia. For example, according to EIA, India was the fourth-largest consumer of crude oil and petroleum products in the world in 2013, and the country’s dependence on imported crude oil continues to grow. Another stakeholder stated that removing export restrictions could lead to increased local and regional opposition to crude oil production if the crude oil was primarily for export, which could affect domestic production. 
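The production figures above follow from simple arithmetic on the barrels-per-day numbers in the text; the sketch below reproduces them. Because the 2008 figure is rounded in the report, the computed growth rate comes out slightly below the stated "almost 68 percent," and the projected-production shares land near the report's "1.5 percent to almost 40 percent" range.

```python
# Reproducing the production-growth percentages from the figures in the text.

prod_2008 = 5.0   # million barrels/day, 2008 average (rounded in the report)
prod_2014 = 8.3   # million barrels/day, April 2014

growth_pct = 100 * (prod_2014 - prod_2008) / prod_2008
# 66% from these rounded inputs; the report's "almost 68 percent"
# presumably reflects unrounded 2008 production data.

# Projected additional production from removing export restrictions,
# as a share of April 2014 production:
icf_share = 100 * 0.130 / prod_2014  # ICF: +130,000 barrels/day -> ~1.6%
nera_share = 100 * 3.3 / prod_2014   # NERA: +3.3 million barrels/day -> ~39.8%

print(round(growth_pct), round(icf_share, 1), round(nera_share, 1))
```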
Two of the studies we reviewed and most stakeholders we spoke with stated that the increased crude oil production that would result from removing the restrictions on crude oil exports may affect the environment. In September 2012, we found that crude oil development may pose certain inherent environmental and public health risks; however, the extent of the risk is unknown, in part, because the severity of adverse effects depends on various location- and process-specific factors, including the location of future shale oil and gas development and the rate at which it occurs, as well as geology, climate, business practices, and regulatory and enforcement activities. The stakeholders who raised concerns identified the following risks related to crude oil production, about which GAO has reported in the past: Water quality and quantity: Increased crude oil production, particularly from shale, could affect the quality and quantity of surface and groundwater sources, but the magnitude of such effects is unknown. In October 2010, we found that water is needed for a number of oil shale development activities, including constructing facilities, drilling wells, generating electricity for operations, and reclamation of drill sites. In 2012, we found that shale oil and gas development may pose a risk to surface water and groundwater because withdrawing water from streams, lakes, and aquifers for drilling and hydraulic fracturing could adversely affect water resources. For example, we found that groundwater withdrawal could affect the amount of water available for other uses, including public and private water supplies. One of the stakeholders we interviewed suggested that water withdrawal is already an important consideration, particularly for areas experiencing drought. 
For example, the stakeholder noted that crude oil production and associated water usage already has implications for the Edwards Aquifer, a groundwater system serving the agricultural, industrial, recreational, and domestic needs of almost two million users in south central Texas. In addition, removing export restrictions may affect water quality. Another stakeholder told us that allowing crude oil exports would lead to more water pollution as a result of increased production through horizontal drilling. Air quality: Increased crude oil production may increase greenhouse gases and other air emissions because the use of consumer fuels would increase, and also because the crude oil production process often involves the direct release of pollutants into the atmosphere (venting) or burning fuels (flaring). Two stakeholders told us that venting and flaring has escalated in North Dakota, in part because regulatory oversight and infrastructure have not kept pace with the recent surge in crude oil production in the state. In January 2014, the North Dakota Industrial Commission reported that nearly 30 percent of all natural gas produced in the state is flared. According to a 2013 report from Ceres, flaring in North Dakota in 2012 resulted in greenhouse gas emissions equivalent to adding 1 million cars to the road. Another stakeholder told us that allowing crude oil exports would lead to more air pollution as a result of increased production through horizontal drilling and hydraulic fracturing. RFF estimated the potential environmental effect of removing export restrictions, estimating that increases in crude oil production and consumption would increase carbon dioxide emissions worldwide by almost 22 million metric tons per year. By comparison, U.S. emissions from energy consumption totaled 5,393 million metric tons in 2013 according to EIA. 
NERA estimated that increased crude oil production and use of fossil fuels would increase greenhouse gas emissions by about 12 million metric tons of carbon dioxide equivalents per year on average from 2015 through 2035. Transportation challenges: Increased crude oil production could exacerbate transportation challenges. In March 2014, we found that domestic and Canadian crude oil production has created some challenges for U.S. crude oil transportation infrastructure. Some of the growth in crude oil production has been in areas with limited transportation to refining centers. To address this challenge, refiners have relied on rail to transport crude oil. According to data from the Surface Transportation Board, rail moved about 236,000 carloads of crude oil in 2012, which is 24 times more than the roughly 9,700 carloads moved in 2008. As we recently found, as the movement of crude oil by rail has increased, incidents such as spills and fires involving crude oil trains have also increased—from 8 incidents in 2008 to 119 incidents in 2013, according to Department of Transportation data. Some stakeholders told us that removing export restrictions would increase the risk of crude oil spills by rail and other modes of transportation such as tankers. On the other hand, one stakeholder suggested that removing export restrictions could reduce the amount of crude oil transported by rail, in some instances, since the most economic way to export crude oil is by pipeline to a tanker. As a result, the number of rail accidents involving crude oil spills could decrease. The studies we reviewed suggest that removing crude oil export restrictions would increase the size of the economy. Three of the studies project that removing export restrictions would lead to additional investment in crude oil production and increases in employment. This growth in the oil sector would—in turn—have additional positive effects in the rest of the economy. 
For example, NERA projects an average of 230,000 to 380,000 workers would be removed from unemployment through 2020 if export restrictions were eliminated in 2015. These employment benefits largely disappear if export restrictions are not removed until 2020 because by then the economy will have returned to full employment. Potential implications for investment, public revenue, and trade are as follows: Investments: According to one of the studies we reviewed, removing export restrictions may lead to more investment in crude oil exploration and production, but this investment could be somewhat offset by less investment in the refining industry. As discussed previously, removing export restrictions is expected to increase domestic crude oil production. Private investment in drilling rigs, engineering services, and transportation and logistics facilities, for example, is needed to increase domestic crude oil production. According to IHS, this will directly benefit industries such as machinery, fabricated metals, steel, chemicals, and engineering services. At the same time, removing export restrictions may decrease investment in the refining industry because the industry would not need extensive additional investment to accommodate lighter crude oils. For example, one stakeholder told us that, under current export restrictions, refining additional light crude oils may require capital investment to remove processing constraints at refineries that are designed to process heavier crude oils. Officials from one refining company told us that they had invested a significant amount of capital to refine lighter oils. For example, the refinery installed two new distillation towers to process lighter crude oils at a cost of $800 million. Such investments may not be necessary if export restrictions were removed. Public revenue: Two of the studies we reviewed suggest that removing export restrictions would increase government revenues, although the estimates of the increase vary. 
One study estimated that total government revenue would increase by a combined $1.4 trillion from 2016 through 2030, while another study estimated that U.S. federal, state, and local tax receipts combined with royalties from drilling on federal lands could increase by an annual average of $3.9 to $5.7 billion from 2015 through 2035. Trade: According to the studies we reviewed, removing export restrictions would contribute to further declines in net petroleum (i.e., crude oil, consumer fuels, and other petroleum products) imports and reduce the U.S. trade deficit. Three of the studies we reviewed estimated the effect of removing export restrictions on net petroleum imports, with ICF projecting a decline in net imports of about 100,000 to 300,000 barrels per day; IHS projecting a decline, but not providing a specific estimate; and NERA projecting a decline of about 0.6 to 3.2 million barrels per day. Further, according to one study, removing export restrictions could also improve the U.S. trade balance because light sweet crude oils are usually priced higher than heavy, sour crude oils. One study estimated that removing export restrictions could improve the trade balance (narrow the U.S. trade deficit) by $8 to $15 billion per year on average from 2015 through 2035. Another study estimated that removing crude oil export restrictions would improve the trade balance by $72 to $101 billion per year from 2016 through 2030. Changing market conditions—most importantly the significant increase in domestic production of crude oil from shale—have implications for the role of the SPR, including its appropriate size, location, and composition. DOE has taken some steps to reexamine the location and composition of the SPR in light of these changes, but has not recently reexamined its size. Recent and expected changes in crude oil markets have important implications for the role of the SPR, including its size, location, and composition. 
DOE has recognized that recent increases in domestic crude oil production and corresponding reductions in crude oil imports have changed how crude oil is transported around the United States, and that these changes carry potential implications for the operation and maintenance of the SPR. As discussed above, removing crude oil export restrictions would be expected to increase domestic crude oil production and contribute to further declines in net imports. Our review of DOE documents, prior GAO work, and discussions with stakeholders highlight three primary implications for the SPR. Size: Increased domestic crude oil production and falling net petroleum imports may affect the ideal size of the SPR—how much the SPR should hold to balance the benefits of protecting the economy from damage against the costs of holding the reserves. One measure of the economy’s vulnerability to oil supply disruptions is net petroleum imports—imports minus exports. Net petroleum imports have declined from a peak of 60 percent of consumption in 2005 to about 30 percent in the first half of 2014. In 2006, net imports were expected to grow, increasing the country’s reliance on foreign crude oil. However, imports have declined and, according to EIA’s most recent forecast, are expected to remain well below 2005 import levels into the future. (See fig. 3.) As discussed above, removing crude oil export restrictions would be expected to contribute to additional decreases in net petroleum imports in the future. To the extent that changes in net imports reflect changes in vulnerability, these and other changes in the economy may have reduced the nation’s vulnerability to supply disruptions. 
For example, a recent report by the President’s Council of Economic Advisers suggests that decreased domestic petroleum demand, increased domestic crude oil production, more fuel-efficient vehicles, and increased use of biofuels have each contributed to reducing the vulnerability of the nation’s economy to international crude oil supply disruptions. Although international crude oil supply and price volatility remains a risk, the report suggests that additional reductions in net petroleum imports could reduce those risks in the future. In addition, the SPR currently holds oil in excess of international obligations. As a member of the IEA, the United States is required to maintain reserves of crude oil or petroleum products equaling at least 90 days of net imports, which it does with a combination of public and private reserves. According to the IEA, as of May 2014, the SPR held 106 days of net imports, and private reserves held an additional 141 days for a total of 247 days—well above the 90 days required by the IEA. In light of these factors, some of the stakeholders we interviewed raised questions about whether such a large SPR is needed in the future. For example, one stakeholder indicated that SPR crude oil is surplus and no longer needed to protect the economy. However, other stakeholders highlighted the importance of maintaining the SPR. For example, one stakeholder said that the SPR should be maintained at the current level, and another said that the SPR serves an important “energy insurance” service. DOE officials and one other stakeholder highlighted that, in addition to net imports, there are other factors that may affect the appropriate size of the SPR. Location: According to DOE, changes in how crude oil is transported throughout the United States and in the existing infrastructure surrounding SPR facilities have implications for the location of the SPR. 
Crude oil in the SPR is stored along the Gulf Coast, where it can take advantage of being in close proximity to a major refining center, as well as distribution points for tankers, barges, and pipelines that can carry crude oil from the SPR to refineries in other regions of the country. Most of the system of crude oil pipelines in the United States was constructed in the 1950s, 1960s, and 1970s to accommodate the needs of the refining sector and demand centers at the time. According to DOE officials, the existing infrastructure was designed primarily to move crude oil from the southern United States to the North. The SPR has historically been able to rely on this distribution system to reach a large portion of the nation’s refining capacity. But, with increases in crude oil production in the Northern U.S. and imports of crude oil from Canada, the distribution system has changed to increase crude oil flows south to the Gulf Coast. Such changes include new pipeline construction and expansions, flow reversals in existing pipelines, and increased utilization of terminals and marine facilities. Such changes may make it more difficult to move crude oil from the SPR to refineries in certain regions of the United States, such as the Midwest, where almost 20 percent of the nation’s refining capacity is located, according to EIA data. Some stakeholders raised questions about the location of the SPR. One stakeholder also suggested that holding SPR crude oil in the western United States may better ensure access to crude oil in the case of a disruption, since the West has no pipeline connectivity to the Gulf Coast. According to DOE, recent changes to crude oil distribution in the United States could have significant implications for the operation and maintenance of the SPR. Composition: In 2006, we reported that the type of crude oil in the SPR was not compatible with all U.S. refineries. We reported that some U.S. 
refineries processed crude oils heavier than those stored in the SPR. We found that in the event of a disruption in the supply of heavy crude oil, refineries configured to use heavy crude oil would not be able to efficiently refine crude oil from the SPR and would likely reduce production of some petroleum products. As we reported, in such instances, prices for heavy crude oil products could increase, reducing the SPR’s effectiveness to limit economic damage. Refinery officials we spoke with noted that the SPR should contain heavier crude oils that domestic refineries could refine in the event of a supply disruption. Since our 2006 report, domestic production of light sweet crude oil has increased. According to EIA, roughly 96 percent of the 1.8 million-barrel per day growth in production from 2011 to 2013 consisted of light sweet grades with API gravity of 40 or above. As a result, imports of light crude oils have declined, and U.S. reliance on imported heavy crude oil has increased from 37 percent of total imports in 2008 to 50 percent of total imports in 2013, as shown in figure 4. However, DOE officials raised concerns about the prospect of storing additional heavy crude oil in the SPR. According to DOE officials and a 2010 report by DOE, storing heavy crude oil in the SPR would limit the SPR’s ability to respond to nonheavy crude oil disruptions, such as a loss of Middle East medium sour crude oils. In addition, storing more heavy crude would require infrastructure improvements. At the same time, DOE officials also stated that, based on recent conversations with refinery officials, no U.S. refineries would have difficulty using SPR crude oils. Another issue raised by some stakeholders we interviewed is that the SPR holds primarily crude oil, and some stakeholders told us that holding additional consumer fuels could be beneficial. 
Many recent economic risks associated with supply disruptions have originated from the refining and distribution sectors rather than crude oil supplies. DOE has taken some steps to assess the appropriate location and composition of the SPR in view of changing market conditions, but has not recently reexamined its size. We previously found that federal programs should be reexamined if there have been significant changes in the country or the world that relate to the reason for initiating the program. In that report, we identified a set of reexamination criteria that, when taken together, illustrate the issues that can be addressed through a systematic reexamination process. We found that many federal programs and policies were designed decades ago to respond to trends and challenges that existed at the time of their creation. Given fiscal constraints that we are likely to face for years to come, reexamination may be essential to addressing newly emergent needs without unduly burdening future generations of taxpayers. DOE has taken some steps to reexamine how recent changing market conditions could affect the location and composition of the SPR as follows: In March 2014, DOE conducted a test sale of SPR crude oil to evaluate the SPR’s ability to draw down and distribute SPR crude oil through multiple pipeline and terminal delivery points within one of its distribution systems. DOE officials told us they were reviewing the results of the test sale, including data on the movement of crude oil through the system. DOE officials also told us they are working to establish a Northeast Regional Refined Petroleum Product Reserve in New York Harbor and New England to store refined consumer fuels. Although the northeast reserve will not store crude oil, it will be considered part of the SPR and hold 1 million barrels of gasoline at a cost of $200 million. 
DOE officials told us that they are conducting a regional fuel resiliency study that will provide insights into whether there is a need for additional regional product reserves and, if so, where these reserves should be located and what their capacity should be. We did not assess this effort because the study was ongoing at the time of our review. DOE finalized an assessment in 2010 of the compatibility of crude oil stored in the SPR with the U.S. petroleum refining industry. DOE decided against storing heavy crude oil in the SPR at the time, but committed to revisiting the option of storing heavy crude oil in the future. However, DOE has not recently reexamined the appropriate size of the SPR. DOE last issued a strategic plan for the SPR in May 2004. The plan outlined the mission, goals, and near-term and long-term objectives for the SPR. In 2006, we recommended that the Secretary of Energy reexamine the appropriate size of the SPR. In 2007, while DOE was planning to expand the SPR to its authorized size of 1 billion barrels, the Administration reevaluated the need for an SPR expansion and decided that the current level was adequate. In responding to our recommendation, DOE stated that its reexamination had taken the form of more “actionable items,” including not requesting expansion funding in its 2011 budget and canceling and redirecting prior years’ expansion funding to general operations of the SPR. Officials from DOE’s Office of Petroleum Reserves told us that the last time they conducted a comprehensive reexamination of the size of the SPR was in 2005. At that time, DOE’s comprehensive study examined the costs and benefits of alternative SPR sizes. Officials told us that they have not conducted a comprehensive reexamination since 2005 because the SPR only recently met the IEA requirement to maintain 90 days of imports. However, the IEA requirement is for total reserves, including those held by the government and private reserves. 
As shown in figure 5, such reserves in the United States are currently in excess of the nation’s international obligations and, in some scenarios, are expected to remain in excess in the future. In July 2014, DOE’s Office of Inspector General recommended that the Office of Fossil Energy perform a long-range strategic review of the SPR to ensure it is best configured to respond to the current and future needs of the United States. DOE concurred with the recommendation. DOE stated that it expected to determine the appropriate course of action by August 2014, and according to DOE, it has initiated a process to conduct such a review. The SPR currently holds oil valued at over $73 billion, and without a current reexamination of the SPR’s size, DOE cannot be assured that the SPR is sized appropriately. The SPR may therefore be at risk of holding excess crude oil. In addition, DOE officials told us that SPR infrastructure is aging and will need to be replaced soon. Conducting a reexamination of the size of the SPR could also help inform DOE’s decisions about how or whether to replace existing infrastructure. If DOE were to assess the appropriate size of the SPR and find that it held excess crude oil, the excess oil could be sold to fund other national priorities. For example, in 1996, SPR crude oil was sold to reduce the federal budget deficit and offset other appropriations. If, for example, DOE found that 90 days of imports was an appropriate size for the SPR, it could sell crude oil worth about $10 billion. Increasing domestic crude oil production and declines in consumption and crude oil imports have profoundly affected U.S. crude oil markets over the last decade. These changes can have important implications for national energy policies and programs. The SPR is a significant national asset, and it is important for federal agencies tasked with overseeing such assets to examine how, if at all, changing conditions affect their programs. 
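The reserve arithmetic cited in this section can be checked with the days-of-imports and valuation figures stated in the text. In the sketch below, the pro-rata valuation is a simplifying assumption of ours; the report's "about $10 billion" rests on actual barrel counts and prices.

```python
# Back-of-the-envelope check of the SPR reserve figures cited in this section.

iea_requirement = 90   # days of net imports IEA members must hold in reserve
spr_days = 106         # days of net imports held in the SPR (May 2014, per IEA)
private_days = 141     # days of net imports held in private reserves

total_days = spr_days + private_days  # 247 days, well above the 90-day floor

# If 106 days of imports corresponds to the SPR's ~$73 billion of crude oil,
# drawing down to exactly 90 days would, on a simple pro-rata basis, free
# oil worth roughly $11 billion -- broadly consistent with the report's
# "about $10 billion."
spr_value_bn = 73.0
excess_value_bn = spr_value_bn * (spr_days - iea_requirement) / spr_days

print(total_days, round(excess_value_bn, 1))
```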
DOE has recently taken several steps to reexamine various aspects of the SPR in light of these changes, including its location and composition; however, DOE’s most recent comprehensive examination of the appropriate size of the SPR was conducted in 2005, when the general expectation was that the country would increasingly rely on foreign crude oil. At about that time, however, it began to become clear that this would not be the case. Removing export restrictions would be expected to lead to further decreases in net imports that would further affect the role of the SPR. Without a reexamination that considers whether a smaller or larger SPR is in the national interest in light of current and expected future market conditions, DOE cannot be assured that the SPR is holding an appropriate amount of crude oil, and its ability to make appropriate decisions regarding maintenance of the SPR could be compromised. In view of recent changes in market conditions and in tandem with DOE’s ongoing activities to assess the content, connectivity, and other aspects of the SPR, we recommend that the Secretary of Energy undertake a comprehensive reexamination of the appropriate size of the SPR in light of current and expected future market conditions. We provided a draft of this report to DOE and Commerce for their review and comment. The agencies provided technical comments, which we incorporated as appropriate. In its written comments, reproduced in appendix III, DOE concurred in principle with our recommendation. However, DOE stated that conducting a study of only the size of the SPR would be too narrow in scope and would not address other issues relevant to the SPR carrying out its mission of providing energy security to the United States. DOE stated that a broader, long-range review of the SPR is needed. We agree that such a review would be beneficial. 
We do not recommend that DOE undertake an isolated reexamination of the size of the SPR, but that such a reexamination be conducted in tandem with DOE’s other activities to assess the SPR and we clarified our recommendation accordingly. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Energy and Commerce, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. We identified four studies that examined the price and other implications of removing crude oil export restrictions. These four studies are as follows: Resources for the Future (RFF). Crude Behavior: How Lifting the Export Ban Reduces Gasoline Prices in the United States. Washington, D.C.: Resources for the Future, February 2014, revised March 2014. ICF International and EnSys Energy (ICF International). The Impacts of U.S. Crude Oil Exports on Domestic Crude Production, GDP, Employment, Trade, and Consumer Costs. Washington, D.C.: ICF Resources, March 31, 2014. IHS. US Crude Oil Export Decision: Assessing the impact of the export ban and free trade on the US economy. Englewood, Colorado: IHS, 2014. NERA Economic Consulting (NERA). Economic Benefits of Lifting the Crude Oil Export Ban. Washington, D.C.: NERA Economic Consulting, September 9, 2014. 
Table 3 describes these studies and several key assumptions, Table 4 summarizes their findings regarding prices, and Table 5 summarizes their findings regarding other implications of removing crude oil export restrictions. In addition to the individual named above, Christine Kehr (Assistant Director), Philip Farah, Quindi Franco, Cindy Gilbert, Taylor Kauffman, Celia Rosario Mendive, Alison O’Neill, and Barbara Timmerman made key contributions to this report.

Petroleum Refining: Industry’s Outlook Depends on Market Changes and Key Environmental Regulations. GAO-14-249. Washington, D.C.: March 14, 2014.

Oil and Gas: Information on Shale Resources, Development, and Environmental and Public Health Risks. GAO-12-732. Washington, D.C.: September 5, 2012.

Energy Markets: Estimates of the Effects of Mergers and Market Concentration on Wholesale Gasoline Prices. GAO-09-659. Washington, D.C.: June 12, 2009.

Strategic Petroleum Reserve: Issues Regarding the Inclusion of Refined Petroleum Products as Part of the Strategic Petroleum Reserve. GAO-09-695T. Washington, D.C.: May 12, 2009.

Energy Markets: Refinery Outages Can Impact Petroleum Product Prices, but No Federal Requirements to Report Outages Exist. GAO-09-87. Washington, D.C.: October 7, 2008.

Energy Markets: Increasing Globalization of Petroleum Products Markets, Tightening Refining Demand and Supply Balance, and Other Trends Have Implications for U.S. Energy Supply, Prices, and Price Volatility. GAO-08-14. Washington, D.C.: December 20, 2007.

Strategic Petroleum Reserve: Available Oil Can Provide Significant Benefits, but Many Factors Should Influence Future Decisions about Fill, Use, and Expansion. GAO-06-872. Washington, D.C.: August 24, 2006.

Motor Fuels: Understanding the Factors That Influence the Retail Price of Gasoline. GAO-05-525SP. Washington, D.C.: May 2, 2005.

Alaskan North Slope Oil: Limited Effects of Lifting Export Ban on Oil and Shipping Industries and Consumers. GAO/RCED-99-191. Washington, D.C.: July 1, 1999.
Almost 4 decades ago, in response to the Arab oil embargo and the recession it triggered, Congress passed legislation restricting crude oil exports and establishing the SPR to release oil to the market during supply disruptions and protect the U.S. economy from damage. After decades of generally falling U.S. crude oil production, technological advances have contributed to increasing U.S. production. Meanwhile, net crude oil imports—imports minus exports—have declined from a peak of about 60 percent of consumption in 2005 to 30 percent in the first 5 months of 2014. According to Energy Information Administration forecasts, net imports are expected to remain well below 2005 levels into the future. GAO was asked to provide information on the implications of removing crude oil export restrictions. This report examines what is known about (1) price implications of removing crude oil export restrictions; (2) other key potential implications; and (3) implications of recent changes in market conditions for the SPR. GAO reviewed four studies on crude oil exports, including two sponsored by industry, and summarized the literature and views of a nonprobability sample of stakeholders including academic, industry, and other experts. The studies GAO reviewed and stakeholders interviewed suggest that removing crude oil export restrictions is likely to increase domestic crude oil prices but decrease consumer fuel prices. Prices for some U.S. crude oils are lower than international prices—for example, one benchmark U.S. crude oil averaged $101 per barrel in 2014, while a comparable international crude oil averaged $109. Studies estimate that U.S. crude oil prices would increase by about $2 to $8 per barrel—bringing them closer to international prices. At the same time, studies and some stakeholders suggest that U.S.
prices for gasoline, diesel, and other consumer fuels follow international prices, so allowing crude oil exports would increase world supplies of crude oil, which is expected to reduce international prices and, subsequently, lower consumer fuel prices. Some stakeholders told GAO that there could be important regional differences in the price implications of removing crude oil export restrictions. Some stakeholders cautioned that estimates of the implications of removing export restrictions are uncertain due to several factors, such as the extent of U.S. crude oil production increases, how readily U.S. refiners are able to absorb such increases, and how the global crude oil market responds to increasing U.S. production. The studies GAO reviewed and stakeholders interviewed generally suggest that removing crude oil export restrictions may also have the following implications: Crude oil production. Removing export restrictions would increase domestic production—8 million barrels per day in April 2014—because of increasing domestic crude oil prices. Estimates range from an additional 130,000 to 3.3 million barrels per day on average from 2015 through 2035. Environment. Additional crude oil production may pose risks to the quality and quantity of surface and groundwater sources; increase greenhouse gas and other emissions; and increase the risk of spills from crude oil transportation. The economy. Removing export restrictions is expected to increase the size of the economy, with implications for employment, investment, public revenue, and trade. For example, removing restrictions is expected to contribute to further declines in net crude oil imports, reducing the U.S. trade deficit. Changing market conditions have implications for the size, location, and composition of the Department of Energy’s (DOE) Strategic Petroleum Reserve (SPR). In particular, increased domestic crude oil production and falling net imports may affect the ideal size of the SPR.
Removing export restrictions is expected to contribute to additional decreases in net imports in the future. As a member of the International Energy Agency, the United States is required to maintain public and private reserves of at least 90 days of net imports but, as of May 2014, the SPR held reserves of 106 days—worth about $73 billion—and private industry held reserves of 141 days. DOE has taken some steps to assess the implications of changing market conditions on the location and composition of the SPR but has not recently reexamined its size. GAO has found that agencies should reexamine their programs if conditions change. Without such a reexamination, DOE cannot be assured that the SPR is sized appropriately and risks holding excess crude oil that could be sold to fund other national priorities. In view of changing market conditions and in tandem with activities to assess other aspects of the SPR, GAO recommends that the Secretary of Energy reexamine the size of the SPR. In commenting on a draft of this report, DOE concurred with GAO's recommendation.
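The 90-day IEA obligation discussed above is a simple days-of-coverage ratio: reserve barrels divided by daily net imports. A minimal sketch of that calculation follows; the barrel volumes used are illustrative assumptions chosen only to show the mechanics, not figures from this report.

```python
# Days-of-coverage metric underlying the IEA 90-day stockholding requirement:
#   coverage_days = reserve_barrels / net_imports_per_day
# The volumes below are illustrative assumptions, not data from this report.

def days_of_coverage(reserve_barrels: float, net_imports_per_day: float) -> float:
    """Return how many days of net imports a reserve could replace."""
    return reserve_barrels / net_imports_per_day

# Assumed example: a 700-million-barrel reserve against net imports of
# 6.6 million barrels per day.
coverage = days_of_coverage(700e6, 6.6e6)
print(f"Coverage: {coverage:.0f} days; meets 90-day minimum: {coverage >= 90}")
```

The sketch also makes the report's central dynamic visible: with the reserve held fixed, falling net imports increase the days of coverage, which is why declining imports raise the question of whether the SPR is larger than needed.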
In September 1993, the National Performance Review called for an overhaul of DOD’s temporary duty (TDY) travel system. In response, DOD created the DOD Task Force to Reengineer Travel to examine the travel process. In January 1995, the task force issued the Report of the Department of Defense Task Force to Reengineer Travel. On December 13, 1995, the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Under Secretary of Defense (Comptroller)/Chief Financial Officer issued a memorandum, “Reengineering Travel Initiative,” establishing the PMO-DTS to acquire travel services that would be used DOD-wide. In a 1997 report to the Congress, the DOD Comptroller pointed out that the existing DOD TDY travel system was never designed to be an integrated system. Furthermore, the report stated that because there was no centralized focus on the department’s travel practices, the travel policies were issued by different offices and the process had become fragmented and “stovepiped.” The report further noted that there was no vehicle in the current structure to overcome these deficiencies, as no one individual within the department had specific responsibility for management control of the TDY travel system. To address these concerns, the department awarded a firm fixed-price, performance-based services contract in May 1998. Under the terms of the contract, the contractor was to start deploying a travel system and to begin providing travel services for approximately 11,000 sites worldwide, within 120 days of the effective date of the contract, completing deployment approximately 38 months later. Our reports and testimonies related to DTS have highlighted various management challenges that have confronted DOD in attempting to make DTS the standard end-to-end travel system for the department. 
The issues we have reported on include underutilization of DTS, weaknesses in DTS’s requirements management and system testing practices, and the adequacy of the economic analysis. These reported weaknesses are summarized below. DTS underutilization. Our January 2006 and September 2006 reports noted the challenge facing the department in attaining the anticipated DTS utilization. More specifically, as discussed in our September 2006 report, we found that the department did not have reasonable quantitative metrics to measure the extent to which DTS was actually being used. The reported DTS utilization was based on a DTS Voucher Analysis Model developed in calendar year 2003 from airline ticket and voucher count data reported by the military services and defense agencies; however, those data were not verified or validated, and PMO-DTS officials acknowledged that the model had not been completely updated with actual data as DTS continued to be implemented at the 11,000 sites. At the time, we found that the Air Force was the only military service that submitted monthly metrics to the PMO-DTS officials for use in updating the DTS Voucher Analysis Model. Rather than reporting utilization based on individual site system utilization data, DOD relied on outdated information in the reporting of DTS utilization to DOD management and the Congress. We have previously reported that best business practices indicate that a key factor of project management and oversight is the ability to effectively monitor and evaluate a project’s actual performance against what was planned.
In order to perform this critical task, best business practices require the adoption of quantitative metrics to help measure the effectiveness of a business system implementation and to continually measure and monitor results, such as system utilization. The lack of accurate and pertinent utilization data hindered management’s ability to monitor its progress toward the DOD vision of DTS as the standard travel system as well as to provide consistent and accurate data to Congress. DTS’s reported utilization rates for the period October 2005 through April 2006 averaged 53 percent for the Army, 30 percent for the Navy, and 39 percent for the Air Force. Because the PMO-DTS was unable to identify the total number of travel vouchers that should have been processed through DTS (the total universe of travel vouchers), we reported that these utilization rates may have been over- or understated. PMO-DTS program officials confirmed that the reported utilization data were not based on complete data because the department did not have comprehensive information to identify the universe, or total number, of travel vouchers that should be processed through DTS. PMO-DTS and DTS military service officials agreed that the actual DTS utilization rate should be calculated by comparing actual vouchers processed in DTS to the total universe of vouchers that should be processed in DTS. The universe would exclude those travel vouchers that could not be processed through DTS, such as those related to permanent change of station travel. The underutilization of DTS also adversely affected the estimated savings. As discussed in our September 2005 testimony, there were at least 31 legacy travel systems operating within the department at that time.
The testimony recognized that some of the existing travel systems, such as the Integrated Automated Travel System, could not be completely eliminated because the systems performed other functions, such as permanent change of station travel claims that DTS could not process. However, in other cases, the department was spending funds to maintain duplicative systems that performed the same function as DTS. Since these legacy systems were not owned and operated by DTS, the PMO-DTS did not have the authority to discontinue their operation. We have previously stated that this issue must be addressed from a departmentwide perspective. Further, because of the continued operation of the legacy systems at locations where DTS had been fully deployed, DOD components were paying the Defense Finance and Accounting Service (DFAS) higher processing fees for processing manual travel vouchers as opposed to processing the travel vouchers electronically through DTS. According to an April 13, 2005, memorandum from the Assistant Secretary of the Army (Financial Management and Comptroller), DFAS was charging the Army $34 for each travel voucher processed manually and $2.22 for each travel voucher processed electronically—a difference of $31.78. The memorandum noted that for the 5-month period, October 1, 2004, to February 28, 2005, the Army spent about $5.6 million more to process 177,000 travel vouchers manually rather than processing the vouchers electronically using DTS. Requirements management and system testing. Our January 2006 and September 2006 reports noted problems with DTS’s ability to properly display flight information and traced those problems to inadequate requirements management and system testing. As of February 2006, we found that similar problems continued to exist. Once again, these problems could be traced to ineffective requirements management and system testing processes. 
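The processing-fee arithmetic in the Army memorandum cited above can be illustrated with a short sketch; the per-voucher fees and the voucher count are the figures reported in the memorandum, and the function name is ours.

```python
# Per-voucher DFAS processing fees cited in the April 13, 2005, memorandum
# from the Assistant Secretary of the Army (Financial Management and Comptroller).
MANUAL_FEE = 34.00       # dollars per voucher processed manually
ELECTRONIC_FEE = 2.22    # dollars per voucher processed electronically through DTS

def excess_manual_cost(voucher_count: int) -> float:
    """Extra cost incurred by processing vouchers manually rather than through DTS."""
    return voucher_count * (MANUAL_FEE - ELECTRONIC_FEE)

# 177,000 vouchers processed manually, October 1, 2004, through February 28, 2005.
extra = excess_manual_cost(177_000)
print(f"Per-voucher difference: ${MANUAL_FEE - ELECTRONIC_FEE:.2f}")  # $31.78
print(f"Five-month excess cost: ${extra:,.0f}")                       # about $5.6 million
```

The computed total matches the memorandum's figure of about $5.6 million in avoidable processing costs over the 5-month period.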
Properly defined requirements are a key element in systems that meet their cost, schedule, and performance goals, since the requirements define (1) the functionality that is expected to be provided by the system and (2) the quantitative measures by which to determine through testing whether that functionality is operating as expected. Requirements represent the blueprint that system developers and program managers use to design, develop, and acquire a system, and the foundation on which the system should be developed and implemented. As we have noted in previous reports, because requirements provide the foundation for system testing, they must be complete, clear, and well documented in order to design and implement an effective testing program. Absent this, an organization is taking a significant risk that its testing efforts will not detect significant defects until after the system is placed into production. We reported in September 2006 that our analysis of selected flight information disclosed that DOD did not have reasonable assurance that DTS displayed flights in accordance with its stated requirements. We analyzed 15 domestic GSA city pairs, which should have translated into 246 GSA city pair flights for the departure times selected. However, we identified 87 flights that did not appear on one or more of the required listings based on the DTS requirements. After briefing PMO-DTS officials on the results of our analysis in February 2006, the PMO-DTS employed the services of a contractor to review DTS to determine the specific cause of the problems and recommend solutions. In a March 2006 briefing, the PMO-DTS acknowledged the existence of the problems and identified two primary causes. First, part of the problem was attributed to the methodology used by DTS to obtain flights from the Global Distribution System (GDS).
The PMO-DTS stated that DTS was programmed to obtain a “limited” amount of data from GDS in order to reduce the costs associated with accessing GDS. This helps to explain why flight queries we reviewed did not produce the expected results. To resolve this particular problem, the PMO-DTS proposed increasing the amount of data obtained from GDS. Second, the PMO-DTS acknowledged that the system testing performed by the contractor responsible for developing and operating DTS was inadequate, and therefore, there was no assurance that DTS would provide the data in conformance with the stated requirements. This weakness was not new, but rather reconfirmed the concerns discussed in our September 2005 testimony and January 2006 report related to the testing of DTS. Validity of economic analysis. As noted in our September 2006 report, our analysis of the September 2003 economic analysis found that two key assumptions used to estimate cost savings were not based on reliable information. Consequently, the economic analysis did not serve to help ensure that the funds invested in DTS were used in an efficient and effective manner. Two primary areas—personnel savings of $24.2 million and reduced commercial travel office fees of $31 million—represented the majority of the over $56 million of estimated annual net savings DTS was expected to realize. However, the estimates used to generate these savings were unreliable. The personnel savings of $24.2 million was attributable to the Air Force and Navy. The assumption behind the personnel savings computation was that there would be less manual intervention in the processing of travel vouchers for payment, and therefore, fewer staff would be needed. However, based on our discussions with Air Force and Navy DTS program officials, it was questionable how the estimated savings would be achieved. 
Air Force and Navy DTS program officials stated that they did not anticipate a reduction in the number of personnel with the full implementation of DTS, but rather shifting staff to other functions. According to DOD officials responsible for reviewing economic analyses, while shifting personnel to other functions was considered a benefit, it should have been considered an intangible benefit rather than tangible dollar savings since the shifting of personnel did not result in a reduction of DOD expenditures. Also, as part of the Navy’s overall evaluation of the economic analysis, program officials stated that “the Navy has not identified, and conceivably will not recommend, any personnel billets for reduction.” Finally, the Naval Cost Analysis Division’s October 2003 report on the economic analysis noted that it could not validate approximately 40 percent of the Navy’s total costs, including personnel costs, in the DTS life-cycle cost estimates because credible supporting documentation was lacking. The report also noted that the PMO-DTS used unsound methodologies in preparing the DTS economic analysis. We also reported in 2006 that according to DOD’s September 2003 economic analysis, it expected to realize annual net savings of $31 million through reduced fees paid to the commercial travel offices because the successful implementation of DTS would enable the majority of airline tickets to be acquired with either no or minimal intervention by the commercial travel offices. These are commonly referred to as “no touch” transactions. However, DOD did not have a sufficient basis to estimate the number of transactions that would be considered “no touch” since the (1) estimated percentage of transactions that can be processed using “no touch” was not supported and (2) analysis did not properly consider the effects of components that use management fees, rather than transaction fees, to compensate the commercial travel offices for services provided. 
The weaknesses we identified with the estimating process raised serious questions as to whether DOD would realize substantial portions of the estimated annual net savings of $31 million. DOD arrived at the $31 million of annual savings in commercial travel office fees by estimating that 70 percent of all DTS airline tickets would be considered “no touch” and then multiplying these tickets by the savings per ticket in commercial travel office fees. However, we found that the 70 percent assumption was not well supported. We requested, but the PMO-DTS could not provide, an analysis of travel data supporting its assertion. Rather, the sole support provided by the PMO-DTS was an article in a travel industry trade publication. The article was not based on information related to DTS, but rather on the experience of one private-sector company. As noted in our January 2006 report, opportunities existed at that time to better achieve the vision of a travel system that reduces the administrative burden and cost while supporting DOD’s mission. Some of the suggested proposals are highlighted below. Automating approval of changes to authorized travel expenses. The business process used at the time by DTS designated the traveler’s supervisor as the authorizing official responsible for authorizing travel and approving the travel voucher and making sure the charges are appropriate after the travel is complete. Furthermore, should the actual expenses claimed on the travel voucher differ from the authorized estimate of expenses, the authorizing official was required to approve these deviations as well. For example, if the estimated costs associated with the travel authorization are $500 and the actual expenses are $495, then the authorizing official was required to approve the $5 difference. If the difference was caused by two different items, then each item required approval. 
Similarly, if the actual expenses are $505, then the authorizing official was required to specifically approve this $5 increase. This policy appeared to perpetuate one of the problems noted in the 1995 DOD report—compliance with rigid rules rather than focusing on the performance of the mission. One practice that could be used to reduce the administrative burden on the traveler and the authorizing official was to automatically make adjustments to the travel claim when the adjustments do not introduce any risk or when the cost of the internal control outweighs the risk. For example, processing a travel claim that was less than the amount authorized posed no more risk than processing a travel claim that equaled the authorized amount, since the key question was whether the claim was valid rather than whether the amount equaled the funding initially authorized and obligated in the financial management system. Using commercial databases to identify unused airline tickets. We have previously reported that DOD had not recovered millions of dollars in unused airline tickets. One action that DOD was taking to address the problem was requiring the commercial travel offices to prepare reports on unused airline tickets. While this action was a positive step forward, it required (1) the commercial travel offices to have an effective system for performing this function and (2) DOD to have an effective program for monitoring compliance. At the time, we suggested that a third-party service, commonly referred to as the Airlines Reporting Corporation, might provide DOD with the necessary information to collect unused airline tickets in an automated manner. If the information from the Airlines Reporting Corporation was used, DOD would not have to rely on the reports prepared by the commercial travel offices and would have been able to avoid the costs associated with preparing the unused airline ticket reports.
According to DOD officials, at the time of our review, this requirement had not yet been implemented in all the existing commercial travel office contracts, and therefore, the total costs of preparing the unused airline ticket reports were unknown. Utilizing restricted airfares where cost-effective. DOD’s business rules and the design of DTS provided that only unrestricted airfares should be displayed. However, adopting a “one size fits all” policy did not provide an incentive to the traveler to make the best decision for the government, which was one of the stated changes documented in the 1995 DOD report. Other airfares, generally referred to as restricted airfares, may be less expensive than a given GSA city pair fare and other unrestricted airfares. However, as the name implies, these fares come with restrictions. For example, within the GSA city pair fare program, changes can be made to a flight numerous times without any additional cost to the government. Generally, with restricted airfares there was a fee for changing flights. The Federal Travel Regulation and DOD’s Joint Travel Regulations allow travelers to use restricted airfares, including on airlines not under the GSA city pair contract, if the restricted airfare costs less to the government. Adopting a standard policy of using only one type of airfare—unrestricted or restricted—is not the most appropriate approach for DOD to follow. A better approach would have been to establish guidance on when unrestricted and restricted airfares should be used and then monitor how that policy was implemented. Although development of the guidance is an important first step, we previously stated that management also needs to determine (1) whether the policy was being followed and (2) what changes are needed to make it more effective. In our two reports, we made 14 recommendations to help improve the department’s management and oversight of DTS and streamline DOD’s administrative travel processes.
In commenting on our reports, the department generally agreed with the recommendations and described its efforts to address them. The implementation of our recommendations will be an important factor in DTS’s achieving its intended goals. We will be following up to determine whether and if so, to what extent, DOD has taken action to address our recommendations in accordance with our standard audit follow-up policies and procedures. We would be pleased to brief the Subcommittee on the status of the department’s actions once we have completed our follow-up efforts. Mr. Chairman, this concludes my prepared statement. We would be happy to answer any questions that you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact McCoy Williams at (202) 512-2600 or williamsm1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the above contacts, the following individuals made key contributions to this testimony: Darby Smith, Assistant Director; Evelyn Logue, Assistant Director; J. Christopher Martin, Senior-Level Technologist; F. Abe Dymond, Assistant General Counsel; Beatrice Alff; Francine DelVecchio; and Tory Wudtke. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 1995, the Department of Defense (DOD) began an effort to implement a standard departmentwide travel system, the Defense Travel System (DTS). This testimony is based on previously issued GAO reports and testimonies that highlighted challenges confronted by DOD in the implementation of DTS. More specifically, today's testimony focuses on prior GAO reporting concerning (1) the lack of quantitative metrics to measure the extent to which DTS is actually being used, (2) weaknesses with DTS's requirements management and system testing, and (3) the unreliability of two key assumptions underlying the estimated cost savings in the September 2003 DTS economic analysis. Today's testimony also highlights some actions that DOD could explore to help streamline its administrative travel processes, such as using a commercial database to identify unused airline tickets. Overhauling the department's antiquated travel management practices and systems has been a daunting challenge for DOD. In several prior reports and testimonies, GAO identified key implementation issues regarding DOD's ability to make DTS the standard travel system for the department. Specifically, GAO reported that DTS was not being used to the fullest extent possible and that DOD lacked comprehensive data to effectively monitor its utilization. At the time of GAO's 2006 review, DOD's utilization data were based on a model that was developed in calendar year 2003. However, the model had not been completely updated to reflect actual DTS usage at that time. The lack of up-to-date utilization data hindered management's ability to monitor progress toward the DOD vision of DTS as the standard travel system. Additionally, the continued use of the department's legacy travel systems resulted in the underutilization of DTS and adversely affected the expected savings that DTS could achieve. Furthermore, GAO previously reported weaknesses in DTS's requirements management and system testing practices.
GAO found that DTS's requirements were still inadequate. GAO noted that until DOD improves DTS's requirements management practices, the department will not have reasonable assurance that DTS can provide the intended functionality. Additionally, GAO's 2006 review of the September 2003 DTS economic analysis found that the two key assumptions used to estimate annual net savings were not based on reliable information. Two cost components represented the majority of the over $56 million in estimated net savings--personnel savings and reduced commercial travel office fees. GAO's analysis found that $24.2 million in personnel savings related to the Air Force and the Navy were not supported. Air Force and Navy DTS program officials stated that they did not anticipate a reduction in the number of personnel, but rather the shifting of staff from the travel function to other functions. The Naval Cost Analysis Division stated that the Navy would not realize any tangible personnel cost savings from the implementation of DTS. In regard to the commercial travel office fees, GAO's 2006 reporting disclosed that the economic analysis assumed that 70 percent of all DTS airline tickets would require either no intervention or minimal intervention from the commercial travel offices, resulting in an estimated annual net savings of $31 million. However, the support provided by the DTS program office was an article in a travel industry trade publication. The article was not based on information related to DTS, but rather on the experience of one private-sector company. In addition, GAO identified concepts that the department can adopt to streamline its travel management practices.
According to a 1995 World Health Organization (WHO) report, the three major threats to the survival of children under age 5 in developing countries are diarrheal dehydration, acute respiratory infections (e.g., pneumonia), and vaccine-preventable diseases. WHO’s 1995 report stated that 13.3 million children under age 5 died in developing countries in 1985 and that 12.2 million children under age 5 died in 1993. Figure 1 shows the causes of death for children under age 5 in developing countries, and figure 2 shows 1994 mortality rates for children under age 5 worldwide. Since 1954, USAID and its predecessor agencies have been involved in activities to improve child survival in the developing countries. Since the passage of Public Law 480 in 1954, U.S. food assistance has been provided to children and pregnant and lactating women. In the 1960s, USAID began building health clinics and funding research on treatments for diarrheal disease and the prevention of malaria. One of the specific objectives of the Foreign Assistance Act of 1961, the primary legislation governing U.S. foreign aid, was to reduce infant mortality. In the 1970s, USAID began to focus on providing appropriate health interventions for common health problems in communities with the greatest needs. Activities related to child health included field studies on oral rehydration and vitamin A therapy and malaria research. “In carrying out the purposes of this subsection, the President shall promote, encourage, and undertake activities designed to deal directly with the special health needs of children and mothers. 
Such activities should utilize simple, available technologies which can significantly reduce childhood mortality, such as improved and expanded immunization programs, oral rehydration to combat diarrhoeal diseases, and education programs aimed at improving nutrition and sanitation and at promoting child spacing.” Because the statutory language is broad and emphasizes but does not limit USAID to the specified interventions, USAID has considerable latitude in developing child survival activities appropriate to the community being served. In February 1985, in response to the authorizing legislation, some of USAID’s ongoing child health efforts were consolidated into a child survival program. USAID provided mission-level child survival assistance to 31 countries in 1985, but it placed special emphasis on 22 countries that had especially high mortality rates. For each of these 22 countries, USAID developed a detailed child survival strategy, in cooperation with the host government, to deal with the country’s specific needs and circumstances. USAID’s policy was to sustain bilateral child survival funding in these countries for at least 3 to 5 years and provide technical support and training on a priority basis. Over the years, the congressional appropriations committees have continued to emphasize the importance of the basic interventions mentioned in the authorizing statute, particularly immunizations and oral rehydration therapy. In some years, the committees have also directed USAID to support particular activities, including the promotion of breastfeeding, research and development of vaccines, and prevention of vitamin A and other micronutrient deficiencies through food fortification, tablets, and injections. USAID’s child survival program evolved in the 1990s so that it is no longer a separate program but is encompassed within USAID’s sustainable development strategy as a component of its population, nutrition, and health sector. (See app.
I for a more detailed description of USAID’s current child survival objectives and approach.) Between fiscal years 1985 and 1995, USAID reported that it obligated over $2.3 billion for the child survival program. Child survival projects and other activities attributed to child survival may be funded through USAID’s overseas missions directly or through its four regional bureaus or its central bureaus (see table 1). The number of countries receiving mission-level child survival assistance in a single fiscal year increased from 31 in 1985 to about 43 in 1995. During this 11-year period, USAID provided mission-level assistance on a continuing basis for some countries, such as Egypt, whereas other countries received funding in only 1 year. A total of 83 developing countries received some mission-level child survival funding during this period. The amounts ranged from $9,000 for Oman to $137 million for Egypt. As shown in table 2, of the 10 countries that have received the most child survival assistance from USAID missions, 5 were in the Latin America and Caribbean region, 4 were in the Asia and Near East region, and 1 was in the Africa region. USAID provides funding to other organizations to implement health and population services. USAID guidance states that U.S. assistance must help build the capacity to develop and sustain host country political commitment to health and population programs, as well as enhance the ability of local organizations to define policies and design and manage their own programs. USAID’s policy is to involve both the public and private sectors and give special attention to building, supporting, and empowering nongovernmental organizations (NGO) wherever feasible. USAID-supported child survival activities involve U.S. and foreign not-for-profit NGOs, including private voluntary organizations (PVO); universities; for-profit contractors; multilateral organizations; and U.S. and foreign government agencies. Figure 3 shows that U.S. 
NGOs received about 45 percent of fiscal year 1994 child survival funding. At least 35 U.S. PVOs and 22 other U.S. NGOs participated in USAID’s child survival programs during that year as primary grantees. For-profit businesses ($35.1 million) and host country government agencies ($32.7 million) together accounted for another one-quarter of the funding. The remainder went to multilateral organizations, such as UNICEF; U.S. government agencies, including the Centers for Disease Control and Prevention; and indigenous NGOs. USAID generally uses the different types of organizations for different purposes or for implementing different types of activities. No single group or organization typically performs the full range of activities that the agency sponsors. For example, in all the countries we visited, PVOs were involved at the community level with direct delivery of some of the basic health interventions. In Guatemala, a for-profit contractor provided technical assistance for the computer hardware and software programs that USAID installed in the Ministry of Health to computerize its health data. Between 1985 and 1995, activities related to the three major causes of death among young children—acute respiratory infections, diarrheal diseases, and vaccine-preventable diseases—received about $972 million, or 41 percent of the child survival funds. Table 3 shows funding levels attributed to child survival by type of activity from 1985 to 1995.
USAID is unable to determine with any degree of precision how much funding is actually being used for child survival activities because (1) Congress has directed funding in varying ways; (2) USAID guidance allows considerable flexibility and variation in attributing child survival funds; (3) the amounts reported are based on estimated percentages of projected budgets, which sometimes are not adjusted at the end of the year to reflect any changes that may have occurred; and (4) the amounts reported are not directly based on specific project expenditures. USAID plans a new information management system that may improve the precision of the data for its child survival activities. From fiscal year 1985, when the child survival program officially began, through fiscal year 1995, appropriations statutes have mandated spending of at least $1.8 billion for child survival activities. From fiscal years 1985 to 1991, funds appropriated by Congress for child survival went into a separate functional account under USAID’s development assistance account. Additionally, for several years prior to fiscal year 1992, the appropriations laws not only earmarked money for child survival, but the appropriations committees’ reports also expressed the intention that other accounts within the development assistance account should provide substantially more money for child survival activities. Beginning in fiscal year 1992, the functional account was eliminated, and subsequent laws appropriating moneys to USAID contained an earmark for child survival activities that could be drawn from any USAID assistance account. Since 1991, Congress has substantially increased the level of funds designated for child survival through earmarks (from $100 million in direct appropriations in fiscal year 1991 to $250 million in fiscal year 1992). USAID issued guidance in 1992 and 1996 about the types of activities that were allowed to be attributed to child survival.
Additionally, the agency’s budget office issues annual instructions for reporting on project activities. These instructions name types of activities that may be attributed to child survival and give broad discretion to USAID officials to determine the percentage of funding that can be reported as child survival. However, the instructions do not provide specific indicators for determining attribution, such as the percentage of children in the population served by a water project. Moreover, some mission officials responsible for recording project activities told us that the guidance for making attributions was not clear to them. In our discussions with USAID officials, we found that the process of attributing funds to child survival activities was imprecise and that mistakes occurred. As a result, the percentage of funds designated as child survival varied widely for similar activities. For example, USAID used child survival funds for the construction of water systems in all four countries we visited. USAID guidance suggested 30 percent of the total budget of water and sewerage projects as an appropriate level to attribute to child survival, but child survival funds comprised from 3 to 100 percent of the funding for some of these projects. According to an official at the USAID mission in Egypt, the mission has a policy of attributing 3 percent of sewerage projects and 6 percent of water projects to child survival. In contrast, the Health Sector II project in Honduras attributed 70 percent of the $16.9 million water and sanitation component to child survival. According to a mission official, the justification for this level of attribution was that children under age 5 comprised approximately 70 percent of the deaths due to water-borne diseases in rural areas. Another activity funded by this project was the construction of area warehouses. About $72,000, which was 26 percent of the cost, was attributed to child survival.
The justification USAID provided for this attribution was that these warehouses, which were used to store medical supplies, have contributed to the decline in the infant mortality rate in Honduras. The funding amounts reported as child survival are based on estimated percentages of total project obligations for types of child survival activities carried out under individual projects. These estimates are made by project or budget officers and are supposed to be based on a knowledge of project plans and activities. However, mission officials told us that they generally did not change the activity assignments or percentages, even though changes in available funding or project plans may occur during the year. For example, $800,000 in child survival funding was attributed to a basic education project in Ethiopia in 1994. A mission official told us that the child survival activity did not actually take place, but the reports provided to us by USAID included child survival funding for this project. USAID reports on funds attributed to child survival and other activities are not based on expenditures. USAID stated that its activity reporting system was never intended to track expenditures for programs and that Congress was aware that reported funding represented estimates of obligations. However, according to USAID officials, a new information system is underway that will link budgets, obligations, and expenditures and enable the agency to track funds more accurately. USAID officials said that the new system would be able to link some child survival assistance with actual expenditures in cases in which a distinct child survival activity has been defined. However, in other cases, reported funding will continue to be based on the project manager’s estimate of the percentage of funding attributable to child survival. 
USAID began implementing the new system in July 1996 for all new commitments made at headquarters, and it plans to extend the system to the overseas missions by October 1996. USAID has made significant contributions, in collaboration with other donors, to reducing under-5 mortality rates. Among the 10 countries receiving the most USAID mission-level child survival assistance, all but one improved their under-5 mortality rate between 1980 and 1994. Five countries achieved the World Summit goal of 70 or fewer deaths per 1,000 live births. The number of deaths from the three major causes of under-5 mortality declined during this time, but the largest decrease was for vaccine-preventable diseases. USAID can claim some far-reaching accomplishments in immunizations. Between 1985 and 1994, 26 of the 59 countries that received some mission-level assistance specifically for immunization activities achieved USAID’s goal of 80-percent immunization rates. Through collaboration with the Pan American Health Organization (PAHO), UNICEF, Rotary International, other international organizations, and the individual countries, USAID helped to bring about the eradication of poliomyelitis in the Americas. USAID’s Children’s Vaccine Initiative project supports a revolving fund, called the Vaccine Independence Initiative, that is managed by PAHO and UNICEF. This fund, which received $3.8 million of child survival funding between 1992 and 1995, is used to help developing countries purchase vaccines. One of USAID’s most important accomplishments in diarrheal disease control occurred before 1985 with the discovery that oral rehydration salts could be used to treat the dehydration that occurs with diarrheal diseases and causes death. USAID has also had positive results in efforts to increase usage of oral rehydration therapy, although only four countries where USAID has provided mission-level child survival assistance have usage rates above 80 percent.
USAID’s recent diarrheal disease control efforts have been aimed at promoting sustainability by transferring technology to developing countries so that they can manufacture the salts. USAID has also contributed to research on the importance of vitamin A supplementation and efforts to incorporate vitamin A into local food supplies around the world. USAID’s Center for Development Information and Evaluation (CDIE) concluded in a 1993 report that USAID’s child survival activities had achieved many successes and made a significant contribution in expanding child survival services and reducing infant mortality in many countries. The CDIE report cited the importance of USAID’s role in vaccinations and stated that the agency had supported other major donors, such as UNICEF, through coordination and the provision of needed resources. Another evaluation conducted independently by RESULTS Educational Fund and the Bread for the World Institute concluded in a January 1995 report that USAID’s child survival activities had made an important contribution to reducing deaths among children under age 5 in countries receiving USAID assistance. In the four countries we visited, USAID’s contributions through child survival activities were evident. For example, in Mozambique, USAID supports PVOs that provide child survival services and other types of humanitarian and development assistance. We visited several sites where World Vision Relief and Development was implementing a child survival project. Among the activities we observed were vaccinations for children under age 3, monitoring of children’s growth, prenatal examinations, and the construction of latrines. In Bolivia, PROSALUD health clinics we visited offered general medical services; childbirth and pediatric care; immunizations; family planning; and dental, pharmacy, and laboratory services. PROSALUD is a Bolivian private, nonprofit organization initiated and operated with USAID child survival funds. 
Between 1991 and 1996, USAID provided the PROSALUD project with $6.5 million, of which $6.2 million, or 95 percent, was attributed to child survival. The 26 PROSALUD clinics and its hospital charge small user fees that enable the organization to partially self-finance its operations. We also visited Andean Rural Health Care, a U.S. PVO that provides community health care in Bolivia through clinics and volunteers. The volunteers are trained at the health centers on how to make home visits to (1) provide families with oral rehydration salts, (2) treat diarrheal diseases and acute respiratory infections, (3) promote vaccinations by health center staff, and (4) monitor the growth and health of family members (see fig. 4). In Guatemala, we visited a clinic operated by APROFAM, which is a private, nonprofit organization that provides family planning services as well as selected maternal-child health services, such as pre- and postnatal care, child growth monitoring, and oral rehydration therapy. Under the current USAID grant, APROFAM received about $2.5 million in child survival funding, representing 15 percent of its total USAID funds. We also visited a pharmaceutical plant in Guatemala where USAID provided equipment and technical assistance to manufacture packets of oral rehydration salts used in the treatment of diarrheal disease dehydration (see fig. 5). The packets are to be distributed through Ministry of Health facilities. This plant was a component of USAID’s $20 million child survival project started in 1985 to assist the Ministry of Health. In Egypt, we visited urban and rural health clinics that administered vaccinations and oral rehydration therapy and had laboratories that were equipped to perform medical tests. According to USAID officials, these health clinics also provided treatment for acute respiratory infections and family planning activities. 
For fiscal years 1993-95, USAID reportedly spent about $478.9 million, or 58 percent of child survival funding, on interventions that directly address the causes of death of children under the age of 5—immunizations, diarrheal disease control, nutrition, and acute respiratory infections. However, the amounts used for immunizations and diarrheal disease control were less in 1994 and 1995 than they had been in 1993. During the same period, USAID spent about $341.5 million on such areas as health systems development, health care financing, water quality, and environmental health (a new area). In Mozambique, USAID attributed child survival funds for the construction of a water supply system in Chimoio by the Adventist Development and Relief Agency to serve as many as 25,000 residents (see fig. 6). About $2.5 million, or 40 percent, of the project’s almost $6.2 million cost was attributed to child survival. USAID described this project as an exception where such infrastructure activities would be appropriately attributed to child survival. Since 1992, the USAID mission in Egypt has designated as child survival about $6.5 million for water and wastewater infrastructure development. Egypt’s sewerage projects include the design, construction, and operation of wastewater treatment plants and systems, and water projects include the construction of water treatment plants, which provide potable water to urban areas. The 1993 USAID/CDIE report recommended that water infrastructure projects not be funded as child survival because child survival resources were not considered adequate to construct enough water systems to have a measurable impact on national health indicators. The report also stated that the results of other child survival interventions appear to be greater than the results obtained from investing in water and sanitation and that oral rehydration therapy and interventions related to acute respiratory infections should be given higher priority.
In Mozambique, reconstruction of a railroad bridge crossing the Zambezi River between Sena and Mutarara was considered child survival (see fig. 7). The goal of this project was to rehabilitate roads so that land movement of food and other relief assistance, the return of displaced persons and refugees, and drought recovery activities could occur. The railroad bridge was modified to accommodate vehicles and pedestrian traffic. Of the project’s $10.8 million budget, $1.9 million was attributed to child survival as nutrition in 1993 and 1994. Although the railroad bridge in Mozambique was considered a nutrition intervention, other infrastructure projects that have used child survival funding were classified as water quality/health, health systems development, and health care financing. Between 1993 and 1995, USAID attributed about $38.6 million in child survival funds to water quality/health, $113.5 million to health systems development, and $24.6 million to health care financing. Examples of activities related to health systems development include the construction of warehouses for government medical supplies in Honduras. An example of a health care financing activity in Bolivia is PROSALUD, which USAID established to be a self-financing health care provider. USAID attributed $30 million of the international disaster assistance funds to child survival in fiscal year 1995. The projects that USAID’s budget office counted as child survival included activities that benefited children, such as health and winterization activities in the former Yugoslavia, a water drilling program in northern Iraq, an emergency medical and nutrition project for displaced persons in Sudan, the purchase of four water purification/chlorination systems in Djibouti, and community health care in two regions of Somalia. 
Additionally, the conference report accompanying the fiscal year 1996 foreign operations appropriations act authorized USAID to attribute $30 million of disaster assistance funding to child survival. USAID’s guidance states that child survival assistance will be provided to countries with mortality rates for children under age 5 at or above 150 per 1,000 live births. However, USAID does not provide assistance to some of the 30 countries with the most serious under-5 mortality problems. For example, many countries in sub-Saharan Africa, which have the most serious child survival problems, do not receive USAID child survival assistance for mission-level projects. According to USAID, the agency does not have a mission in these countries, had closed out assistance, or was in the process of closing out assistance because of budgetary or legal reasons or because sustainable development programs were not considered feasible. (See app. II for details regarding under-5 mortality rates and amounts of USAID mission-level assistance for developing countries.) On the other hand, USAID attributes mission-level child survival funds to activities in 17 countries that have a mortality rate of 70 or fewer deaths per 1,000 live births. In fiscal year 1995, USAID used about $89.5 million of child survival funding for activities in these 17 countries. Among these countries were several in the former Soviet Union, including Georgia, which had an under-5 mortality rate of 27 per 1,000 live births. By contrast, in fiscal year 1995, USAID used $53.4 million of child survival funding in 15 of the 30 countries that had the most serious problems with under-5 mortality—rates above 150 per 1,000 live births. In 1995, Egypt continued to have the largest share of mission-level assistance attributed to child survival ($27 million), as it has over the last decade. 
UNICEF reported Egypt’s under-5 mortality rate in 1994 as 52; however, USAID indicated that its most recent data showed that the rate was 80.6. In commenting on a draft of this report, USAID indicated that it focused its child survival efforts in countries with high rates of under-5 mortality and other factors that indicated a great need for assistance. USAID stated that (1) national mortality rates are averages that often mask pockets of high child mortality, (2) the achievement of a target mortality rate is not a reason to stop support of efforts because gains need to be sustainable, and (3) child survival programs are not in some of the most needy countries because of legal, budgetary, and sustainability reasons. USAID issued new guidance in April 1996 that indicates that infrastructure is not generally considered to be an appropriate use of child survival funds. USAID stated that the infrastructure cases we cited, all of which began before April 1996, were isolated examples. USAID further stated that the bridge rehabilitation and water works construction projects in Mozambique were needed to reduce child mortality after the civil turmoil in that country. USAID also commented that its current financial reporting system was never intended to be used to track any program area on an expenditure basis. USAID indicated that a new information management system that is being implemented has been designed to track funding for each activity by linking budgets, obligations, procurements, and expenditures. After reviewing USAID’s comments, we have deleted the recommendations that we presented in our draft report. In its comments and subsequent discussions, USAID provided us with sufficiently detailed information to adequately explain the reasons why some countries with very severe child mortality problems do not receive direct U.S. aid and others with lower mortality rates do. 
USAID’s new operating procedures have the potential to address, for the most part, how its child survival activities will be linked to USAID’s objectives and how its project activities will be measured. Our concern that USAID’s new information management system provide accurate obligation and expenditure data is being addressed by USAID. We are still concerned, however, about the clarity of the guidance provided to USAID’s activity managers for determining the percentage of funding and expenditures attributable to child survival when a broader activity contributes to USAID’s child survival objectives. We are, however, making no specific recommendations in these areas. USAID also provided clarifications and corrections to the draft, and we have incorporated these changes where appropriate. USAID’s comments are in appendix III. To understand the extent, nature, and progress of USAID’s child survival activities, we reviewed the authorizing and appropriations legislation for 1985-95 and the accompanying committee reports and selected USAID project documents, including planning and program implementation documents, internal and external project evaluations, funding reports, health activity reports, and project files. We also held extensive discussions with officials from USAID, WHO, PAHO, UNICEF, USAID contractors, PVOs, and host governments and program beneficiaries. We visited USAID missions in Bolivia, Egypt, Guatemala, and Mozambique to directly observe the nature of USAID’s child survival activities being implemented in the field. We selected these countries because they received significant child survival funding, had various types of child survival projects, and provided regional differences. During our fieldwork, we analyzed data for most of the USAID missions’ ongoing projects and visited 63 project sites. In addition to the fieldwork, we also talked with USAID project officers in two other countries. 
We analyzed USAID strategic objectives, program goals, and funding documentation to determine the linkage between funds attributed to child survival and USAID’s child survival objectives. We analyzed the most recent data on USAID funding attributed to child survival for 1985-95, which we obtained from the contractor that operates USAID’s Center for International Health Information. At the time of our review, obligation data for fiscal year 1995 were not fully validated; therefore, some of the fiscal year 1995 obligation data are subject to change. According to USAID officials, the 1995 data had to be recoded, and the process was not completed by August 1996. We conducted our review between May 1995 and August 1996 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no distribution of this report until 30 days after the date of this letter. We will then send copies of this report to the Administrator, USAID; the Director, Office of Management and Budget; the Secretary of State; and other interested congressional committees. We will also make copies available to others on request. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. In February 1985, in response to the legislation authorizing child survival activities, the U.S. Agency for International Development (USAID) established the child survival program to consolidate some of the agency’s ongoing efforts related to reducing deaths among children in developing countries. Although USAID provided mission-level child survival assistance to 31 countries in 1985, it placed special emphasis on 22 countries that had especially high mortality rates. The child survival program for these 22 countries was originally guided by USAID’s Child Survival Task Force. 
This task force helped to develop a detailed child survival strategy for each country, in cooperation with the host government, to deal with the country’s specific needs and circumstances. USAID’s policy was to sustain mission-level child survival funding in these countries for at least 3 to 5 years and provide technical support and training on a priority basis. Special attention was also to be given to program monitoring and evaluation and coordination with private voluntary organizations (PVO), international organizations, and other U.S. agencies. From 1985 to 1991, child survival appropriations went into a functional account for child survival set up under an overall development assistance account. Beginning in fiscal year 1992, Congress designated a specific amount for child survival, which could be drawn from any USAID appropriation. In the 1990s, child survival was incorporated into USAID’s broad strategy for development assistance. According to the agency’s 1995 Guidelines for Strategic Plans, USAID’s current emphasis is on sustainable and participatory development, partnerships, and the use of integrated approaches. The agency’s five goals are to encourage broad-based economic growth, build democracy, stabilize world population and protect human health, protect the environment, and provide humanitarian assistance. USAID’s population, health, and nutrition sector has priority objectives in four areas: family planning, child survival, maternal health, and reducing sexually transmitted diseases and human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS). Agency guidelines indicate that the core of the sector is family planning but that balanced strategies are encouraged. USAID’s guidance on child survival states that activities are to focus on the principal causes of death and severe lifelong disabilities, and programmatic emphasis should be on children under the age of 3. 
Further, the guidance states that child survival service delivery is to be focused on the community; the primary health care system; and, to a limited extent, the first-level hospitals. Emphasis is to be on enabling caretakers to take effective action on behalf of their children’s well-being and ensuring gender equity in children’s access to preventive and curative health. Although USAID considers health and population services to be important, the agency does not provide them directly; instead, it tries to improve the capacity, infrastructure, systems, and policies that support these services in a sustainable way. In its 1994 Strategies for Sustainable Development, USAID stated that the agency’s population and health programs would concentrate on countries that contribute the most to global population and health problems and have population and health conditions that impede sustainable development. Agency guidance states that any of the following key factors indicate the need to consider developing strategic objectives that address family planning, child survival, maternal health, and reduction of sexually transmitted diseases and HIV/AIDS: annual total gross domestic product growth less than 2 percent higher than annual population growth over the past 10 years, unmet need for contraception at or above 25 percent of married women of childbearing age, total fertility rate above 3.5 children per woman, mortality rate for children under age 5 at or above 150 per 1,000 live births, stunting in at least 25 percent of children under age 5, maternal mortality rate at or above 200 deaths per 100,000 live births, and prevalence of sexually transmitted diseases at or above 10 percent among women aged 15 to 30. 
Because USAID has identified global population growth as an issue of strategic priority agencywide, guidance states that strategies directed at family planning, child survival, maternal health, and reduction of sexually transmitted diseases and HIV/AIDS—all of which must be considered together—will receive particular attention in those countries where the unmet need for contraception is the greatest. USAID stated that other concerns would also include under-5 mortality, maternal mortality, prevalence of sexually transmitted diseases, and stunting. USAID’s long-term goal is to contribute to a cooperative global effort to stabilize world population growth and protect human health. Its anticipated near-term results over a 10-year period are (1) significant improvement in women’s health, (2) a reduction in child mortality by one-third, (3) a reduction of maternal mortality rates by one-half, and (4) a decrease in the rate of new HIV infections. USAID issued guidance in 1992 and 1996 about the types of activities that are allowable uses of child survival funds. The guidance named specific types of activities that may be considered to fall under the child survival program and gave broad discretion to USAID officials to determine the proportion of funding that could be reported as child survival. The annual instruction manual for coding activities and special interests further specifies how activities are to be reported. According to agency guidance and instructions, some activities are automatically funded in their entirety as child survival. These activities are diarrheal disease control and related research, immunization and child-related vaccine research, child spacing/high-risk births, acute respiratory infection, vitamin A, breastfeeding promotion, growth monitoring and weaning foods, micronutrients, and orphans and displaced children. Other activities can be partially funded as child survival. 
USAID’s guidance stated that project managers could decide the percentage for the following activities that could be reported as child survival, even though suggested percentages were provided for some: health systems development; nutrition management, planning, and policy; other nutrition activities; health care financing; environmental health; vector control; water and sanitation; women’s health; and malaria research and control.
[Table: 1994 under-5 population (in millions), by country]
Table notes:
1 USAID has a presence in the country and a child survival program.
2 USAID has a presence in the country, but no activities were attributed to child survival in fiscal year 1995.
3 USAID had no presence in the country and supported no mission-level programs, as of August 1996, although some funding may be provided through regional or other mechanisms.
4 USAID was legally restricted from operating in these countries as of 1996.
The following is GAO’s comment on USAID’s letter dated July 23, 1996.
1. We have revised this report since the time we provided it to USAID for comment. As a result, there are some instances in which the information discussed in USAID’s letter is no longer included in our report.
Richard Seldin
Pursuant to a congressional request, GAO reviewed the U.S. Agency for International Development's (AID) child survival activities and accomplishments, focusing on how child survival funds are being used to support AID objectives. GAO found that: (1) since 1985, AID has classified obligations totaling over $2.3 billion for activities in at least 83 countries as child survival; however, due to the way Congress directs funding to child survival, particularly since 1992, and AID's approach to tracking and accounting for such funds, it is not possible to determine precisely how much is actually being spent on child survival activities; (2) between 1985 and 1995, AID reported that it spent about $1.6 billion, or 67 percent of the child survival funds, for four types of activities: immunizations, diarrheal disease control, nutrition, and health systems development; (3) AID also reported that about 41 percent of the total amount identified as child survival has been used to address the three major threats to children under age 5 in the developing countries: diarrheal dehydration, acute respiratory infections, and vaccine-preventable diseases; (4) during GAO's field visits, it also noted that part of the cost of rehabilitating a railroad bridge and constructing a water tower in Mozambique and carrying out urban sewerage projects in Egypt were identified as child survival expenditures; (5) AID said the projects in Mozambique were critical for reducing child mortality because they supported access to water, food, and health services; (6) AID and other donors have made important contributions toward improving child mortality rates in many countries; (7) in 9 of the 10 countries receiving the most AID mission-level child survival assistance since 1985, mortality rates for children age 5 and under have dropped; (8) in addition, 5 of these 10 countries achieved mortality rates by 1994 of 70 or fewer deaths per 1,000 live births, a goal set for the year 2000 at the World Summit 
for Children; (9) both AID and independent evaluations have pointed out successes, such as collaboration with other donors to immunize children and promote oral rehydration therapy in the treatment of diarrheal disease; (10) in fiscal year 1995, AID's child survival funding was used in 17 countries that had an under-5 mortality rate of 70 or fewer deaths per 1,000 live births; (11) AID mission-level funding for child survival in these countries was $89.5 million, or 31 percent of the total child survival funding obligated in that year; (12) on the other hand, many countries that were far from achieving the goal did not receive assistance for child survival; and (13) according to AID, most of these countries did not receive assistance because AID did not have a program in the country, had closed out assistance, or was in the process of closing out assistance due to budgetary or legal reasons or because sustainable development programs were not considered feasible.
This section describes (1) Y-12’s role in NNSA’s Nuclear Security Enterprise; (2) NNSA policy for setting program requirements; (3) best practices for program cost and schedule estimating; and (4) best practices for technology readiness. NNSA is responsible for managing national nuclear security missions: ensuring a safe, secure, and reliable nuclear deterrent; supplying nuclear fuel to the Navy; and supporting the nation’s nuclear nonproliferation efforts. NNSA directs these missions but relies on management and operating contractors to carry them out and manage the day-to-day operations at each of eight sites that comprise the agency’s nuclear security enterprise. These sites include laboratories, production plants, and a test site. Of NNSA’s eight sites, the Y-12 National Security Complex in Tennessee is the primary site with enriched uranium processing capabilities. Y-12’s primary mission is processing and storing uranium, processing nuclear fuel for the U.S. Navy, and developing technologies associated with those activities, including technologies for producing uranium-related components for nuclear warheads and bombs. Construction of the 811-acre Y-12 site began in 1943 as part of the World War II-era Manhattan Project. Y-12’s enriched uranium processing and storage capability is primarily housed in the following buildings: Building 9212: This building was constructed in 1945, at the end of World War II, and includes a number of support and storage facilities related to uranium purification and casting. According to a 2016 report from the DOE Office of Inspector General, all of the various support and storage facilities of Building 9212 contain radioactive and chemical materials in sufficient quantities that an unmitigated release would result in significant consequences. These facilities do not meet current safety requirements for such facilities in that they cannot withstand a seismic event, high wind event, or aircraft crash. 
The shutdown of Building 9212 operations that have the highest nuclear safety risk at Y-12 is a key NNSA uranium program goal. Because of these risks, according to NNSA officials, NNSA has substantially reduced the risks from high-hazard materials, such as enriched uranium in organic and aqueous solutions, with a focus on materials located in Building 9212. As such, according to these officials, the remaining material at risk in Building 9212 has been reduced to a level significantly below the facility’s administrative limit, and NNSA is implementing a four-phase exit strategy to systematically phase out mission dependency on Building 9212. The exit strategy includes actions necessary to remove material hold-up, complete all process relocations, transition personnel to the UPF, and complete post-operations cleanout of the facility, among other things, according to NNSA officials. Building 9215: This building was constructed in the 1950s and consists of three main structures. Specific activities in Building 9215 include fabrication activities, such as metal forming and machining operations for highly enriched uranium, low-enriched uranium, and depleted uranium. NNSA and others, such as the Defense Nuclear Facilities Safety Board, have raised concerns about the future reliability of the building, particularly as the amount of deferred maintenance in Building 9215 has steadily increased over the past several years. According to NNSA officials, NNSA’s contractor has hosted a series of technical evaluations that identified and prioritized needed infrastructure investments over the next 15 years, including within Building 9215, that are intended to ensure facility reliability through the 2040s. NNSA is reviewing these initial proposed investments. Building 9204-2E: This building, constructed in the late 1960s, is a three-story, reinforced concrete frame structure. 
Operations in this building include the assembly and disassembly of enriched uranium components with other materials. Also, according to NNSA officials, radiography capabilities have been successfully relocated out of Building 9212 and installed in Building 9204-2E. The design used for this facility predates modern nuclear safety codes. Building 9720-82 (also called the Highly Enriched Uranium Materials Facility): This building became operational in January 2010. Built to current safety standards, the facility provides long-term storage of enriched uranium materials and accepts the transfer of some legacy enriched uranium from older facilities. According to NNSA officials, as part of the uranium program, NNSA transferred 12.3 metric tons of enriched uranium to this facility in fiscal year 2015 and 9.8 metric tons in fiscal year 2016, and it anticipates transferring 6 metric tons in fiscal year 2017. According to NNSA documents, Y-12’s enriched uranium operations have key shortcomings, including (1) an inefficient workflow, (2) continually rising operations and maintenance costs due to facility age, and (3) hazardous processes that could expose workers to radiological contamination. To address these shortcomings, NNSA developed plans to replace aging infrastructure at Y-12 and relocate key processing equipment without jeopardizing uranium production operations. The first solution, proposed in 2004, envisioned relocating Y-12’s main uranium processing equipment into a new UPF. NNSA planned to construct this single, consolidated facility that would be less than half the size of existing facilities; reduce costs by using modern processing equipment; and incorporate features to increase worker protection and environmental health and safety. In 2007, NNSA estimated the UPF would cost from $1.4 billion to $3.5 billion to design and construct. In June 2012, the Deputy Secretary of Energy approved an updated cost estimate range for the UPF of $4.2 billion to $6.5 billion. 
However, by August 2012, the UPF contractor concluded that the uranium processing and other equipment would not fit into the UPF as designed. In 2014, because of the high cost and schedule concerns of a solution focused solely on constructing new buildings, NNSA prepared a high-level strategic plan for its uranium program that is now focused on ceasing operations in Building 9212 through a combination of new construction, infrastructure investments in existing facilities, upgrades to and relocation of select processing technologies, and improved inventory management. This strategy includes replacing certain 9212 capabilities, with continued operation of 9215 and 9204-2E, and removing a considerable amount of the scope of work that had been included in the original UPF plan (as the functions performed in Buildings 9215 and 9204-2E are no longer included within the UPF project). Figure 1 below depicts the planned transfer of uranium processing capabilities out of Building 9212 and into a new UPF and existing facilities by 2025 under the new approach. Under the new approach, the UPF is to provide less floor space, compared to the original UPF design, for casting, oxide production, and salvage and accountability of enriched uranium. NNSA has stated that this newly designed UPF is to be built by 2025 for no more than $6.5 billion through a series of seven subprojects. NNSA is required to manage construction of the new UPF in accordance with DOE Order 413.3B, which requires the project to go through five management reviews and approvals, called “critical decisions” (CD), as the project moves forward from planning and design to construction and operation. The CDs are as follows:
CD 0: Approve mission need.
CD 1: Approve alternative selection and preliminary cost estimate.
CD 2: Approve the project’s formal scope of work, cost estimate, and schedule baselines.
CD 3: Approve start of construction.
CD 4: Approve start of operations or project completion. 
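The gated sequence of critical decisions described above can be modeled as a simple ordered state machine; this is an illustrative sketch, and the enum and function names are ours, not DOE's.

```python
# Minimal sketch of the DOE Order 413.3B critical-decision sequence:
# each CD may be approved only after the one before it.
from enum import IntEnum

class CD(IntEnum):
    CD0_MISSION_NEED = 0
    CD1_ALTERNATIVE_SELECTION = 1
    CD2_BASELINE_APPROVAL = 2
    CD3_START_CONSTRUCTION = 3
    CD4_START_OPERATIONS = 4

def advance(current: CD, requested: CD) -> CD:
    """Approve the next critical decision only if it immediately follows."""
    if requested != current + 1:
        raise ValueError(
            f"CD-{int(requested)} cannot be approved from CD-{int(current)}"
        )
    return requested

stage = CD.CD0_MISSION_NEED
stage = advance(stage, CD.CD1_ALTERNATIVE_SELECTION)
stage = advance(stage, CD.CD2_BASELINE_APPROVAL)
print(stage.name)  # CD2_BASELINE_APPROVAL
```

Note that the report later describes NNSA approving CD 2 and CD 3 concurrently for some subprojects; the ordering constraint still holds, with the two gates passed at the same time.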
In March 2014, NNSA updated its Business Operating Procedure, clarifying its policy for developing and maintaining program requirements on construction programs and projects executed by the agency. According to this procedure, this program requirements policy is applicable to most projects constructed for NNSA or managed by NNSA personnel and that have an estimated total project cost of $10 million or greater, or the cost threshold determined appropriate by the Deputy Secretary of Energy. These projects include line item (capital asset) projects. According to NNSA’s Business Operating Procedure policy, program officials should establish the mission- and program-level requirements that apply to the development and execution of the program or project. The policy also states that program officials should translate the “need” in the Mission Need Statement into initial top-level requirements addressing such concerns as performance, supportability, physical and functional integration, security, test and evaluation, implementation, and quality assurance. The policy states that experience has shown that a formal process resulting in an agreed-upon definition of requirements for new systems, new capabilities, and updates or enhancements to systems is a prerequisite to proceeding to system or capability design. Furthermore, according to the policy, failure to do this results in rework and unnecessary costs and delays in schedule. NNSA policy states that Program Requirements Documents shall contain both mission and program requirements and should include the “objective” value—the desired performance, scope of work, cost, or schedule that the completed asset should achieve, as well as the “threshold” value—representing the minimum acceptable performance, scope of work, cost, or schedule that an asset must achieve. 
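The objective/threshold distinction in NNSA's requirements policy can be illustrated with a small sketch. The class, requirement name, and numbers below are hypothetical assumptions for illustration, not drawn from an actual Program Requirements Document.

```python
# Sketch of an "objective" (desired) vs. "threshold" (minimum acceptable)
# requirement value, per NNSA's Business Operating Procedure as described
# above. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    threshold: float  # minimum acceptable value the asset must achieve
    objective: float  # desired value the asset should achieve

    def assess(self, achieved: float) -> str:
        """Classify achieved performance against the two values."""
        if achieved >= self.objective:
            return "meets objective"
        if achieved >= self.threshold:
            return "acceptable (meets threshold)"
        return "unacceptable (below threshold)"

# Hypothetical performance requirement:
cooling = Requirement("process cooling capacity (tons)",
                      threshold=400.0, objective=600.0)
print(cooling.assess(500.0))  # acceptable (meets threshold)
```

The same structure applies to scope-of-work, cost, and schedule requirements: a completed asset landing between the threshold and objective values is acceptable but short of the desired outcome.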
NNSA’s requirements policy also states that the development of mission requirements should include summary documentation on how the requirements were identified or derived and that the documentation should contain explanations of the processes, documentation, and direction or guidance that govern the derivation or development of the requirements. The policy also states that the basis for the requirements, where not obvious, should be traceable to decisions or source documentation and that details relating to the traceability of requirements may be included in an attachment to the program requirements document. NNSA’s uranium modernization efforts under the broader program have focused on establishing NNSA program requirements, which NNSA considers in determining its infrastructure plans. In July 2014, NNSA appointed a uranium program manager to integrate all of the uranium program’s elements. According to NNSA uranium program officials and documents, uranium program elements include construction of the new UPF; repairs and upgrades to existing facilities; uranium sustainment activities for achieving specific uranium production capabilities and inventory risk reduction (the strategic placement of high-risk materials in lower-risk conditions); depleted uranium management; and technology development, deployment, and process relocation. In March 2009, we published a cost estimating guide to provide a consistent methodology that is based on best practices and that can be used across the federal government for developing, managing, and evaluating capital program cost estimates. The methodology outlined in the guide is a compilation of best practices that federal cost estimating organizations and industry use to develop and maintain reliable cost estimates throughout the life of a government acquisition program. 
According to the cost estimating guide, developing accurate life-cycle cost estimates has become a high priority for agencies in properly managing their portfolios of capital assets that have an estimated life of 2 years or more. A life-cycle cost estimate provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program. According to the guide, a life-cycle cost estimate can be thought of as a “cradle to grave” approach to managing a program throughout its useful life. This entails identifying all cost elements that pertain to the program from initial concept all the way through operations, support, and disposal. A life-cycle cost estimate encompasses all past (or sunk), present, and future costs for every aspect of the program, regardless of funding source. According to the guide, a life-cycle cost estimate can enhance decision making, especially in early planning and concept formulation of acquisition, as well as support budget decisions, key decision points, milestone reviews, and investment decisions. The guide also states that a credible cost estimate reflects all costs associated with a system (program)—we interpret this to also mean that it must be based on a complete scope of work—and the estimate should be updated to reflect changes in requirements (which affect the scope of work). Because of the inherent uncertainty of every estimate due to the assumptions that must be made about future projections, once life-cycle costs are developed it is also important to continually keep them updated, according to the guide. We also published a schedule guide in December 2015—as a companion to the cost estimating guide—that identifies best practices for scheduling the necessary work. 
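The "cradle to grave" accounting a life-cycle cost estimate entails can be shown with a toy example. The phase names follow the guide's develop/produce/deploy/sustain/dispose framing; the dollar figures are invented for illustration.

```python
# Sketch of a life-cycle cost estimate: every phase, past (sunk) and
# future, regardless of funding source. All figures are hypothetical,
# in millions of dollars.
phases = {
    "concept and design":    120.0,  # sunk costs count too
    "production":            480.0,
    "deployment":             60.0,
    "operations and support": 900.0,  # often the largest share over decades
    "disposal":               40.0,
}
life_cycle_cost = sum(phases.values())
print(f"${life_cycle_cost:,.0f}M")  # $1,600M
```

The point of the structure is completeness: omitting any phase (commonly long-term operations and support, or disposal) understates the estimate and weakens the budget and milestone decisions it is meant to support.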
According to the schedule guide, a well-planned schedule is a fundamental management tool that can help government programs use funds effectively by specifying when work will be performed in the future and measuring program performance against an approved plan. Moreover, an integrated master schedule can show when major events are expected as well as the completion dates for all activities leading up to these events, which can help determine if the program’s parameters are realistic and achievable. An integrated master schedule may be made up of several or several hundred individual schedules that represent portions of effort within a program. These individual schedules are “projects” within the larger program. An integrated master schedule integrates the planned work, the resources necessary to accomplish that work, and the associated budget, and it should be the focal point for program management. Furthermore, according to the schedule guide, an integrated master schedule constitutes a program schedule that includes the entire required scope of work, including the effort necessary from all government, contractor, and other key parties for a program’s successful execution from start to finish. Conformance to this best practice—that the schedule should capture all activities or scope of work—logically leads to another key schedule best practice: the sequencing of all activities. This best practice states that activities must be listed in the order in which they are to be carried out and be joined with logic. Consequently, developing a complete scope of work or knowing all of the activities necessary to accomplish the project’s objectives is critical to adhering to these best practices. In other words, a schedule is not complete and reliable if significant portions of the scope of work are not yet developed or are still uncertain, including over the longer term. 
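The two best practices above, capturing all activities and joining them with logic, can be sketched as a dependency-ordering check: if the schedule's logic is complete and non-circular, the activities can be sequenced. The activity names and the predecessor-list representation are hypothetical.

```python
# Sketch of sequencing schedule activities joined with predecessor logic
# (Kahn's topological sort). Activity names are hypothetical.
from collections import deque

def sequence(activities, predecessors):
    """Return activities in a logic-driven order; fail on circular logic."""
    indegree = {a: 0 for a in activities}
    successors = {a: [] for a in activities}
    for act, preds in predecessors.items():
        for p in preds:
            successors[p].append(act)
            indegree[act] += 1
    ready = deque(a for a in activities if indegree[a] == 0)
    order = []
    while ready:
        a = ready.popleft()
        order.append(a)
        for s in successors[a]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if len(order) != len(activities):
        raise ValueError("circular logic: schedule cannot be sequenced")
    return order

acts = ["design", "site prep", "construction", "equipment install", "startup"]
links = {"construction": ["design", "site prep"],
         "equipment install": ["construction"],
         "startup": ["equipment install"]}
print(sequence(acts, links))
# ['design', 'site prep', 'construction', 'equipment install', 'startup']
```

This also makes the guide's caveat concrete: if significant activities are missing from `acts`, the ordering is still computable but meaningless, which is why an incomplete scope of work undermines the whole schedule.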
In prior reports from February 2014, November 2014, and August 2016, we included recommendations concerning NNSA’s development of life-cycle cost estimates or an integrated master schedule for certain projects and programs, as called for in our cost estimating and schedule best practice guides. Specifically:
In February 2014, we recommended that to develop reliable cost estimates for its plutonium disposition program, among other things, the Secretary of Energy should direct the NNSA office responsible for managing the program to, as appropriate, revise and update the program’s life-cycle cost estimate following the 12 key steps described in our Cost Estimating Guide for developing high-quality cost estimates.
In our November 2014 report, we recommended that to enhance NNSA’s ability to develop reliable cost estimates for its projects and for its programs that have project-like characteristics, the Secretary of Energy should revise DOE directives that apply to programs to require that DOE and NNSA and its contractors develop cost estimates in accordance with the 12 cost estimating best practices, including developing life-cycle cost estimates for programs.
In August 2016, regarding the preparation of integrated master schedules, we recommended that to ensure that NNSA’s future schedule estimates for the revised Chemistry and Metallurgy Research Replacement project—a key element of NNSA’s plutonium program—provide the agency with reasonable assurance regarding meeting the project’s completion dates, the Secretary should direct the Under Secretary for Nuclear Security, in his capacity as the NNSA Administrator, to develop future schedules for the revised project that are consistent with current DOE project management policy and scheduling best practices. Specifically, the Under Secretary should develop and maintain an integrated master schedule that includes all project activities under all subprojects prior to approving the project’s first CD-2 decision. 
The agency generally agreed with these recommendations and has initiated various actions intended to implement them, including revising certain DOE orders, but it has not completed all actions needed to fully address the recommendations. To ensure that new technologies are sufficiently mature in time to be used successfully, NNSA uses a systematic approach—Technology Readiness Levels (TRL)—for measuring the technologies’ technical maturity. TRLs were pioneered by the National Aeronautics and Space Administration and have been used by the Department of Defense and other agencies in their research and development efforts for several years. As shown in table 1, TRLs start with TRL 1, which is the least mature, and go through TRL 9, the highest maturity level and at which the technology as a total system is fully developed, integrated, and functioning successfully in project operations. In November 2010, when NNSA’s original approach was to consolidate Y-12’s uranium processing capabilities into a single large facility, we reported that NNSA did not expect to have optimal assurance as defined by TRL best practices that 6 of the 10 new technologies being developed for construction of the new UPF would work as intended before project critical decisions are made. Our November 2010 report also concluded that because all of the technologies being developed for construction of the new UPF would not achieve optimal levels of readiness prior to project critical decisions, NNSA might lack assurance that all technologies would work as intended. The report further stated that this could force the project to revert to existing or alternate technologies, which could result in design changes, higher costs, and schedule delays. In September 2011, DOE issued a technology readiness assessment guide for the agency, which states that new technologies should reach TRL 6 by CD 2, when the scope of work, cost estimate, and schedule baselines are to be approved. 
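The TRL-6-by-CD-2 expectation in DOE's 2011 guide amounts to a simple gate check across a project's critical technologies. The technology names and maturity levels below are hypothetical, not NNSA's actual assessments.

```python
# Sketch of the TRL gate in DOE's 2011 technology readiness assessment
# guide described above: new technologies should reach TRL 6 by CD 2.
# TRLs run from 1 (least mature) to 9 (proven in operations).
CD2_MIN_TRL = 6

def cd2_readiness(technologies: dict) -> list:
    """Return the technologies still below the TRL 6 gate at CD 2."""
    return [name for name, trl in technologies.items() if trl < CD2_MIN_TRL]

# Hypothetical project with three critical technologies:
techs = {"casting": 6, "special oxide production": 5, "salvage": 4}
print(cd2_readiness(techs))  # ['special oxide production', 'salvage']
```

A non-empty result flags the assurance gap the 2010 report described: baselining scope, cost, and schedule at CD 2 while some technologies remain immature risks design changes, higher costs, and delays if those technologies later fail to perform.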
The guide also encouraged project managers to reach TRL 7 prior to CD 3, or when the start of construction is approved. In April 2014, we provided additional information on technology development efforts for the UPF and identified five additional technology risks since our November 2010 report. In May 2016, DOE strengthened TRL requirements and updated DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets, which states that project managers shall reach TRL 7 prior to CD 2 for major system projects or first-of-a-kind engineering endeavors. In August 2016, we provided an exposure draft to the public to obtain input and feedback on our technology readiness guide, which identifies best practices for evaluating the readiness of technology for use in acquisition programs and projects. NNSA documents we reviewed and program officials we interviewed indicate that NNSA has made progress in developing a revised scope of work, cost estimate, and schedule for the new UPF, potentially stabilizing escalating project costs and technical risks experienced under the previous strategy. According to NNSA’s 2014 high-level strategic plan for the uranium program, NNSA changed its strategy for managing the overall uranium program, including the UPF, that year, which resulted in the need to develop a new scope of work. NNSA has reduced the scope of work for construction of the new UPF—the most expensive uranium program element—as a result of key adjustments NNSA had made to program requirements. For example, NNSA’s October 2014 revision of program requirements for construction of the new UPF resulted in the following changes: NNSA modified the processing capability for casting uranium initially intended for construction of the new UPF, which then allowed the agency to scale back certain capabilities envisioned for the facility, potentially reducing project costs. 
NNSA significantly simplified processing capabilities and reduced critical technologies needed for construction, due to the reduction in the scope of work for the new UPF from the 10 technologies the agency planned to use prior to 2014 to 3, according to program officials. NNSA officials told us that this change was needed to help control escalating costs and technical risks. NNSA integrated graded security and safety factors into the new UPF design, which resulted in cost savings and schedule improvement for the UPF project, according to agency officials. According to NNSA’s fiscal years 2017 and 2018 budget requests, NNSA expects to approve formal scope of work, cost, and schedule baseline estimates for construction of the new UPF as the designs for the Main Process and Salvage and Accountability Buildings subprojects—the two largest subprojects—reach at least 90 percent completion, which is consistent with DOE’s order on project management for construction of these types of facilities. According to NNSA’s fiscal year 2017 and 2018 budget requests, construction of the new UPF will occur in distinct phases, by key subproject. The seven key subprojects are as follows:
Main Process Building Subproject: This subproject includes construction of the main nuclear facility that contains casting and special oxide production. Support structures include a secure connecting portal to the Highly Enriched Uranium Materials Facility.
Salvage and Accountability Building Subproject: This subproject includes work intended to construct a facility for handling chemicals and wastes associated with uranium processing, as well as decontamination capabilities, among other things.
Mechanical Electrical Building Subproject: This subproject includes work intended to provide a building for mechanical, electrical, heating, ventilating, air conditioning, and utility equipment for the Main Process and Salvage and Accountability buildings.
Site Infrastructure and Services Subproject: This subproject includes work intended for demolition, excavation, and construction of a parking lot, security portal, and support building.
Process Support Facilities Subproject: This subproject includes work intended to provide chilled water and chemical and gas supply storage for the UPF.
Substation Subproject: This subproject includes work intended to provide power to the new UPF and additional capacity for the remainder of the Y-12 Plant.
Site Readiness Subproject: This subproject included work to relocate Bear Creek Road and construct a new bridge and haul road.
As of May 2017, NNSA had developed and approved revised formal scope of work, cost, and schedule baseline estimates for four of the seven subprojects. NNSA expects to approve such baseline estimates for all of the remaining subprojects—including the two largest subprojects—by the second quarter of fiscal year 2018. NNSA also plans to validate the estimates through an independent cost estimate at that time. Concurrently with its approval and validation of the formal baseline estimates—which constitutes CD 2 in NNSA’s project management process—NNSA intends to approve the start of construction, which constitutes CD 3 in that process. Table 2 shows estimated or approved time frames for CD 2, 3, and 4 milestones, as well as preliminary or (where available) formal cost baseline estimates for each subproject. NNSA has not developed a complete scope of work, life-cycle cost estimate, or integrated master schedule for its overall uranium program, and it has no time frame for doing so. In particular, it has not developed a complete scope of work for repairs and upgrades to existing facilities, nor has it done so for other key uranium program elements. Therefore, NNSA does not have the basis to develop a life-cycle cost estimate or an integrated master schedule for its overall uranium program. 
NNSA has not developed a complete scope of work to repair and upgrade existing facilities for the overall uranium program, even though these activities could be among the most expensive and complicated non-construction portions of the uranium program. According to a July 2014 memorandum from the NNSA Administrator, the uranium program manager is expected to, among other things, identify the scope of work of new construction and infrastructure repairs and upgrades to existing facilities necessary to support the full uranium mission. NNSA is still evaluating a November 2016 initial implementation plan, proposed by the Y-12 contractor, for the repairs and upgrades that broadly outlines the scope of work. We found that some areas of the scope of work are more fully defined than others. For example, NNSA’s implementation plan identifies the scope of work to conduct electrical power distribution repairs and upgrades in Buildings 9215 and 9204-2E—which were constructed in the 1950s and 1960s, respectively—beginning in fiscal year 2017. However, NNSA does not have a complete scope of work to serve as the basis for its $400 million estimate. Officials we interviewed said that the agency intends to develop, each year, the complete and detailed scope of work to be done in the following year or two, including the work related to infrastructure investment. We also found that one significant area of the scope of work that has not been developed concerns repairs and upgrades to address certain safety issues confirmed by the Defense Nuclear Facilities Safety Board. For example, according to the board’s February 2015 letter to NNSA, earthquakes or structural performance problems in Buildings 9215 and 9204-2E could contribute to an increased risk for structural collapse and release of radiological material.
NNSA officials said they have not fully developed the long-term scope of work to address the safety issues that the board confirmed because much of this work depends on the results of upcoming seismic and structural assessments that the agency expects to conduct in or after fiscal year 2018. According to these officials, the need for these assessments was not apparent until after 2014, when NNSA decided to rely, in part, on aging existing facilities to meet uranium program requirements. NNSA then had to adjust plans in alignment with the new circumstances that required repairs and upgrades to these facilities. According to NNSA program officials, the planned infrastructure repairs and upgrades will address many, but not all, of the safety issues identified by the board. For example, NNSA program officials stated that they do not expect Building 9215, which NNSA expects to be in operation through the late 2030s, to meet all modern safety standards even with planned upgrades. NNSA officials also stated that planned upgrades have not been finalized and will focus on those that balance cost and risk. Other aspects of the scope of work for repairs and upgrades have been developed but may not be stable because NNSA continues to review and adjust program requirements that affect the scope of work. For example, during our examination of how NNSA established uranium purification requirements, NNSA program officials told us that they identified a more accurate program requirement for purified uranium that increased the required annual processing throughput capability for purified uranium from 450 to 750 kilograms. As a result, in August 2016, NNSA program officials told us that NNSA will need to add to the capacity of the equipment to be installed in Building 9215 to convert uranium that contains relatively high amounts of impurities, such as carbon, into a more purified form—increasing the scope of work for this upgrade.
The uranium program manager told us that, in an effort to make the requirement more accurate, NNSA changed its approach to determining the requirement so that it relied less on historical data and more on data on the purification levels of uranium inventories on hand, among other considerations. This program manager also told us that accurate and stable program requirements establish the basis for the infrastructure and equipment that will be needed to meet program goals, such as processing uranium for nuclear components necessary to meet nuclear weapons stockpile needs. The ongoing review of program requirements, with minor adjustments, is expected and necessary to ensure accuracy, according to NNSA officials. We also found that NNSA has not developed complete scopes of work for other uranium program elements, including uranium sustainment activities, depleted uranium management, and technology development, based on our review of documents and discussion with NNSA officials. NNSA officials we interviewed told us that NNSA is working to develop these scopes of work, but the agency has no time frames for completion. We determined that NNSA has not yet developed the complete scope of work for activities to reduce the risk associated with and sustain its uranium inventory, based on our review of program documents and interviews with NNSA program officials. These activities include efforts to remove higher-risk materials from higher-risk conditions and strategically place them in lower-risk conditions. For example, NNSA expects to reprocess the uranium contained in organic solutions, which is a relatively higher-risk form of uranium storage, for repackaging and eventual removal from deteriorating, higher-risk buildings, such as Building 9212. 
These reprocessed materials and other materials that are more easily repackaged, such as nuclear components from dismantled nuclear weapons, are expected to be relocated to lower-risk storage areas, such as the Highly Enriched Uranium Materials Facility, which became operational in 2010. NNSA program officials told us that they have developed a detailed scope of work for the removal of higher-risk materials from some Y-12 areas but have not developed the complete scope of work for their removal from other facilities or for transferring these materials to the storage facility or other interim locations. NNSA officials we interviewed told us that the agency recognized in December 2015 that requirements for depleted uranium were incomplete, which could affect the scope of work for meeting these requirements. In December 2015, NNSA completed its initial analysis of depleted uranium needs, by weapon system, to determine potential gaps in material availability in the future. This initial analysis was an important first step in defining requirements for depleted uranium, but the program element is in an early stage of development, according to NNSA program officials. According to NNSA officials, NNSA is developing the scope of work necessary to sustain depleted uranium capabilities and infrastructure at Y-12, and it is evaluating strategies to procure or produce additional feedstock of high-purity depleted uranium to support production needs. NNSA’s broad strategy to replace Building 9212 capabilities by 2025— through plans for the construction of a new UPF under a reduced scope of work—currently involves plans to install new uranium processing capabilities in other existing Y-12 buildings, including Buildings 9215 and 9204-2E, and will rely on developing and installing new technologies. 
Two of the uranium processing technologies—calciner and electrorefining—are at later stages of development, and the scope of work needed to bring them to full maturity is relatively straightforward, according to NNSA program officials. One technology—chip processing—is less mature, but the remaining activities necessary to potentially develop it to full maturity have been determined, according to NNSA program officials. Also, according to these officials, for one technology that has been deferred, the remaining activities necessary to develop it to full maturity are less clear. Calciner technology enables the processing of certain uranium-bearing solutions into a dry solid so that it can be stored pending further processing in the future. According to an NNSA uranium program official, NNSA had determined as of May 2015 that the calciner technology had reached TRL 6—the level required prior to CD 2 (when scope of work, cost, and schedule baselines are to be approved) under DOE’s technology readiness guide. After finishing calciner equipment installation in Building 9212 and project completion, expected in fiscal year 2022, NNSA plans to conduct a readiness review to demonstrate that the technology meets TRL 8 (meaning that it has been tested and demonstrated), according to an NNSA uranium program official. Electrorefining technology applies a voltage that drives a chemical reaction to remove impurities from uranium. According to NNSA documents, using this technology eliminates various hazards associated with current chemical purification processes, such as using hydrogen fluoride and certain solvents, and allows a 4-to-1 reduction in square footage to operate compared with existing technologies. As of December 2015, NNSA had determined that the electrorefining technology had reached TRL 6, according to a key NNSA program official.
After finishing electrorefining equipment installation in Building 9215 and project completion, expected in fiscal year 2022, NNSA plans to conduct a readiness review to demonstrate that the technology meets TRL 8, according to an NNSA uranium program official. Direct electrolytic reduction technology could convert uranium oxide to uranium metal using an electrochemical process similar, but not identical, to electrorefining. It was assessed at TRL 4 as of September 2014. According to NNSA program officials, NNSA may pursue direct electrolytic reduction technology as a follow-on to electrorefining, but NNSA has not determined whether there is a mission need for this technology. Currently, NNSA has deferred funding for it until fiscal year 2019. Chip processing technology converts enriched uranium metal scraps from machining operations into a form that can be re-used. This technology is already in use, but NNSA is investigating improved technology to potentially simplify the process and reduce the number of chip processing steps, according to NNSA program officials. As of July 2016, NNSA had determined that the new technology had reached TRL 5, and the agency plans to reach TRL 6 by June 2017. Because NNSA has not developed a complete scope of work for the overall uranium program, it does not have the basis to develop a life-cycle cost estimate or an integrated master schedule for the program. As noted previously, NNSA has made progress in developing a cost estimate for the new UPF, and this estimate will be an essential component of a life-cycle cost estimate for the overall program. For other program elements, discussed below, NNSA either has rough or no estimates of the total costs. According to our analysis of information from NNSA documents and program officials, these program elements may cost nearly $1 billion over the next 2 decades.
Repairs and upgrades to existing facilities: NNSA’s contractor’s implementation plan includes a rough-order-of-magnitude cost estimate of $400 million over the next 20 years—roughly $20 million per year—for repairs and upgrades to existing facilities.

Uranium sustainment activities for achieving inventory risk reduction: Activities to reduce the risk associated with and sustain NNSA’s uranium inventory are expected to cost roughly $25 million per year in fiscal years 2017 through 2025, for a total of around $225 million, according to NNSA program officials.

Depleted uranium management: NNSA has not estimated costs for meeting depleted uranium needs for weapons systems. Current costs related to managing depleted uranium are broadly shared among various NNSA program areas. NNSA is exploring options and costs of increasing the supply of depleted uranium to meet NNSA needs.

Technology development: Estimated costs for development of technology to be installed in existing Y-12 buildings are roughly $30 million per year in fiscal years 2017 through 2025, for a total of around $270 million, according to NNSA program officials.

Our cost estimating guide states that a credible cost estimate reflects all costs associated with a system (program)—i.e., it must be based on a complete scope of work—and that the estimate should be updated to reflect changes in requirements (which affect the scope of work). Because NNSA has not developed the complete scope of work for each program element and the overall uranium program, NNSA does not have the basis for preparing a credible life-cycle cost estimate for the program. Having a life-cycle cost estimate can enhance decision making, especially in early planning and concept formulation of acquisition, as well as support budget decisions, key decision points, milestone reviews, and investment decisions, according to our cost estimating guide.
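The per-element figures above can be cross-checked with a simple rollup. The sketch below is an illustrative calculation using only the rough-order-of-magnitude figures cited in this report, not an NNSA estimate; depleted uranium management is omitted because no cost estimate exists for it.

```python
# Illustrative rollup of the rough non-UPF uranium program cost figures
# cited in this report (in millions of dollars). Not an NNSA estimate.

repairs_total = 400                    # repairs/upgrades over the next 20 years
repairs_per_year = repairs_total / 20  # roughly $20 million per year

years = 2025 - 2017 + 1                # fiscal years 2017 through 2025 (9 years)
sustainment_total = 25 * years         # ~$25 million per year -> ~$225 million
technology_total = 30 * years          # ~$30 million per year -> ~$270 million

# Depleted uranium management has no estimate and is excluded from the sum.
overall = repairs_total + sustainment_total + technology_total

print(f"Repairs/upgrades: ~${repairs_per_year:.0f} million per year")
print(f"Estimated program elements: ~${overall} million")  # ~$895 million
```

The resulting total of roughly $895 million is consistent with the report's statement that these program elements may cost nearly $1 billion over the next 2 decades.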
For the uranium program, a life-cycle cost estimate could better inform decision making, including by Congress. Uranium program managers indicated that they plan to eventually develop a life-cycle cost estimate for the overall uranium program, but they have no time frame for doing so and said that it may take several years. In addition, NNSA has not developed an integrated master schedule for its uranium program as called for in our schedule guide. An integrated master schedule for the uranium program would need to include individual schedules that represent portions of effort within the program—that is, program elements. As noted earlier, NNSA has made progress in developing a schedule for the UPF project and expects to complete development of schedule baselines for all UPF subprojects in 2018; this schedule information will be an essential component of an integrated master schedule for the overall program. For other program elements, however, NNSA does not have a basis to develop a complete schedule because, as discussed above, NNSA has not developed a complete scope of work. NNSA’s program guidance recommends development of an integrated master schedule and states that having one supports effective management of program scope, risk, and day-to-day activities. Specifically, the guidance states that during the initial phases of a program, an integrated master schedule provides an early understanding of the required scope of work, key events, accomplishment criteria, and the likely program structure by depicting the progression of work through the remaining phases. Furthermore, it communicates the expectations of the program team and provides traceability to the management and execution of the program. However, NNSA’s guidance does not always explicitly require the development of such a schedule—the guidance allows for the tailoring of the agency’s management approach based on the particular program being managed.
Uranium program managers indicated that they plan to eventually develop an integrated master schedule for the uranium program but were uncertain when this schedule may be developed. In the meantime, NNSA plans to spend tens of millions of dollars annually on uranium program activities—including $20 million per year for repairs and upgrades to existing buildings—without providing decision makers with an understanding of the complete scope of work, key events, accomplishment criteria, and the likely program structure. Under federal standards for internal control, management should use quality information to achieve the entity’s objectives, and, among other characteristics, quality information is provided on a timely basis. Without NNSA setting a time frame for when it will (1) develop the complete scope of work for the overall uranium program, to the extent practicable, and (2) prepare a life-cycle cost estimate and integrated master schedule, NNSA does not have reasonable assurance that decision makers will have timely access to essential program management information—risking unforeseen cost escalation and delays in NNSA’s efforts to meet the nation’s uranium needs. NNSA is making efforts to modernize uranium processing capabilities that are crucial to our nation’s ability to maintain its nuclear weapons stockpile and fuel its nuclear-powered naval vessels. NNSA’s modernization efforts will likely cost several billions of dollars and take at least 2 decades to execute. As part of these efforts, NNSA is planning to construct a new UPF, using a revised approach intended to help control escalating costs and schedule delays. NNSA has made progress in developing a scope of work, cost estimates, and schedules for the new UPF. 
However, the success of the new UPF approach, which relies on support capabilities outside of the new UPF project, depends on the successful completion and integration of many other projects and activities that comprise the overall uranium program, including repairs and upgrades to existing Y-12 facilities needed for housing uranium processing capabilities. NNSA has not developed a complete scope of work for its overall uranium program, nor has it set a time frame for doing so. In the interim, NNSA cannot adhere to best practices, such as developing a credible life-cycle cost estimate or an effective long-term, integrated master schedule for the program because of gaps in information about future activities and their associated costs. Without NNSA setting a time frame for when it will (1) develop a complete scope of work for the overall uranium program, to the extent practicable, and (2) prepare a life-cycle cost estimate and an integrated master schedule for the program, NNSA does not have reasonable assurance that decision makers will have timely access to essential program management information for this costly and important long-term program. We recommend that the NNSA Administrator set a time frame for when the agency will (1) develop the complete scope of work for the overall uranium program to the extent practicable and (2) prepare a life-cycle cost estimate and an integrated master schedule for the overall uranium program. We provided a draft of this report to DOE and NNSA for their review and comment. NNSA provided written comments, which are reproduced in full in appendix II, as well as technical comments, which we incorporated in our report as appropriate. In its comments, NNSA generally agreed with our recommendation. NNSA stated that the recommendation reflects the logical next steps in any program’s maturity and is consistent with its existing planning goals. 
NNSA further stated that while it is too early to have developed full scope and cost estimates for the entire program, it fully intends to implement the recommendation at the appropriate times in the uranium program’s continuing development. In particular, NNSA stated that it is developing a complete scope of work, which is necessary for a fully informed program cost estimate, and anticipates this to be a multiyear effort. Regarding cost estimates, NNSA said that initial cost estimates it develops will continue to reflect strategies and emerging risks over the course of the Future Years Nuclear Security Plan—a 5-year plan typically used as part of the basis for NNSA congressional budget requests for each fiscal year. NNSA stated that once stable implementation plans are developed for its activities, it will consider whether there is value in further extending the time frame for estimates. NNSA further stated that it plans to complete an initial coordinated program schedule by December 31, 2018, and that the schedule would continue to be updated as plans and strategies evolve. NNSA also provided additional examples to illustrate the program’s progress in improving safety, relocating processes, improving infrastructure, and constructing the UPF, among other things. We incorporated several of these examples in the report where appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of the National Nuclear Security Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
To describe the status of the National Nuclear Security Administration’s (NNSA) efforts to develop a revised scope of work, cost estimate, and schedule for the new Uranium Processing Facility (UPF) project, we reviewed NNSA program planning documents, and any updates, concerning cost and budget and interviewed agency officials to determine the effect of uranium program strategy revisions on the UPF project’s scope of work, cost, and schedule. To examine the scope of work for the UPF project, which directly impacts the project’s cost and schedule, we reviewed NNSA business operating procedures for developing program requirements and the steps taken to identify and update requirements, which would apply to the construction of the new UPF. We interviewed program officials to understand how they defined and adjusted program requirements and to understand the potential effects of any adjustments on NNSA’s infrastructure plans. For example, NNSA officials stated that they followed key portions of the applicable Business Operating Procedure (BOP) regarding program requirements for construction projects. As such, we reviewed those portions of BOP-06.02 that the officials stated were applicable, including stipulations that requirements include the “threshold” value (the minimum acceptable performance, scope of work, cost, or schedule that construction of the new UPF must achieve), and “objective” value (the desired performance, scope of work, cost, or schedule that the new UPF should achieve). To review project requirements for the construction of the new UPF, we reviewed copies of the most recent requirements revision documents—NNSA’s project and program requirements documents. Specifically, we reviewed the requirements to determine whether requirements for the construction of the new UPF specified both threshold and objective requirements. 
To examine the extent to which NNSA has developed a complete scope of work, life-cycle cost estimate, and integrated master schedule for the overall uranium program, we reviewed NNSA program-planning documents concerning cost and budget and interviewed NNSA’s program manager and other program and contractor officials. We examined information regarding the broader uranium program, including NNSA’s efforts to repair and upgrade existing Y-12 facilities and other key uranium program elements. Specifically, to examine the scope of work for key elements of the overall uranium program—this scope of work directly impacts the program’s cost and schedule—we reviewed NNSA planning, strategy, and implementation-related documents for the program. We reviewed NNSA business operating procedures for developing program requirements and the steps taken to identify and update requirements for unique processing capabilities to be housed in existing facilities external to the UPF. We interviewed program officials to understand how they defined and adjusted program requirements and to understand the potential effects of any adjustments on NNSA’s infrastructure plans. In particular, we reviewed requirements external to the construction of the new UPF that were determined to be critical in meeting key program goals, according to NNSA officials, such as uranium purification requirements. We interviewed officials to determine the approach/process used for requirement-setting, the data used, and how NNSA analyzed the data. In addition, we reviewed detailed program planning documents, such as the Y-12 Enriched Uranium Facility Extended Life Program Report and Highly Enriched Uranium Mission Strategy Implementation Plan to learn about the infrastructure repairs and upgrades NNSA identified it needs to meet facility safety and other requirements. 
To obtain the views of independent subject matter experts on the structural, seismic, and safety condition of existing Y-12 facilities, we reviewed the Defense Nuclear Facilities Safety Board’s 2014 report that addressed the subject and that included conclusions and recommendations. In September 2016, we also spoke with board officials to determine if there were updates, additions, or changes to its letter; the officials said there were none and that the Y-12 facility structural concerns expressed in the 2014 letter remain. To further examine the estimated cost and schedule for the overall uranium program from a broader perspective, we gathered and analyzed information regarding the extent to which NNSA has developed a life-cycle cost estimate and an integrated master schedule as called for in best practices. We reviewed best practices for cost and schedule as described in our Cost Estimating Guide and Schedule Guide. For the cost estimating guide, GAO cost experts established a consistent methodology that is based on best practices that federal cost estimating organizations and industry use to develop and maintain reliable cost estimates. Developing a life-cycle cost estimate and an integrated master schedule for the overall program is critical to successfully managing a program. We identified the benefits of using these best practices and interviewed program officials to obtain information on the status of their adherence to these best practices in managing the overall uranium program. In addition to the individual named above, Jonathan Gill (Assistant Director), Martin Campbell, Antoinette Capaccio, Jennifer Echard, Cynthia Norris, Christopher Pacheco, Sophia Payind, Timothy M. Persons, Karen Richey, Jeanette Soares, and Kiki Theodoropoulos made significant contributions to this report. Program Management: DOE Needs to Develop a Comprehensive Policy and Training Program. GAO-17-51. Washington, D.C.: November 21, 2016.
DOE Project Management: NNSA Needs to Clarify Requirements for Its Plutonium Analysis Project at Los Alamos. GAO-16-585. Washington, D.C.: August 9, 2016.

Modernizing the Nuclear Security Enterprise: NNSA’s Budget Estimates Increased but May Not Align with All Anticipated Costs. GAO-16-290. Washington, D.C.: March 4, 2016.

Modernizing the Nuclear Security Enterprise: NNSA Increased Its Budget Estimates, but Estimates for Key Stockpile and Infrastructure Programs Need Improvement. GAO-15-499. Washington, D.C.: August 6, 2015.

DOE and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices. GAO-15-37. Washington, D.C.: December 11, 2014.

Project and Program Management: DOE Needs to Revise Requirements and Guidance for Cost Estimating and Related Reviews. GAO-15-29. Washington, D.C.: November 25, 2014.

Department of Energy: Interagency Review Needed to Update U.S. Position on Enriched Uranium That Can Be Used for Tritium Production. GAO-15-123. Washington, D.C.: October 14, 2014.

Nuclear Weapons: Some Actions Have Been Taken to Address Challenges with the Uranium Processing Facility Design. GAO-15-126. Washington, D.C.: October 10, 2014.

Nuclear Weapons: Technology Development Efforts for the Uranium Processing Facility. GAO-14-295. Washington, D.C.: April 18, 2014.

Plutonium Disposition Program: DOE Needs to Analyze the Root Causes of Cost Increases and Develop Better Cost Estimates. GAO-14-231. Washington, D.C.: February 13, 2014.

Nuclear Weapons: Information on Safety Concerns with the Uranium Processing Facility. GAO-14-79R. Washington, D.C.: October 25, 2013.

Nuclear Weapons: Factors Leading to Cost Increases with the Uranium Processing Facility. GAO-13-686R. Washington, D.C.: July 12, 2013.

Nuclear Weapons: National Nuclear Security Administration’s Plans for Its Uranium Processing Facility Should Better Reflect Funding Estimates and Technology Readiness. GAO-11-103. Washington, D.C.: November 19, 2010.
Uranium is crucial to our nation's ability to maintain its nuclear weapons stockpile. NNSA processes uranium to meet this need. In 2004, NNSA began plans to build a new UPF that would consolidate capabilities currently housed in deteriorating buildings; by 2012, the project had a preliminary cost of $4.2 billion to $6.5 billion. To control rising costs, NNSA changed its approach in 2014 to reduce the scope of the new UPF and move uranium processing capabilities once intended for the UPF into existing buildings. The broader uranium program also includes the needed repairs and upgrades to these existing buildings. The National Defense Authorization Act for Fiscal Year 2013, as amended, includes a provision for GAO to periodically assess the UPF. This is the fifth report; it (1) describes the status of NNSA's efforts to develop a revised scope of work, cost estimate, and schedule for the UPF project, and (2) examines the extent to which NNSA has developed a complete scope of work, life-cycle cost estimate, and integrated master schedule for the overall uranium program. GAO reviewed program documents on planning, strategy, cost, and implementation and interviewed program officials to examine the program's scope, cost, and schedule. The National Nuclear Security Administration (NNSA) has made progress in developing a revised scope of work, cost estimate, and schedule for its project to construct a new Uranium Processing Facility (UPF), according to NNSA documents and program officials. As of May 2017, NNSA had developed and approved a revised formal scope of work, cost, and schedule baseline estimates for four of the seven subprojects into which the project is divided. NNSA expects to approve such baseline estimates for the other three—including the two largest subprojects—by the second quarter of fiscal year 2018. NNSA also plans to validate the estimates by then through an independent cost estimate.
NNSA, however, has not developed a complete scope of work, life-cycle cost estimate (i.e., a structured accounting of all cost elements for a program), or integrated master schedule (i.e., encompassing individual project schedules) for the overall uranium program, and it has no time frame for doing so. In particular, it has not developed a complete scope of work for repairs and upgrades to existing buildings in which NNSA intends to house some uranium processing capabilities and has not done so for other key program elements. For example:

The scope of work for a portion of the upgrades and repairs will not be determined until after fiscal year 2018, when NNSA expects to conduct seismic and structural assessments to determine what work is needed to address safety issues in existing buildings.

NNSA has developed an initial implementation plan that roughly estimates a cost of $400 million over the next 20 years for the repairs and upgrades, but a detailed scope of work to support this estimate is not expected to be fully developed except on an annual basis in the year(s) that immediately precede the work.

Because NNSA has not developed a complete scope of work for the overall uranium program, it does not have the basis to develop a life-cycle cost estimate or an integrated master schedule. Successful program management depends in part on developing a complete scope of work, life-cycle cost estimate, and an integrated master schedule, as GAO has stated in its cost estimating and schedule guides. In previous work reviewing other NNSA programs, GAO has found that when NNSA did not have a life-cycle cost estimate based on a complete scope of work, the agency could not ensure its life-cycle cost estimate captured all relevant costs, which could result in cost overruns. The revised cost estimate that NNSA is developing for the new UPF will be an essential component of a life-cycle cost estimate for the overall program.
However, for other program elements, NNSA has either rough or no estimates of the total costs and has not set a time frame for developing these costs. Federal internal control standards call for management to use quality information to achieve an entity's objectives, and among other characteristics, such information is provided on a timely basis. Without setting a time frame to complete the scope of work and prepare a life-cycle cost estimate and integrated master schedule for the program, NNSA does not have reasonable assurance that decision makers will have timely access to essential program management information—risking unforeseen cost escalation and delays. GAO recommends that NNSA set a time frame for completing the scope of work, life-cycle cost estimate, and integrated master schedule for the overall uranium program. NNSA generally agreed with the recommendation and has ongoing efforts to complete these actions.
Public elementary and secondary education is primarily a state and local government responsibility, although the federal government provides supplementary funds to public schools for a variety of purposes, including grants for disadvantaged students, special education students, and teacher improvement. The federal government provided about 8 percent of funding for public education in school year 2005-2006. The allocation of federal funds reflects a concern with student outcomes as evidenced by the Elementary and Secondary Education Act of 1965, as amended, which has the goal of ensuring that all children have a fair, equal, and significant opportunity to obtain a high-quality education. The No Child Left Behind Act of 2001 (NCLBA), which reauthorized and amended ESEA, requires school districts to make improvements when they fail to make adequate yearly progress in raising student achievement. The federal government has historically provided for the education of Indian children in part through the Department of the Interior’s Bureau of Indian Affairs. Interior’s Bureau of Indian Education, previously a part of the Bureau of Indian Affairs, funds 170 schools serving students living on Indian lands; however, most Indian students now attend public schools. In some cases, these schools and Indian Impact Aid schools are in the same communities, and students may transfer from one to the other. Of the approximately 580,000 Indian children who attend public elementary and secondary schools in the United States, about one-third are enrolled in Indian Impact Aid school districts. An estimated 45,000 Indian students attend Bureau of Indian Education schools. The remaining Indian children attend other public schools or private schools.
Congress established the Impact Aid program in 1950 to assist public school districts that have lost property tax revenue due to the presence of tax-exempt federal property, or that have experienced increased costs due to the enrollment of federally connected children, including children living on Indian lands, military bases, or other federal lands for which school districts receive no tax revenue. Public school districts qualify for and receive Impact Aid, in part, on the basis of the number of federally connected students they serve, such as those who reside on military bases, Indian lands, or other federal lands, or others who have parents in the military or who work on federal lands. The largest component of the Impact Aid program is basic support payments, which provided about $1 billion for fiscal year 2008 to about 1,200 public school districts, including about $520 million to 567 Indian Impact Aid school districts for students living on Indian lands in 27 states. (See app. II for preliminary fiscal year 2009 data.) School districts eligible for Impact Aid decide how to use these funds. For example, they may use these funds for costs associated with teacher salaries and benefits; transportation; textbooks; and facility maintenance, repair, renovation, and construction. Some districts also hold a portion of these funds in reserve for use in future years. To be eligible for basic support payments for having students living on Indian lands, a school district must have at least 400 federally connected students, or these students must comprise at least 3 percent of their total number of students. The method for determining Indian Impact Aid basic support payments provides more funding per federally connected student in school districts where these students are a larger share of the total number of students and the basic support payments represent a larger share of current school district expenditures. 
For Indian Impact Aid school districts, the average amount of this basic support per student living on Indian lands was $4,534 in fiscal year 2008. After adjusting for inflation, this average rose 7 percent from fiscal years 2002 to 2005 and has subsequently fallen back to about fiscal year 2002 levels. The Impact Aid program also includes funding for construction, through both a formula grant program and a competitive grant program for school districts with high percentages of children living on Indian lands or high percentages of children who have a parent on active military duty. Congress provided about $17.8 million to the formula grant program in both fiscal years 2006 and 2007, but no funding for fiscal years 2008 or 2009. Formula grants are restricted to Impact Aid school districts with at least 50 percent of students living on Indian land or at least 50 percent of students who have a parent on active military duty. The competitive construction grant program did not receive any funding in fiscal years 2006 or 2007, but received approximately $17 million for fiscal years 2008 and 2009. These grants are for school facility emergencies and modernization and are restricted to school districts with at least 40 percent of students living on Indian lands or at least 40 percent of students who have a parent on active military duty. The competitive grant program to date has provided funding only for emergency repairs. In July 2009, this program awarded grants from the fiscal year 2008 appropriation—totaling about $17 million—to 13 Indian Impact Aid school districts. The American Recovery and Reinvestment Act of 2009 (Recovery Act) appropriated $100 million for construction projects by Impact Aid school districts. The Recovery Act requires that Education provide nearly $40 million of this appropriation as formula grants and nearly $60 million as competitive grants. 
The Recovery Act also provides a $53.6 billion State Fiscal Stabilization Fund, some of which may be available to provide funding to school districts, including Indian Impact Aid school districts, for a variety of purposes (e.g., modernizing, renovating, or repairing public school facilities). Building and maintaining sound school facilities is important not only to provide a safe and healthy learning environment, but to avoid costly repairs or replacements. Facility managers who routinely assess the condition of their facilities can identify problems at their earliest stages and evaluate buildings for future maintenance and repair needs. Facility assessments take a variety of forms, from staff walking through a facility and visually inspecting its condition and identifying repair and maintenance issues to a more comprehensive assessment in which individual building systems, such as electrical, heating, and air conditioning, are assessed by a professional inspector and deficiencies are identified. To compare the relative condition of facilities, assessors often use a “facility condition index” (FCI), which is computed as the cost of repairing or replacing parts of the facility that are identified as deficient divided by the cost of replacing the entire facility. FCIs are useful in comparing the relative condition of facilities only if they are calculated using a consistent methodology. A lower FCI indicates a facility in better condition. In some cases, assessments of school facilities also include estimates of the costs for projects that do not specifically address a facility deficiency. These may include projects for bringing facilities into compliance with current building codes that the school was not required to meet when built; providing additional space in schools that are overcrowded; or providing equipment to meet the school’s needs, such as a science lab facility. 
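As a minimal illustration of the FCI computation just described (the function name and dollar amounts here are hypothetical, not drawn from any state's data):

```python
def facility_condition_index(deficiency_repair_cost, full_replacement_cost):
    """Cost of repairing or replacing the parts of a facility identified as
    deficient, divided by the cost of replacing the entire facility.
    A lower FCI indicates a facility in better condition."""
    return deficiency_repair_cost / full_replacement_cost

# Hypothetical facility: $1.2 million in identified deficiencies,
# $10 million to replace outright.
fci = facility_condition_index(1_200_000, 10_000_000)
print(f"FCI = {fci:.0%}")  # prints "FCI = 12%"
```

Because the index is a ratio, comparisons between facilities are meaningful only when both the deficiency costs and the replacement costs are estimated with a consistent methodology, as the report notes.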
Limited independent information is available about the physical condition of public school facilities that receive Impact Aid funding for students living on Indian lands. However, three states—Montana, New Mexico, and Washington—have collected independent school facility assessments for some or all of their Indian Impact Aid school districts. Assessment data from these states indicate that the condition of Indian Impact Aid school facilities varies within states and ranges from good to poor. School district officials with whom we spoke attributed the condition of their school facilities to a number of factors, including age and remote location. We did not find independent nationwide data about the condition of school facilities in Indian Impact Aid school districts. Education and its research entity have collected some information regarding the physical condition of school facilities, but none of this information was based on independent assessments of school facilities and none covered all Indian Impact Aid school districts. According to federal officials with whom we spoke: Education collects information on the condition of Indian Impact Aid schools from surveys it receives from school districts that are awarded construction formula grants. School districts that received construction payments in the prior year are required to complete a brief survey as part of the Impact Aid application in which they rank the overall condition of their school facilities on a scale of 1 (excellent) to 6 (replace). From its 2008 application, Education collected surveys from 181 school districts, of which 31 percent indicated their facilities were in good to excellent condition; 54 percent indicated adequate to fair condition; and 15 percent indicated poor condition or in need of replacement. 
However, Education does not independently verify the responses or use this information in awarding grants, and the number of respondents represents only a small portion of the approximately 1,200 Impact Aid school districts that received Impact Aid basic support funding in 2008. In 2007, Education’s National Center for Education Statistics (NCES) surveyed a nationally representative sample of 1,205 public schools about their school’s condition. School principals completing the questionnaire were asked about the quality of their schools, including their satisfaction with the physical condition of their buildings. Eighty-three percent of the principals were satisfied or very satisfied with the physical condition of their permanent buildings. However, due to the small sample size, we were not able to obtain statistically meaningful responses for Indian Impact Aid schools. In addition, NCES did not independently verify the survey responses that were provided by school principals. Among states with large numbers of Indian Impact Aid school districts (at least 15 districts), only Montana, New Mexico, and Washington had independent information about the condition of school facilities in some or all Indian Impact Aid school districts. These 3 states represented approximately 27 percent of all students living on Indian lands. The other states with large numbers of Indian Impact Aid school districts (8 of 11) had no independent information about the physical condition of the school facilities in their school districts (see table 1). For example, Alaska requires districts to assess their own facilities and submit condition assessment reports to apply for state maintenance and construction grants. However, the data Alaska collects about school condition are not independently verified by the state. Arizona began independently assessing school facilities in 2004 as part of its public school assessment program to ensure that schools meet state minimum condition standards.
Arizona has collected information on variables related to facilities, including the number, type, and size of buildings and whether the school site, equipment, and building systems meet the state’s adequacy standards. While these data can be used to identify deficiencies, they do not provide an overall assessment of whether the school facilities are in good, fair, or poor condition. The facility assessment programs in Montana, New Mexico, and Washington are unique in terms of their purpose, frequency of assessment, number of districts assessed, and data collected. In 2005, Montana’s legislature authorized the appropriation of funds for a one-time condition and needs assessment for all K-12 public schools. This occurred in 2008, when Montana assessed school facilities in its 422 public school districts using a facility condition assessment approach that involved inspecting various school building components, identifying the observable deficiencies, and estimating the costs to repair the deficiencies and replace the entire facility. Montana inspected 11 building systems for each facility, including the HVAC system (heating, ventilation, and air conditioning); electrical system; plumbing system; foundations; exterior sidings; floor systems; roof systems; interior finishes (walls, floors, and windows); special fixtures (cabinets, chalkboards, and fixed seating); conveying systems (elevators); and fire and building code systems (fire detection and suppression, and building accessibility). Montana’s inspections resulted in an FCI value for each school district based on assessments of all of the facilities in the school district. Montana’s FCI used a scale of 0 to 100 percent, and the higher the percentage, the closer the cost of the repairs was to the cost of a new facility.
Montana considers school facilities with FCIs from 0 to 9 percent to be in good condition, FCIs from 10 to 19 percent to be in fair condition, and FCIs of 20 percent and greater to be in poor condition. Facilities with FCIs greater than 50 percent are considered to be experiencing such levels of fatigue that the merits of reinvestment in the existing structure should be carefully considered. New Mexico created a facility assessment program that required it to evaluate the capital needs of every school facility in the state, rank all 789 public schools in terms of needed capital improvements, and prioritize funding on an annual basis for those public school facilities most in need of repair. This program enables it to optimize the allocation of limited resources. In 2003, New Mexico assessed all K-12 public school facilities and developed the New Mexico condition index (NMCI), which measures both the physical condition and the adequacy of a school facility against New Mexico’s adequacy standards. Facility assessments include evaluations of eight building systems: site utilities; structural systems (foundations, exterior walls, doors, and roof); interior systems (walls, ceilings, and floors); mechanical and plumbing systems; electrical systems; building and fire code systems (accessibility and fire detection and suppression); equipment (gym equipment and technology); and special fixtures (cabinets and chalkboards). The NMCI incorporates weighting factors for specific deficiencies, such as conditions that present health or safety threats, inadequate space, and inadequate equipment. In addition, New Mexico’s assessment process includes a life-cycle analysis that takes into consideration whether a building system is within or beyond its recommended life. New Mexico updates the facility condition data when it completes new assessments of facilities, receives new data from school construction applications, or receives information from the life-cycle analysis.
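Montana's condition bands amount to a simple threshold lookup over the FCI percentage. A sketch (the function name is ours; treating fractional FCIs as falling below the next whole-number threshold is our assumption):

```python
def montana_condition_band(fci_percent):
    """Map a Montana FCI (0-100 percent scale) to the state's bands:
    under 10 percent good, 10-19 fair, 20 and above poor."""
    if fci_percent < 10:
        return "good"
    if fci_percent < 20:
        return "fair"
    # FCIs above 50 percent additionally suggest reconsidering
    # reinvestment in the existing structure.
    return "poor"
```

Note that the bands run in the opposite direction from Washington's point scores discussed later: a higher FCI means a worse facility.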
Each year, New Mexico uses the NMCI to rank the schools from the highest score (indicating those most in need of repair or replacement) to the lowest score and typically provides funding for the 100 schools most in need of capital improvement. Washington collects building condition evaluations from school districts that apply for a study and survey grant. This state program provides school districts with funds to complete a long-range planning document, which is a prerequisite for state school construction assistance and includes an independent evaluation of school facilities. Washington provided the evaluation information to us for the 118 school districts that have submitted building evaluations since 2003, including 9 evaluations from Indian Impact Aid school districts and 109 from other school districts, from a total of 295 school districts statewide. School districts may apply for a study and survey grant once every 6 years. As a part of the process to complete the building condition evaluation form, the building inspector scores the condition of various components of a building’s exterior system (foundation, wall, and roof); interior system (floor, wall, and ceiling); mechanical system (electrical, plumbing, and HVAC); and safety and building code system (fire alarm and detection, and emergency lighting). Each building component is awarded points based on its assessed condition. For example, if the inspector determines the exterior walls of the facility to be in good condition, a total of 8 points can be awarded compared with a total of 2 points that can be awarded if the exterior doors and windows are determined to be in good condition. The component scores are summed to create the buildings’ evaluation score, which can range from 0 to 100 points. 
The building evaluation scores can provide relative information about the condition of different facilities, but they differ from FCI calculations because they do not include an estimate of the repair and replacement costs. According to state officials, the building evaluation scores are used in the process for prioritizing school districts for funding. The scores are not used to categorize school districts in terms of the condition of their facilities. However, the evaluations of several school districts in Washington conducted by one consultant included a scoring table that associated different building scores to different levels of condition. Based on this table, a score of 90 to 100 indicates good condition, a score of 60 to 89 indicates fair condition, a score of 30 to 59 indicates poor condition, and a score of 0 to 29 indicates unsatisfactory condition. Montana, New Mexico, and Washington each measure facility condition differently, and, as a result, we are not able to make comparisons about school condition among the states. For example, Montana calculated FCIs on the basis of the condition of 11 building systems, while New Mexico calculated FCIs on the basis of 8 building systems. Washington’s school facility evaluations use a 0 to 100 point scale, rather than an FCI calculation. Since each state applied the same method for all schools within the state, we are able to compare districts within states. Montana’s assessment data showed that most of its Indian Impact Aid school districts’ facilities were in good condition, although a larger proportion of other school districts—that is, those that do not receive Impact Aid for students residing on Indian lands—had facilities in good condition. (See fig. 1.) Montana’s data indicated that most of the school facilities’ building systems were in good condition. 
For example, 75 to 100 percent of the Indian Impact Aid school districts had roof systems, HVAC systems, plumbing systems, building foundations, and floor systems that were in good condition. The data were similar for the other school districts. On the other hand, the assessment data indicated that about one-half of the Indian Impact Aid and other school districts had fire and building code systems and about one-quarter had electrical systems that were in poor condition. The biggest difference between the Indian Impact Aid and other school districts was the condition of their interior finishes, with respective rates of 50 percent and 78 percent that were in good condition, 30 percent and 13 percent that were in fair condition, and 20 percent and 9 percent that were in poor condition. New Mexico uses its facility assessment information and the NMCI to rank its schools relative to their capital needs and does not define specific NMCI levels that would correlate to schools being considered in good, fair, or poor condition. According to a New Mexico official, excluding the equipment and special fixtures systems and the weighting factors from New Mexico’s assessment data would result in a more traditional FCI. After making these adjustments, the analysis of New Mexico’s data indicated that all of the Indian Impact Aid school districts had facilities that were in either good or fair condition. The data were similar for New Mexico’s other school districts with 84 percent having facilities that were in good or fair condition. None of the Indian Impact Aid and less than a fifth of the other school districts had facilities that were in poor condition. (See fig. 2, which shows 9 Indian Impact Aid districts in good condition and 10 in fair condition, and 27 other districts in good condition and 32 in fair condition.) According to New Mexico’s data, most Indian Impact Aid and other school districts had building systems that were in good to fair condition.
The school districts’ structural systems were in the best shape overall—95 percent of the Indian Impact Aid and about 87 percent of the other school districts had structural systems that were in good condition. New Mexico’s data showed that at least one-half of the Indian Impact Aid school districts had electrical systems that were in good condition, while at least one-half of both types of school districts had building and fire code systems that were in good condition. Although about one-half of the Indian Impact Aid and other school districts had site utility systems that were in good condition, this was also the building category with the highest proportion of districts that were in the poor condition category. For the remaining two building systems, New Mexico’s data indicated that about one-quarter of the Indian Impact Aid and other school districts had mechanical and plumbing systems that were in good condition and one-third of the Indian Impact Aid and one-quarter of the other school districts had interior systems that were in good condition. Washington’s data were based on evaluations from 118 of 295 school districts, including 9 of 29 Indian Impact Aid school districts and 109 of 266 other school districts. As we have previously discussed, Washington does not categorize school districts in terms of their condition, but one consultant has associated the building scores with different levels of condition. For our analysis, we used this consultant’s scoring table to categorize the school districts’ facilities as being in good, fair, or poor condition. Based on this scoring table, the state’s data showed that 4 Indian Impact Aid school districts were in fair condition and 5 were in poor condition. The data indicated that none of the Indian Impact Aid districts were in good condition. The data showed that 2 percent (2) of the other 109 school districts were in good condition, 55 percent (60) were in fair condition, and 43 percent (47) were in poor condition.
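The consultant's scoring table used in the analysis above can likewise be expressed as thresholds on the 0-to-100 building evaluation score (a hypothetical helper reflecting that table; the function name is ours):

```python
def washington_condition(score):
    """Map a 0-100 building evaluation score to one consultant's
    condition levels: 90-100 good, 60-89 fair, 30-59 poor,
    0-29 unsatisfactory. Higher scores indicate better condition."""
    if score >= 90:
        return "good"
    if score >= 60:
        return "fair"
    if score >= 30:
        return "poor"
    return "unsatisfactory"
```

Unlike an FCI, these scores carry no cost information, so they support relative ranking rather than repair-cost comparisons.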
Washington’s data indicated that none of the 9 Indian Impact Aid school districts and about 14 percent of the other school districts had building systems in good condition. Washington’s data showed 5 to 7 of the 9 Indian Impact Aid school districts had exterior building systems, interior building systems, and safety and building code systems that were in fair condition and 6 districts had mechanical systems that were in poor condition. The data were less clear-cut for the 109 other school districts, although they showed that almost two-thirds (67) of these districts had mechanical systems that were in poor condition and almost three-fourths (81) had exterior systems that were in fair condition. While localities often rely on issuing bonds to raise funds for school renovations and new construction, the officials at most of the school districts we visited commented that their restricted tax base impacts their ability to issue bonds. Officials in one New Mexico school district said that they were able to secure a limited level of bonding on the basis of expected Impact Aid funds. Most officials said that they are unable to issue bonds because so few property owners pay taxes, which is a source of revenue to repay the bonds. Some officials said they accumulate funds over time for a reserve to pay for emergency repairs and larger maintenance and major capital improvement projects. These officials said that Impact Aid is critical to their ability to accumulate such funds. According to officials in one Arizona school district, Impact Aid funds made it possible for the district to accumulate several million dollars that it plans to spend in 2010 on building improvements (e.g., upgrading windows) and digging a water well. At one school district in Montana, officials said that they maintain an emergency fund because without such a reserve, a major problem with a facility could cause a school to be closed. 
Additionally, several school district officials in Arizona and New Mexico said that they often need to replace roofs, but generally have to partially repair or patch them until sufficient funds are accumulated for a replacement. District officials told us that older schools, like any older buildings, are often expensive to maintain because they are less efficient and other problems are more likely to surface once a repair is started. At both school districts we visited in Montana, officials said that the districts’ schools are quite old, with sections in one district dating back to 1919 and the other dating back to 1930. School district officials said some buildings are still heated by boilers originally installed in the 1940s. Officials from one of the Montana school districts told us that they replaced the boiler at their high school 2 years ago after accumulating the funds necessary for the project over several years. This year, officials expect to replace the elementary school boiler—originally installed in 1942 (see fig. 3). According to district officials, the older boilers are inefficient and make it difficult to maintain a comfortable building temperature. Several school district officials in Arizona, Montana, and Washington also said that their older buildings have single pane windows, which make it difficult to maintain an adequate classroom temperature compared with more efficient double pane windows. Officials also said that the older buildings generally do not meet and are not required to meet the current building codes, and attempts to retrofit buildings to make them more accessible are often difficult and expensive. A school’s remote location was also cited as a contributing factor to facility conditions. Several of the school districts we visited were located in remote areas, and one district spanned about 3,000 square miles. 
School district officials in New Mexico and Arizona said that because of their remote locations, quality services may be difficult to obtain and may cost more. School officials in these states said higher costs are often due to a lack of commercial builders in rural areas. For example, at one remote school district we visited in New Mexico, officials said the area lacks maintenance services for HVAC and quality roofing contractors. Officials said the HVAC system needs constant repairs, and repair services take longer and cost more when contractors must travel from urban to rural areas. According to officials from one New Mexico district, to minimize the number of trips and effectively respond to building repairs among schools that span 60 miles, maintenance personnel are required to check the online maintenance system at the school for any work orders that can be completed while maintenance personnel are on location. State officials in New Mexico are also trying to understand whether relative remoteness was a factor in building two different schools for about 100 students that cost $3.5 million in one remote area of the state and $8 million in another remote area. The state has appointed a task force to address concerns that some remote school districts are not receiving the same quality of services as others from electricians, carpenters, and other contractors. The research studies we reviewed on the relationship between the condition of school facilities and student outcomes often showed that better facilities were associated with better student outcomes; however, there is not necessarily a direct causal relationship, and the associations were often weak compared with their associations with other factors. Also, some researchers suggest that specific characteristics of facilities, such as lighting, may be directly associated with student outcomes. 
Other characteristics of facilities, such as the general condition of the buildings, may be indirectly associated with student outcomes through their effects on other factors. We identified and reviewed 24 studies that analyzed the relationship between facility conditions and student outcomes. A majority of these studies indicated that better school facilities were associated with better student outcomes—such as higher scores on achievement tests or higher student attendance rates. Most of the studies measured the extent to which better school facilities were associated with better outcomes after taking into account the impact of other factors that can affect student outcomes, such as poverty and other demographic characteristics. However, none of these studies proves that better facilities caused better student outcomes. About one-half of the studies we reviewed examined broad measures, such as the general condition of the school buildings based on evaluations by facilities specialists or by teachers, or the suitability of school buildings—the extent to which district officials rated the facilities as being suitable for the grades being served. Based on these studies, it is unclear to what extent better facility conditions contribute to better student outcomes, or whether the associations identified may exist because other factors, such as the level of community commitment to education, contribute to both better facilities and better student outcomes, and none proved a causal relationship. The other studies focused on specific aspects of facilities, such as heating, air conditioning, ventilation, or lighting. None of the studies we examined was able to conclusively determine how much school facility conditions contribute to student outcomes relative to other factors, such as the educational achievement of students’ parents or teachers’ qualifications. 
Of the studies that focused on broad measures, such as measures of physical conditions or the suitability of school facilities, about one-half (7 of 13) found that schools with better facilities generally had better student outcomes. These included cases in which researchers noted possible direct connections between better facilities and student outcomes and cases in which they noted indirect connections, with better facilities contributing to conditions that in turn contribute to better student outcomes. Some studies indicated associations between facilities and student outcomes with some but not all measures of student outcomes. One of the studies examining all elementary and secondary schools in the District of Columbia estimated that students attending schools in fair condition had average achievement test scores 5.45 points higher on a 0 to 100 point scale than those attending schools in poor condition. This was the case after taking into account other factors that may have an influence on student achievement, such as race and income. Similarly, a study in the Los Angeles Unified School District found that in schools with facilities that met health and safety compliance requirements, the schools’ average student California Academic Performance Index scores were likely to be higher. Compared with schools in the lowest compliance category, schools in the highest compliance category had an estimated average score that was 36 points higher on the composite index, with scores ranging from 200 to 1,000. This was the result after taking into account factors, such as the percentage of students eligible for free or reduced price school lunch and the percentage of students who were black or Hispanic. This study found that although the school facilities that were in better condition were associated with better student achievement, some of the other important factors, such as poverty, were more strongly associated with achievement. 
For example, holding all else constant, schools with the lowest percentage of students who were eligible for free or reduced price lunch were expected to have average achievement scores 113 points higher on the 200 to 1,000 point scale than schools in which all students were eligible for free or reduced price lunch—more than three times the estimated difference between school facilities in the worst and the best compliance categories. One study used a potentially more rigorous methodology by comparing achievement test scores at schools before and after renovation of 3 of one district’s 21 elementary schools. The study showed that math test scores, but not reading test scores, improved as the proportion of students in recently renovated schools increased. The researcher concluded that a larger sample would be needed to provide better evidence of a connection between school facilities and student achievement. Another study found no association between better school facilities in Wyoming and student achievement. The study found that both before and after taking into account the income status of students’ families, there was no statistically significant association between schools in better condition and schools with higher average achievement. Similarly, no statistically significant association was found between student achievement and the suitability of the school facilities. School district officials at all of the eight Indian Impact Aid school districts we visited said that in their experience, better school facilities are associated with better student outcomes, though they also often cited other factors that some believed had more influence, such as whether students’ families placed a high value on education. Several district officials noted that many of their students are from low-income families that may not place an emphasis on education. 
Although officials in several districts we visited said their students are affected by the condition of school facilities just as other students are affected, other officials remarked that their students, who often come from homes in poor condition, may be especially affected by a school’s good condition because it provides a more comfortable environment. Some studies indicate that better facilities can contribute to student outcomes indirectly—through their effects on other factors—and school officials with whom we spoke believed this was true in their districts. For example, a study of Virginia middle schools indicated that although better student achievement was associated with the quality of school facilities, better student achievement was more highly associated with a variable identified as “school climate,” which measures attitudes in the school community that support learning, such as students’ respect for others who get good grades and teachers’ commitment to helping students. The authors concluded that rather than having a direct effect on student achievement, better school facilities can indirectly influence student achievement by contributing to a good school climate for learning. School officials we interviewed noted that good facilities contribute to students’ pride in their school. One official noted that good school facilities send a message to students that the community values education, which can result in better student outcomes. Similarly, a study of New York City elementary schools found that better school building conditions were associated with better student attendance rates, and that these in turn were associated with better English and math achievement. Several school officials also noted the importance of good school facilities for attracting and retaining good teachers who in turn can improve student achievement. Research points to teacher quality as an important school-level factor that influences student learning. 
The association between good school facilities and teacher retention was the focus of one study that identified several factors associated with teachers’ plans to remain another year in their current school, including better school facility conditions. This study found an association between the school facility and teacher retention even after taking into account several other factors, including the teachers’ ages, their tenure at the school, and their satisfaction with pay and the community. Studies we reviewed that focused on the effect of specific characteristics of the school facility found that some factors, such as lighting, are directly associated with better outcomes. Rather than simply examining whether students have enough light to be able to see classroom materials, some studies have examined the extent to which classrooms provide daylight or light that simulates daylight. For example, a study of 24 elementary schools in Georgia found that third-grade students in classrooms with more daylight had higher average achievement test scores after taking into account the free or reduced price lunch variable and other aspects of the school facility design. Including daylight in the analysis explained an additional 2.5 percent of the variation in average test scores among the schools. Similarly, a study of 102 schools in California, Colorado, and Washington found that students in the classrooms with the most daylight increased their test scores overall about 21 percent more than students in rooms with the least amount of daylight. A follow-up study that took into account additional information, including teacher characteristics and grade levels, confirmed these findings. 
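The “additional variation explained” figure cited for the Georgia study refers to the increase in a regression’s R-squared when the daylight variable is added to a model that already includes the other controls. The following sketch illustrates how such an increment is computed; it uses synthetic data and hypothetical variable names, not any study’s actual data.

```python
import numpy as np

# Synthetic illustration: how much additional variation in test scores
# a "daylight" variable explains beyond an existing control (here, a
# free/reduced-price-lunch rate). Data and coefficients are invented.
rng = np.random.default_rng(0)
n = 200
lunch = rng.normal(size=n)        # control variable (hypothetical)
daylight = rng.normal(size=n)     # predictor of interest (hypothetical)
scores = 50 - 5 * lunch + 1.5 * daylight + rng.normal(scale=5, size=n)

def r_squared(predictors, y):
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # With an intercept, residuals have ~zero mean, so the variance
    # ratio equals 1 - SS_residual / SS_total.
    return 1 - resid.var() / y.var()

r2_base = r_squared([lunch], scores)           # controls only
r2_full = r_squared([lunch, daylight], scores) # controls + daylight
print(f"additional variation explained by daylight: {r2_full - r2_base:.3f}")
```

The difference `r2_full - r2_base` is the incremental share of variance attributable to the added variable, which is the quantity reported in such studies.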
Another study found that classrooms with full-spectrum fluorescent light bulbs, which simulate daylight, were associated with faster academic progress compared with classrooms using high-pressure sodium vapor bulbs, which do not simulate daylight as well. Average test scores in classrooms with full-spectrum bulbs indicated that students increased their level of academic achievement by about 2 grade levels over the 2-year study period, compared with 1.6 years for students in classrooms with the high-pressure sodium vapor bulbs. Few of the school administrators with whom we spoke cited lighting as a factor related to student outcomes, although we found that the extent to which students were exposed to natural light varied in the schools we visited. While many schools had classrooms with windows that let in light, the level of natural light varied considerably. One school had installed dividing walls to create smaller classrooms out of large spaces, and some of the resulting classrooms had no natural light. In at least one school we visited in Washington, renovations included upgrading lighting to provide full-spectrum light and reduce energy use. Studies examining the quality of air in classrooms found associations between better air quality and better health or lower absenteeism. A study of schools in Finland found that in an elementary school with moisture or mold problems, there was a higher occurrence of respiratory infections, repeated wheezing and prolonged coughing, and emergency room visits than in other schools. Another study of schools in Finland had similar results and showed that although background concentrations of fungi in wooden buildings were significantly higher than in concrete or brick buildings, moisture damage increased fungal concentrations significantly in the concrete or brick buildings, but not in wooden school buildings. 
Moisture damage increased the likelihood that students would have respiratory symptoms in schools constructed of concrete or bricks. A Swedish study found that two day-care centers that installed electrostatic air cleaning systems reduced the concentrations of fine particles in the air, and absenteeism fell by 55 percent at the larger center and by a smaller proportion at the smaller center. Absenteeism almost returned to the original level after the system at the larger center was turned off. Another study found that new ventilation systems in Swedish schools reduced the prevalence of asthmatic symptoms in classrooms compared with those without the new systems. Studies in Danish elementary school classrooms found that ventilation systems that drew in larger volumes of outdoor air were associated on average with an 8 percent increase in the speed at which students worked. Air quality was a concern in two of the districts we visited, such as at a middle school we visited in Washington where the main hallway had no ventilation or air circulation and the stale air had a noticeable odor. School administrators cited the poor air quality as a concern they felt was a high priority to address. Another school in the same district faced complaints about air quality, and administrators speculated that the air quality was adversely affected by old carpeting. One study considered the effects of temperature control in elementary schools in Denmark and found an association between comfortable temperatures and student performance. The study found that reducing classroom temperatures from 77 degrees Fahrenheit was associated with improved speed in math and language tests. The study indicated that a 1.8 degree Fahrenheit drop in temperature was associated with about a 4 percent increase in the speed at which students worked. The number of errors students made decreased when performing some tasks, but not others. 
School officials in several districts we visited cited difficulties in maintaining comfortable temperatures in classrooms and concurred that when students are too cold or too warm, it is difficult for them to concentrate on their studies. We provided a draft of the report to the Department of Education for review and comment. We received technical clarifications from Education’s Impact Aid Program within the Office of Elementary and Secondary Education, which we incorporated in the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Cornelia Ashby on (202) 512-7215 or ashbyc@gao.gov; or Terrell Dorn on (202) 512-6923 or dornt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine what information is available about the physical condition of school facilities in Indian Impact Aid school districts and what is known about how the condition of school facilities affects student outcomes, we interviewed officials from state and federal agencies and associations, and we reviewed relevant federal laws and regulations. This included interviews with officials from the Department of Education’s National Center for Education Statistics (NCES); state education agencies; school districts; and education associations, including the National Indian Impacted Schools Association, the National Association of Federally Impacted Schools, the National Council for Impacted Schools, and the National Indian Education Association, as well as state Indian education officials in Washington and Montana. 
We conducted a literature search to identify research studies and analyzed selected studies. We also visited school districts in four states—Arizona, Montana, New Mexico, and Washington. To determine what information is available about the physical condition of school facilities in Indian Impact Aid school districts, we contacted officials from Education’s Impact Aid Office, NCES, and Indian Impact Aid associations for independent national data on school condition. We decided to accept only assessment data that were prepared by an independent party with no apparent vested interest in the results of the assessment. We determined that Education collects surveys about school condition from school districts that received an Impact Aid construction formula grant, but we determined that the survey data were of limited use because they were not based on independent assessments and did not cover all Indian Impact Aid schools. We determined that although NCES published the results of its study of a nationally representative sample of school districts in which it asked school principals about the condition of their schools, we could not use these data because we are not able to obtain statistically meaningful responses for Indian Impact Aid schools due to sample size, and NCES did not independently verify the survey responses that were provided by school principals. We found that national associations like the National Indian Impacted Schools Association and the National Council for Impacted Schools do not document the condition of school facilities in Indian Impact Aid school districts. Because we could not identify a source for nationwide data, we sought state-level data. Education provided us with the list of states with school districts that received fiscal year 2008 Impact Aid funds for students living on Indian lands. 
From this list of 27 states, we identified 11 states with a large number of Indian Impact Aid districts (at least 15 districts) and contacted their state education officials to determine whether they had independent assessment data about the physical condition of public school facilities. We determined that four states—Arizona, Montana, New Mexico, and Washington—had assessment data for some or all of their public schools. We obtained and analyzed these data from the four states, each of which maintained its data in a different format. Montana and its contractor provided us with a copy of its complete school building and system-level analyses of repair and replacement costs, which we used to generate our school district-level analysis. New Mexico provided us with school district-level data of building system repair and replacement costs. Arizona collected only deficiency information at the school building level, which we used to create our district-level information for site selection. Washington maintained hard copies of the building-level evaluation reports, which we keypunched to create raw data for district-level files. On the basis of our analysis, we were able to describe the condition of schools in Indian Impact Aid districts in three of the four states. We determined that these data were sufficiently reliable for the analysis used in this report. We were not able to use Arizona’s data because, although they describe a variety of information, including the number, type, and size of buildings and whether the school site and building systems meet the state’s adequacy standards, the data do not indicate whether the school facilities are in good, fair, or poor condition. For the other three states, we combined the facilities data with Education’s Common Core of Data to describe the characteristics of the school districts, which we used for selecting school districts for site visits. 
Because each state’s assessment program is unique, the results do not allow for comparisons among states. For example, while both Montana and New Mexico create a facility condition index that is based on the ratio of renewal cost to replacement cost, New Mexico weights deficiencies in a manner consistent with its own state priorities (e.g., classroom space), whereas Montana does not rely on any explicit weighting scheme. In addition, each state bundled its building system groups differently, consistent with state priorities, with the respective indexes for each bundle incorporated into the calculation of the overall facility condition index. In contrast, the assessment program in Washington does not calculate a facility condition index. Only districts seeking funds for planning grants or construction participate in the Washington assessment program, unlike in Montana and New Mexico, where all school districts were assessed. Because of these differences, facility condition measures are not strictly comparable across states. While comparison among states would not be valid to evaluate the condition of schools in Indian Impact Aid districts, the condition of school facilities can safely be compared within each state. This comparison allows for an assessment of the quality of school condition in Indian Impact Aid districts relative to that of other districts in the same state. In Washington, only districts applying for a study and survey grant submit documentation of the condition of their school facilities. The districts that do participate in the study and survey grant program are required to provide matching funds, which in turn may indicate the ability to obtain school board or community approval to levy a bond. Of 29 Indian Impact Aid school districts, 9 have submitted building evaluation reports since 2003. Similarly, 109 of 266 other school districts statewide have completed and submitted an evaluation report for their district. 
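A facility condition index of the kind Montana and New Mexico use is, at its core, the ratio of estimated renewal (repair) cost to replacement cost. The sketch below illustrates the basic calculation; the condition cutoffs and dollar figures are hypothetical and do not reflect either state’s actual thresholds or weighting scheme.

```python
# Illustrative sketch of a facility condition index (FCI): the ratio of
# a building's estimated renewal (repair) cost to its replacement cost.
# Rating cutoffs below are hypothetical, not any state's actual values.

def facility_condition_index(renewal_cost, replacement_cost):
    """Return renewal cost as a fraction of replacement cost."""
    if replacement_cost <= 0:
        raise ValueError("replacement cost must be positive")
    return renewal_cost / replacement_cost

def condition_rating(fci):
    """Map an FCI to a good/fair/poor rating using hypothetical cutoffs."""
    if fci < 0.10:
        return "good"
    if fci < 0.30:
        return "fair"
    return "poor"

# Example: a building needing $1.2 million in repairs that would cost
# $10 million to replace.
fci = facility_condition_index(1_200_000, 10_000_000)
print(fci, condition_rating(fci))  # 0.12 fair
```

A lower index indicates a building in better condition; states that weight deficiencies (as New Mexico does) would adjust the renewal-cost inputs before taking the ratio, which is one reason the indexes are not comparable across states.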
Because less than one-half of the districts submitted evaluation data and the districts that did are self-selected, it is not known whether the assessed districts differ systematically from the nonassessed group. In addition, any systematic differences between the assessed and nonassessed groups could themselves differ between Indian Impact Aid districts and other districts in Washington. Differences in facility condition between Indian Impact Aid districts and other districts in Washington could be attributable to these underlying selection-related differences and not to any real differences between the two populations of school districts in Washington. We selected two school districts in each of the four states to visit to obtain district officials’ perspectives on factors that affect facility maintenance and to observe their facilities. We selected districts that provided variety on the basis of selection criteria, such as information about the relative condition of the school districts’ facilities, the proportion of the school district’s revenue composed of Impact Aid, the proportion of students who are Indians, and the number of students enrolled. (See table 2.) To determine what is known about how school facilities affect student outcomes, we conducted a search for research studies that addressed this topic. We identified studies dating back to 1980 and selected those that were either from peer-reviewed journal articles or were methodologically rigorous studies from (or sponsored by) other sources, such as government institutions. Two GAO staff members, one analyst from the audit team and one methodologist from the research group, systematically reviewed each of the studies selected, evaluating the design, measurement strategies, and methodological integrity and entering this information into a database. Of the more than 100 studies that we initially identified, 24 were selected for inclusion in our review. 
We excluded studies because, for example, they did not provide sufficient detail on the analytical approach or failed to control for other plausible explanations for differences. The selected studies were sufficiently rigorous and included tests of hypotheses; measures of association; and multivariate techniques, such as ordinary least squares regression (see table 3). In addition to these 24 studies, we reviewed 4 additional studies that focused on the relationship between facility condition and teacher outcomes rather than student outcomes. Each of these studies is subject to certain methodological limitations, which limit the extent to which the results can be generalized to school facilities in general or to school facilities in Indian Impact Aid districts. Many of the studies focus on comparisons of schools without information about the outcomes in schools before and after changes in school facilities. This makes it difficult to isolate the effects of improvements in school facilities. Some studies used small samples, had low response rates to surveys, or had missing data for many schools in the original sample. Several studies focused on schools in other countries, and the extent to which their results are applicable to schools in the United States is uncertain. In at least one case, the research was funded in part by a group—such as a building association—that may have had an interest in the results. We conducted our work from September 2008 to October 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. 
We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings in this product. Table 4 contains a list of the 25 states with public school districts that had received Indian Impact Aid for fiscal year 2009, as of August 2009. We use the term Indian Impact Aid to refer to school districts that qualify to receive Impact Aid basic support funding because they meet the minimum eligibility criteria, namely they have at least 400 students in average daily attendance who are federally connected, in this case who reside on Indian lands, or such students comprise at least 3 percent of the total number of students in the district. The table also lists for each district the total number of students living on Indian lands in average daily attendance for the previous school year, this number as a percentage of the total number of students in average daily attendance, and the amount of Impact Aid basic support payments each district received for students residing on Indian lands under section 8003(b) of the Elementary and Secondary Education Act of 1965, as amended. These amounts do not include basic support payments for other students with connections to other federal lands, children with disabilities, or construction grants under section 8007. Table 5 provides summary information about selected studies on broad measures of school facilities and student achievement. Table 6 provides summary information concerning other studies on school facilities and student outcomes—including those on specific school facility characteristics and various student outcomes, including achievement, attendance, and behavior and health. In addition to the contacts named above, Kathryn A. Larin and Maria D. Edelstein, Assistant Directors; Pamela R. Davidson; Gail F. Marnik; John W. Mingus, Jr.; Benjamin P. Pfeiffer; James M. Rebbe; Kimberly M. Siegal; Larry S. Thomas; Kathleen L. van Gelder; and Walter K. 
Vance made key contributions to this report.
State and local governments spend billions of dollars annually on the construction, renovation, and maintenance of public school facilities, yet concerns persist about the condition of some school facilities, particularly in school districts serving students residing on Indian lands. The Department of Education's (Education) Impact Aid Program provides funding to school districts that are adversely impacted by a lack of local revenue because of the presence of federal land, which is exempt from local property taxes. Impact Aid can be used for school expenses, such as facilities and teacher salaries. In response to concern about school facility conditions and concern that these conditions can affect student outcomes, GAO was asked to describe (1) the physical condition of schools in districts receiving Impact Aid because of students residing on Indian lands and (2) what is known about how school facilities affect student outcomes. GAO interviewed federal, state, and local officials; analyzed available independent school facility assessment data for three states; visited eight school districts that receive Impact Aid; and analyzed studies examining the relationship between school facilities and student outcomes. GAO is not making recommendations in this report. Education provided technical clarifications, which GAO incorporated as appropriate. Limited nationwide data are available about the physical condition of public school facilities in school districts that receive Impact Aid funding for students living on Indian lands, although data from three states indicate the conditions range from good to poor. Montana's assessment data showed that the majority (39 of 60) of Indian Impact Aid school districts had facilities in good condition. New Mexico's data showed that all 19 Indian Impact Aid school districts had facilities in either good or fair condition. 
Washington's data--based on assessments from 9 of 29 Indian Impact Aid school districts--indicated about half (4 of 9) of the Indian Impact Aid school districts had facilities in fair condition and about half (5 of 9) had facilities in poor condition. Facility assessments are not comparable across states. School district officials from 8 districts told GAO their facility conditions are affected by factors such as fiscal capacity, the age of buildings, and remote locations. The research studies GAO reviewed on the relationship between the condition of school facilities and student outcomes often indicated that better facilities were associated with better student outcomes, but there is not necessarily a direct causal relationship and the associations were often weak compared with those of other factors, such as the prevalence of poverty or other student characteristics. A majority of the studies GAO reviewed indicated that better school facilities were associated with better student outcomes--such as higher scores on achievement tests or higher student attendance rates. Most of the studies measured the extent to which better school facilities were associated with better outcomes, after taking into account the impact of other factors. None of the studies examined was able to conclusively determine how much school facility conditions contribute to student outcomes relative to other factors, such as student demographics, and none proved a causal relationship between school facilities and student outcomes.
Vocational education prepares students for an increasingly demanding labor market through an organized sequence of courses that are directly related to preparing students for employment in jobs that do not require a bachelor’s degree. For example, one school district offers high school students the opportunity to acquire the technical skills needed for careers in fields like automobile repair, medical assisting, or electronics. Vocational education programs are funded at the federal, state, and local levels. Funding provided under the 1984 Perkins Act is the federal government’s primary form of assistance for vocational education. Although federal financing accounts for only a small percentage of expenditures on vocational education, the Perkins Act provided about $1.4 billion in 1993-94, compared with approximately $1 billion in 1990-91. In addition to eliminating the set-aside requirement for special populations, the Perkins Act amendments included several provisions intended to improve the quality of vocational education. To help ensure that programs are of sufficient size and scope to be effective, the amendments set minimum funding thresholds at the secondary school level. School districts that would have received funding allocations of less than $15,000 under the original Perkins Act are now generally ineligible for funds unless they join other districts in a consortium in which the total funding meets the $15,000 minimum. The amendments also encourage several approaches to vocational education that smooth the transition from school to work. In 1993-94, Perkins funding included $104 million for tech-prep programs, which link secondary vocational education programs to postsecondary institutions in a coordinated program leading to an associate’s degree or certificate. 
For example, one school district operates a tech-prep program in allied health services that prepares students for a career as a medical assistant, emergency medical technician, or surgical technologist. The Perkins amendments also encourage schools and districts to integrate vocational and academic instruction, so that vocational students can develop a better appreciation of how academic learning is related to job requirements. In addition, the amendments require recipients (schools and districts) to evaluate the effectiveness of their vocational education programs and in particular to evaluate the progress of special population students. For example, placement data on high school graduates can indicate whether students have continued their education or obtained employment in their field. Despite widespread concern, removal of the set-aside requirement has apparently had no adverse impact on special population students. Specifically, neither student participation nor the availability of support services has declined following the implementation of the Perkins amendments. Furthermore, employment and educational outcomes for special population students—relative to vocational education students as a whole—were unchanged. We found no significant changes in the rate at which special population students participated in vocational education. In 1993-94, 42 percent of all students participated in vocational education, compared with 45 percent in 1990-91. This decline in overall participation was reflected in small, statistically insignificant declines in participation among students with disabilities (from 48 to 47 percent) and among students who were disadvantaged (from 53 to 50 percent). (See fig. 1.) Not only did students from special populations continue to participate in vocational education, but these students could be found in the full range of vocational education activities, including school-to-work transition activities. 
Since the implementation of the amendments, more schools have offered tech-prep programs; schools have also continued to offer work-study and apprenticeship opportunities. When comparing students from special populations with other students, we observed no significant differences in participation in these activities either before or after the amendments. For example, in 1993-94, 16.8 percent of disadvantaged students—and 16 percent of students who did not belong to special population groups—participated in tech-prep. However, because many schools were unable to provide this information, our estimates of participation in these activities are somewhat imprecise. (For more information about participation in vocational education programs, see app. II.) From 1990-91 to 1993-94, the percentage of schools that offered support services to students, including those from special population groups, generally increased. For example, the percentage of schools that offered transportation services to students with disabilities increased dramatically (from 59 to 74 percent). These students’ access to teacher aides, tutoring, and life skills training also rose significantly. For students not in special population groups, there was a significant increase in the percentage of schools offering tutoring (from 52 to 66 percent). In some support areas, special population students were more likely to be offered additional services than students who did not belong to these groups (see fig. 2). For example, in 1993-94 students from any of the three special population groups were significantly more likely to be offered teacher aides than students who did not come from any of these groups. However, for many of the remaining support services, the differences between the various groups of students were small and statistically insignificant. 
Across all student groups, in 1993-94 schools were most likely to offer counseling or guidance, tutoring, evaluation or assessment, life skills training, and special recruitment; over two-thirds of schools offered these services. Day care was offered less frequently (by less than one-sixth of schools). (For more detailed information on the percentage of schools offering support services, see app. II.) Historically, vocational education graduates who have disabilities or are economically disadvantaged have been less likely to attend college and more likely to go directly to work than other students. This pattern is evident in both our 1990-91 and 1993-94 surveys. In general, these differences neither widened nor narrowed over time. For example, the proportion of disadvantaged vocational students who expected to attend a 4-year college was 14 percent in 1990-91 and 13 percent in 1993-94—a statistically insignificant change. However, many schools were unable to provide placement information, and this low response rate limited our ability to observe changes in postgraduation status. (For more information about changes in outcomes for vocational students who are members of special populations, see app. II.) The Perkins amendments directed recipients to adopt a number of strategies to enhance the quality of vocational education—most specifically, tech-prep programs, integrated learning approaches, and the development of standards by which schools and districts can better evaluate their vocational programs. The sponsors of the Perkins amendments believed that these approaches would improve the quality of vocational education by easing the transition from school to work and by ensuring that students apply cognitive skills in a vocational education environment. For similar reasons, vocational education experts have advised schools and districts to emphasize school-to-work transition activities. 
We observed many schools and districts moving aggressively to implement several of these approaches. However, other recommendations (such as using academic teachers in vocational classes) have been slower to gain acceptance. Many of the attributes associated with quality programs still affect only a small percentage of vocational education students. Similarly, although districts have increased their use of quality indicators for self-assessment, many districts have not yet developed standards to guide these assessments. Schools have moved aggressively to increase several of the approaches to vocational education associated with quality—such as integrated learning and tech-prep programs. For example, in 1993-94, 35 percent of all schools reported that to a “great” or “very great” extent they were participating in teacher training activities designed to integrate academics into vocational education, compared with 20 percent or less in 1990-91 (see fig. 3). Even more dramatically, the percentage of schools offering tech-prep programs increased significantly in just 2 years: in 1990-91 only 27 percent of schools offered tech-prep, but by 1993-94 that figure had jumped to 45 percent (see fig. 4). For example, when we visited one district in 1990, officials were planning their tech-prep program. In 1991-92, they formed a tech-prep consortium, including 10 school districts. When we visited again in 1993-94, two more districts had joined the consortium and the first tech-prep program was under way. The consortium hopes to have 200 tech-prep students entering affiliated postsecondary institutions by September 1996. Acceptance of the integrated learning and tech-prep concepts has grown substantially. However, many more students will need to be exposed to these approaches before they become a standard part of vocational education. Less than half of the schools we surveyed employed several practices, such as team teaching, that bring integrated learning into the classroom.
In one school we visited, informal cooperation among teachers facilitated integration—for example, the teacher of a course in computer-aided design invited the physics teacher into his classroom to explain some of the physics elements in computer-aided design. However, another district we visited was unable to implement the integrated learning concept to the extent that its administrators would have liked. These officials told us that teacher credentialing requirements at the state level prevented vocational teachers from teaching academic subjects, and contracting arrangements limited teachers’ incentives to participate in summer training. Similarly, despite sizable increases in the number of schools and students participating in tech-prep programs, only 16 percent of vocational students in 1993-94 were participating in tech-prep. In addition, other methods for improving the school-to-work transition—such as work-study and apprenticeships—have not grown significantly since the Perkins amendments were implemented (see fig. 4). These programs also reach only a small number of students; only 16 percent of vocational students participated in work-study programs in 1993-94, although 74 percent of schools reported that they offered a work-study program. In addition to integrated learning and school-to-work activities, experts in vocational education have urged schools to develop certificates of competency and to require students to meet minimum standards or competencies to complete the program. These initiatives have been slow to develop since the Perkins amendments; both the percentage of schools that reported issuing certificates and the number of programs that required competencies remained roughly constant between 1990-91 and 1993-94. School districts reported an increase in the use of various measures in their self-assessment process.
For example, we observed substantial increases in the proportion of school districts that reported using graduation rates (from 72 to 83 percent) and placement rates (from 77 to 86 percent) as part of their self-assessments. The schools we visited, however, reported that it was difficult and time-consuming to gather this type of information. For example, one school district attempted to contact recent graduates by mail but received only a 25-percent response rate. In addition, despite this increased use of information for self-assessment, many schools have yet to develop standards to guide these assessments. For example, 71 percent of schools used measures of students’ academic gains as an input into their assessment process. However, only 69 percent of the schools that used this measure had developed standards that would allow them to determine if students’ academic progress was satisfactory. (For more information about school progress in quality, assessment, and standards development, see app. II.) The Department of Education commented on a draft of this report. The Department believed that the draft did not make clear that the Perkins amendments contained a new requirement for local recipients to give priority in the use of title II funds to the special populations. However, the law requires only that recipients give priority to sites or programs that serve higher concentrations of special population students; there is no legislative requirement that special population students as a group be given priority over other students. We revised the report to more strongly emphasize the priority that the amendments did require. The Department also believed that it would help to see a comparison of the extent to which the special populations are participating in educational improvements and services compared with the general student population. These comparisons are in table II.4 for programs and in table II.5 for support services.
Department officials also made technical comments, which we discussed with them, and we made clarifications to the report as appropriate. The Department’s comments appear in full in appendix V. We did our work between November 1993 and May 1995 in accordance with generally accepted government auditing standards. Please call me at (202) 512-7014 if you or your staffs have any questions. GAO contacts and staff acknowledgments for this report are listed in appendix VI. The Congress mandated that we conduct a 3-year study, using representative samples, to determine the effects of the amendments to the Perkins Act on access to and participation in vocational education for students who are disadvantaged, have disabilities, or have limited proficiency in English. The act specified that Perkins funds were to be used to improve vocational education programs and that the state was to provide assurance that members of special populations would have continued access to these programs. Consequently, we compared the status of special population students and vocational education programs before the amendments with their status after the amendments. Specifically, we measured the extent to which changes have occurred (1) for students, in participation in vocational education (including participation in innovative programs), the availability of special services, and college attendance and employment following graduation; and (2) for vocational education programs, in schools’ and districts’ use of formal coordination of high school and college courses, integration of academic and vocational learning, and development of competency standards for students. To address these objectives, we used panel data from two surveys administered to a nationally representative, stratified, randomly selected set of schools and their associated districts. The eight strata represent the major groups of secondary schools.
After we adjusted the sample to remove inappropriate schools (for example, schools with no grades higher than 9), our sample included 1,938 schools in the first (or baseline) survey, and 1,844 schools in the second (or follow-up) survey. One thousand two hundred thirty-three schools responded to both surveys (for a 67-percent overall response rate). The item response rate varied with each item. The data from the two surveys were pooled—that is, we created a file consisting of those schools that had answered both questionnaires. For our analysis, we made direct comparisons of the reported status (such as the percentage of students who were in vocational education or the number of tech-prep programs) using data only when the school had answered the specific item in both surveys. The findings were then averaged across all schools that had responded to that item. The advantage of this approach is that small changes in the variables of interest are more easily identified than if separate studies were made using two or more independent samples. In addition, by comparing the data for just those schools that responded, we are able to report the average responses without concern that the averages are contaminated by changes in the composition of the respondents. The major disadvantage of the panel approach is that when nonresponse occurs, the data are no longer representative of national averages. The requirement that a school must have answered both surveys gives us a smaller response rate than had we used the mean values from both surveys independently. What we are reporting on are the estimated population means for those schools that would have answered both surveys, and the specific item in each survey, had they been given the chance. As a result, we cannot say that the responses represent all schools in the population from which the samples were drawn. 
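The panel comparison described above can be sketched in a few lines of Python. The school identifiers and response values below are hypothetical, used only to illustrate the key step: for each item, restricting the comparison to schools that answered that item in both surveys, then averaging across those schools.

```python
# Hypothetical sketch of the panel (pooled) comparison described above:
# for each survey item, keep only the schools that answered it in BOTH
# the baseline and follow-up surveys, then average across those schools.

def panel_change(baseline, followup):
    """baseline/followup: dicts mapping school id -> response (None = no answer).

    Returns (n, mean_baseline, mean_followup) computed over the schools
    that answered the item in both surveys."""
    paired = [
        (baseline[s], followup[s])
        for s in baseline
        if s in followup and baseline[s] is not None and followup[s] is not None
    ]
    n = len(paired)
    mean_base = sum(b for b, _ in paired) / n
    mean_follow = sum(f for _, f in paired) / n
    return n, mean_base, mean_follow

# Illustrative data: percentage of students in vocational education reported
# by each school; school "C" skipped the item in the follow-up survey and is
# therefore excluded from the comparison.
base = {"A": 40.0, "B": 50.0, "C": 45.0}
follow = {"A": 38.0, "B": 47.0, "C": None}
print(panel_change(base, follow))
```

Because only both-survey respondents enter each comparison, small changes can be detected without the averages being contaminated by changes in the composition of respondents, which is the advantage of the panel approach the text describes.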
Each observation from the school surveys was weighted (1) to adjust for the probability of being selected in the strata from which the sample was drawn and (2) to account for the pooled response rate from both surveys. Item response varied according to item, but the data were not weighted for item response. Because we used data only when the school responded to an item in both the baseline and follow-up surveys, the number responding may vary for each separate comparison. District data were not weighted, as it was not possible to adequately account for the probability of being selected from a pooled sample. The universe from which the samples were drawn, the sample sizes, and the number responding to the secondary school surveys are reported in table I.1. As part of our analysis of the survey data, we compared schools’ responses for different types of students and over time (see fig. I.1): school year 1990-91 vs. school year 1993-94; special population students vs. all students in the school; special population students vs. nonspecial population students; and vocational education program students vs. all students in the school. School year comparisons. We compared data from each school for 1990-91 with the same data item in 1993-94. These values were then averaged across schools that responded to the item. School year comparisons were made throughout and directly address whether or not changes have occurred over time. Special population and all students. For some analyses, we compared the mean values for the special population students with the mean values for all students, including the special populations. This comparison permits determination of whether mean values for the special populations differ from those for all students.
For example, we compared the percentage of vocational education students in the overall student body with the percentage of vocational education students from among the special populations to get information on the overall participation rate in vocational education. Special population and nonspecial population students. For some analyses, we compared the special populations with students who were not part of the special populations. This comparison permits assessment of whether special population students are participating in services and programs in proportion to their enrollment in vocational education and at levels comparable to the nonspecial population students. Vocational students and all students. For some analyses, it is useful to know how vocational students compare with all students in the school. For example, we used this comparison to determine the general direction of average school attendance. We found that although the average number of vocational students was rising, the average number of all students was rising faster. This puts the increase in vocational students in proper perspective. To supplement the information obtained from our follow-up survey, during 1993-94 we visited four school districts in Oakland, Michigan; San Francisco, California; Delaware County, Pennsylvania; and New Castle County, Delaware. During these visits, we interviewed school and district officials to obtain information on vocational education programs, services to special populations, and assessment and improvement efforts. This appendix contains supplementary tables and more detailed information about changes in student participation, the availability of support services, student placement outcomes, and vocational education programs between 1990-91 and 1993-94. The data presented in the following sections compare changes in student and program characteristics only for those schools that responded to both surveys (that is, for 1990-91 and 1993-94). 
Thus, the numbers and percentages cited differ somewhat from those in our 1993 interim report, which reported on all schools that responded to our first survey. For the schools we surveyed, the average number of students per high school increased by about 6 percent between 1990-91 and 1993-94 (from 603 to 640 students per school). For the average school, the percentage increases were greatest for students from special population groups; however, the number of these students was often small. The proportion of students who were not part of special population groups remained constant at about 65 percent, while some of the special population groups grew. This may be accounted for, in part, by more students being defined as belonging to special populations. In addition, our definition permitted students to be classified in more than one special population category. (See table II.1.) As shown in table II.1, students not in special population groups averaged 65.4 percent of the student body in 1990-91 and 65.8 percent in 1993-94; students with disabilities, 9.0 and 10.1 percent; disadvantaged students, 30.3 and 31.6 percent; and students with limited English proficiency, 2.5 and 3.1 percent. The sum of the percentages in each school year exceeds 100 because students may be included in more than one special population category. Similarly, the sum of the number of students in each population group will exceed the total number of students. The percentages of the student body represent the average percentages reported by the schools responding to both surveys; they are not, for example, the average number of disabled students divided by the average number of total students. Vocational-technical enrollment also increased, but more slowly than overall enrollment. On average, there were 330 vocational students per school in 1990-91, and this number did not increase significantly. Again, the increase for students in special population groups was larger than for other students, but for many schools there were few students in some of these categories. (See table II.2.)
As shown in table II.2, students not in special population groups averaged 59.5 percent of vocational-technical students in 1990-91 and 58.1 percent in 1993-94; students with disabilities, 10.0 and 11.8 percent; disadvantaged students, 31.9 and 35.6 percent; and students with limited English proficiency, 2.0 and 2.7 percent. The sum of the percentages in each school year exceeds 100 because students may be included in more than one special population category. Similarly, the sum of the number of students in each population group exceeds the total number of students. The percentages of the student body represent the average percentages reported by the schools responding to both surveys; they are not, for example, the average number of disabled students divided by the average number of total students. Because average per-school vocational-technical enrollment grew only 1.8 percent over this period (compared with 6.1 percent growth in the overall student population), the percentage of students participating in vocational-technical education declined relative to overall enrollments. Across all groups, except for those with limited English proficiency, a smaller percentage of students participated in vocational education. Although the rate of decline for students in special population groups was less than for other students, the changes were not statistically significant for any group. (See table II.3.) As shown in figure 4 (see p. 11), the percentage of schools reporting that they have tech-prep programs increased dramatically (from 27 to 45 percent) between 1990-91 and 1993-94, while the percentage of schools reporting the use of work-study and apprenticeship programs remained about the same (at roughly 75 and 7 percent, respectively). Participation in such programs by students from special population groups over the 3-year period generally mirrored changes (or the lack of change) that occurred at the school level. There were no statistically significant differences in participation among student groups. However, because many schools were unable to report this information, our participation estimates are somewhat less precise. (See table II.4.)
In both 1990-91 and 1993-94, schools offered a wide variety of services to their vocational-technical education students. Generally, the percentage of schools offering each service remained about the same or increased over the 3-year period for both special population students and students who did not belong to these groups. Most schools offered general services, which were available to special population students about as often as to other students. For example, about 90 percent of schools offered counseling/guidance to all student groups in 1990-91, increasing to about 95 percent in 1993-94. In addition, schools often provided more specialized support services at higher rates to special population students than to other students. For example, about 54 percent of schools in 1993-94 reported offering special or modified equipment to students with disabilities; only about 18 percent of schools offered this service to students who were not members of special populations. (See table II.5.) We asked the schools we surveyed to estimate the postgraduation status of the most recent senior class for which they had employment or education information. Because many of the schools we surveyed did not gather placement information at this level of detail, our estimates are less precise. We observed no significant differences in employment or education outcomes for special population students before and after the Perkins amendments. (See table II.6, which compares outcomes reported in the baseline survey with those reported in the follow-up survey; percentages may not add up to 100 because of rounding.) We found signs that many of the schools we surveyed were making considerable efforts to improve the quality of their vocational education programs, although many of these efforts have yet to reach the majority of students.
More schools are focusing on integrating academic and vocational instruction, creating or strengthening linkages to the business community, and gathering and using more information for self-assessment. As shown in figure 3 (see p. 10), we observed an increase in the percentage of schools that reported participating to a “great” or “very great” extent in teacher training activities to integrate vocational and academic learning. Although schools appear to be moving toward integrating academic and vocational learning, many of the schools we surveyed had not yet applied integrated learning to one or more vocational programs. For example, less than 30 percent of the schools reported using team teaching (where academic and vocational education teachers work together) in mathematics, English, or science. (See table II.7.) We also found that the schools we surveyed were trying to improve their ties to the local community. Compared with 1990-91, in 1993-94 schools reported greater contributions from the local community in a number of areas. For example, 21 percent of schools reported that more industry people teach in the school, 19 percent reported that more teachers work in local industries for professional development, and 31 percent reported that more outside organizations provide mentor programs or job shadowing. (See table II.8.)
Table II.8 asked schools to rate, as “much more” or “somewhat more,” “about as much,” “somewhat less” or “much less,” or “don’t know,” several types of contribution by local businesses and organizations: teachers working in local industry for professional development; industry people teaching in the school; helping to develop or modify curriculum; consulting with the school about skills needed by students in the workplace; donating money, material, supplies, or equipment to the vocational education program; and making facilities available to students (other than through co-ops). By amending the Perkins Act to require states and school districts to continuously assess the performance of vocational-technical education programs, the Congress sent a clear message that it places importance on accountability and outcomes. However, the ability to evaluate program improvement is heavily dependent on the availability of data. Of the districts we surveyed, more are taking steps to use various indicators to assess their vocational-technical education programs in 1993-94 than in 1990-91. For example, the percentage of districts that used occupational competency standards in their program assessments increased from 68 percent to 85 percent. (See table II.9.) The indicators covered in table II.9 include the number of students in vocational education programs; the number of “high technology” programs and the number of students participating in them; use of occupational competency standards and certificates of competency; placement rates (additional education, training, employment, or military service); linkage with postsecondary vocational education programs and with business or labor; integration of academics with the vocational curriculum; a coherent sequence of courses leading to an occupational skill; and the location of the program (e.g., local high school, area vocational school, community college). Many of the districts we surveyed believed that the Perkins amendments have had a positive impact on their ability to improve their vocational education programs and services.
Others believed that the Perkins amendments made little difference one way or another; but few reported the amendments adversely affected their ability to improve programs and services. Table II.10 provides specific information on districts’ views. Table II.10 (Views of the Districts We Surveyed on the Perkins Amendments’ Effect on Their Ability to Produce Quality Programs) asked districts whether each of the following had “greatly increased” or “increased,” “neither increased nor decreased,” or “decreased” or “greatly decreased,” or whether they did not know: the district’s ability to purchase state-of-the-art equipment; its ability to spend Perkins funds where needed most; its ability to plan vocational programs and use Perkins funds; the equity with which Perkins funding is allocated among districts; the amount of record keeping required by the state to meet Perkins requirements; the extent of services the district offered to vocational-technical students in special populations and in general; the access special population students have to vocational-technical programs; tutoring and remediation for vocational-technical students in general; the district’s program improvement efforts; technical and academic education standards that students must achieve; use of applied curricula in vocational-technical courses; and use of integration of academic and vocational-technical courses. Elsie Picyk, Senior Computer Science Analyst, was responsible for computer programming and data analysis. Thomas Hubbs, Senior Evaluator, provided direction to the project at its earlier stages. Thomas Hungerford, Senior Economist, commented on drafts and assisted with the data analysis. Laurel Rabin, Communications Analyst, provided editing and writing assistance. The first copy of each GAO report and testimony is free.
Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO compared student participation and program features in high school vocational education programs between the 1991 and 1994 school years, focusing on: (1) the availability of support services; (2) the extent to which program students attended college or found employment following graduation; and (3) the extent to which schools have enhanced the quality of vocational education programs. GAO found that: (1) between 40 and 50 percent of students in special population groups participated in vocational education programs in 1990 through 1994, despite the removal of the set-aside requirement; (2) schools continued to offer all students access to support services at the same or greater levels in 1994 than in 1991; (3) there were no significant changes in the proportion of program students who attended college, went directly to work, or were unemployed; (4) the percentage of schools offering tech-prep programs increased from 27 percent in 1991 to 45 percent in 1994, and the percentage of students participating in the programs rose from 9 to 16 percent over the same period; (5) traditional school-to-work transition programs showed no major change in participation; (6) teacher training in integrating vocational and academic instruction also increased, but most of the schools surveyed did not use integrated learning concepts in the classroom; (7) some school districts reported increased use of quality indicators in their self-assessment processes; however, the number of vocational education programs that require graduates to meet competency standards has remained stable; and (8) many of the program features associated with high-quality vocational education still affect a relatively small percentage of students, and many more students will need to be exposed to these features before they become a standard part of vocational education.
For over 50 years, antibiotics have been widely prescribed to treat bacterial infections in humans. Many antibiotics commonly used in humans have also been used in animals for therapeutic and other purposes, including growth promotion. Resistance to penicillin, which was the first broadly used antibiotic, started to emerge soon after its widespread introduction. Since that time, resistance to other antibiotics has emerged, and antibiotic resistance has become an increasing public health problem worldwide. Antibiotics kill most, if not all, of the susceptible bacteria that are causing an infection, but leave behind—or select, in biologic terms—the bacteria that have developed resistance, which can then multiply and thrive. Infection-causing bacteria that were formerly susceptible to an antibiotic can develop resistance through changes in their genetic material, or deoxyribonucleic acid (DNA). These changes can include the transfer of DNA from resistant bacteria, as well as spontaneous changes, or mutations, in a bacterium’s own DNA. The DNA coding for antibiotic resistance is located on the chromosome or plasmid of a bacterium. Plasmid-based resistance is transferred more readily than chromosomal-based resistance. Once acquired, the genetically determined antibiotic resistance is passed on to future generations and sometimes to other bacterial species. The dose of antibiotic and length of time bacteria are exposed to the antibiotic are major factors affecting whether the resistant bacteria population will dominate. Low doses of antibiotics administered over long periods of time to large groups of animals, such as doses used for growth promotion in animals, favor the emergence of resistant bacteria. To investigate the impact on human health of antibiotic use in animals, researchers have used both epidemiologic studies alone and epidemiologic studies combined with molecular subtyping of bacterial isolates. 
Epidemiologic studies examine patterns of health or disease in a population and the factors that influence these patterns. These studies help to identify the cause of a disease and the factors that influence a person’s risk of infection. Many studies investigating antibiotic-resistant bacteria and their impact on human health combine epidemiologic studies with molecular subtyping—also called “DNA fingerprinting”—a technique that translates bacteria’s genetic material into a “bar code” that can be used to identify specific pathogens and link them with disease outbreaks. For example, following an outbreak of a diarrheal disease among people in a community, an epidemiologic study would determine all the common exposures among the people with the disease, and molecular subtyping of bacterial isolates could determine what pathogens were responsible for the disease. While the use of antibiotics in animals poses potential human health risk, it is also an integral part of intensive animal production in which large numbers of poultry, swine, and cattle are raised in confinement facilities. (See fig. 1.) Antibiotics are used in animals to treat disease; to control the spread of a disease in a group of animals when disease is present in some of the animals; to prevent diseases that are known to occur during high-risk periods, such as after transport, when the animals are stressed; and to promote growth—that is, to allow animals to grow at a faster rate while requiring less feed per pound of weight gain. This use of antibiotics is commonly referred to as growth promotion and generally entails using low doses of antibiotics over long periods of time in large groups of animals. Many animal producers believe the use of antibiotics for growth promotion also prevents disease. Antibiotics are generally administered by injection to individual animals and in feed or water to groups of animals. 
Figure 2 shows how antibiotic-resistant bacteria that develop in animals can possibly be transferred to humans, who may then develop a foodborne illness, such as a salmonella infection, that is resistant to antibiotic treatment. Once the resistant bacteria develop in animals, they may be passed to humans through the consumption or handling of contaminated meat. An animal or human may carry antibiotic-resistant bacteria but show no signs or symptoms of an illness. Resistant bacteria may also be spread to fruits, vegetables, and fish products through soil, well water, and water runoff contaminated by waste material from animals harboring these bacteria, although such routes are beyond the focus of this report. Researchers in human medicine have debated the public health impact of antibiotic use in agriculture for many years. In the United States the debate intensified before FDA approved the first fluoroquinolone antibiotic for use in animals in 1995. At that time, drugs from the fluoroquinolone class had already been used for humans for nearly a decade. Debate focused on whether development of resistance to the drug approved for use in animals could, through cross-resistance, compromise the effectiveness of other drugs in the fluoroquinolone class that were valuable in treating human diseases. Efforts have been made to address the spread of antibiotic resistance by providing education to change behaviors of physicians and the public, but researchers differ on whether changes in agricultural practices are also needed. CDC has undertaken educational efforts aimed at physicians and the public. CDC is encouraging physicians to reduce prescribing antibiotics for infections commonly caused by viruses, such as ear and sinus infections. Patients are being taught that antibiotics are only for bacterial infections, not viral infections. 
Many researchers contend that efforts to reduce the use of antibiotics in animals are also needed to preserve the effectiveness of antibiotics necessary for treatment of bacterial diseases in humans and animals and to decrease the pool of resistant bacteria in the environment. However, agricultural industry officials argue that antibiotic use in animals is essential to maintaining the health of animals and therefore the safety of food. Professional organizations and associations differ on the use of antibiotics in animals. Many professional organizations that have studied the human health implications of antibiotic use in animals—including WHO and, in the United States, the Institute of Medicine of the National Academy of Sciences and the Alliance for the Prudent Use of Antibiotics—have recommended either limiting or discontinuing the use of antibiotic growth promoters. Many of the professional associations for human medicine—such as the American Medical Association, the American College of Preventive Medicine, the American Public Health Association, and the Council of State and Territorial Epidemiologists—have position statements for limiting antibiotic use in animals for nontherapeutic purposes, such as growth promotion, for antibiotics that are important for both human and animal health. Many of the professional associations for veterinary medicine—such as the American Veterinary Medical Association and the American Association of Swine Practitioners—agree on the goal of reducing the use of antibiotics in animals but differ on the means to achieve this goal. These associations are calling for veterinarians to work with owners of animals to implement judicious use guidelines. While limiting the use of antibiotics in animals for growth promotion may reduce the human health risk associated with antibiotic-resistant bacteria, such restrictions also may increase the cost of producing animals and the prices consumers pay for animal products.
For example, a 1999 economic study estimated that a hypothetical ban on all antibiotic use in feed in swine production would increase U.S. consumers’ costs by more than $700 million per year. However, the increase in consumer costs would be much smaller if—as the Institute of Medicine proposed in 2003—producers were allowed to continue to use some antibiotics for growth promotion and only antibiotics that are used in humans were banned for growth promotion. Moreover, in other animal species, such as beef cattle or chickens, the economic impacts of growth promotion restrictions would likely be smaller than in swine because antibiotic use for growth promotion is less prevalent in the production of these other species. Appendix II summarizes studies of the economic effects of banning antibiotic use for growth promotion and other proposed restrictions on antibiotic uses in animals. The three federal agencies responsible for protecting Americans from health risk associated with drug use in animals are FDA, CDC, and USDA. These agencies have a variety of responsibilities related to surveillance, research, and regulation. All three agencies collaborate on surveillance activities, such as the National Antimicrobial Resistance Monitoring System—Enteric Bacteria (NARMS), which was initiated in 1996 because of public health concerns associated with the use of antibiotics in animals. In addition, FDA’s primary responsibilities as a regulatory body focus on human health and animal drug safety. CDC primarily conducts research and education that focus on human health. USDA oversees the retail meat trade, including related farm and slaughter operations. USDA activities may include studies of healthy farm animals, evaluations of diagnostic data involving sick animals, and biological sampling from slaughter and meat processing plants. USDA also conducts research and education related to antibiotic resistance. 
In addition, FDA approves for sale and regulates the manufacture and distribution of drugs used in veterinary medicine, including drugs given to animals from which human foods are derived. Prior to approving a new animal drug application, FDA must determine that the drug is safe and effective for its intended use in the animal. It must also determine that the new drug intended for animals is safe with regard to human health. FDA considers a new animal antibiotic to be safe if it concludes that there is reasonable certainty of no harm to human health from the proposed use of the drug in animals. FDA may also take action to withdraw an animal drug from the market when the drug is no longer shown to be safe. These three agencies also participate in the federal Interagency Task Force on Antimicrobial Resistance. Task force activities focus on antibiotic resistance from use of antibiotics in animals, as well as the human use of antibiotics. In January 2001, the task force developed an action plan based on advice from consultants from state and local health agencies, universities, professional societies, pharmaceutical companies, health care delivery organizations, agricultural producers, consumer groups, and other members of the public. The action plan includes 84 action items, 13 of which have been designated as top-priority items and cover issues of surveillance, prevention and control, research, and product development. A federal agency (or agencies) is designated as the lead for each action item. The United States is one of the world’s leading exporters of meat. In 2002, U.S. meat exports accounted for about $7 billion. The World Trade Organization (WTO), of which the United States is a member, provides the institutional framework for conducting international trade, including trade in meat products. WTO member countries agree to a series of rights and obligations that are designed to facilitate global trade. 
When a country regulates imports, including imported meat, WTO guidelines stipulate that member countries have the right to determine their own “appropriate levels of protection” in their regulations to protect, among other things, human and animal health. Member countries must have a scientific basis to have levels of protection that are higher than international guidelines. To encourage member countries to apply science-based measures in their regulations, WTO relies on the international standards, guidelines, and recommendations that its member countries develop within international organizations, such as the Codex Alimentarius Commission for food safety and the OIE for animal health and the safety of animal products for human consumption. While ensuring that food products are safe and of high quality usually promotes trade, one country’s food safety regulations could be interpreted by another country as a barrier to trade. It is difficult, however, to distinguish a legitimate regulation that protects consumers but incidentally restricts trade from a regulation that is intended to restrict trade and protect local producers, unless that regulation is scientifically documented. Research has shown that antibiotic-resistant bacteria have been transferred from animals to humans, but the extent of potential harm to human health is uncertain. Evidence from epidemiologic studies suggests associations between patterns of antibiotic resistance in humans and changes in antibiotic use in animals. Further, evidence from epidemiologic studies that include molecular subtyping to identify specific pathogens has established that antibiotic-resistant campylobacter and salmonella bacteria are transferred from animals to humans. Many of the studies we reviewed found that this transference poses significant risks for human health. Researchers disagree, however, about the extent of potential harm to human health from the transference of antibiotic-resistant bacteria.
Antibiotic-resistant bacteria have been transferred from animals to humans. Evidence that suggests that this transference has taken place is found in epidemiologic studies showing that antibiotic-resistant E. coli and campylobacter bacteria in humans increase as use of the antibiotics increases in animals. Evidence that establishes transference of antibiotic-resistant bacteria is found in epidemiologic studies that include molecular subtyping. These studies have demonstrated that antibiotic-resistant campylobacter and salmonella bacteria have been transferred from animals to humans through the consumption or handling of contaminated meat. That is, strains of antibiotic-resistant bacteria infecting humans were indistinguishable from those found in animals, and the researchers concluded that the animals were the source of infection. Evidence from epidemiologic studies that do not include molecular subtyping indicates that patterns of antibiotic resistance in humans are associated with changes in the use of particular antibiotics in animals. For example, work conducted in the United States in the 1970s showed an association between the use of antibiotic-supplemented animal feed in a farm environment and the development of antibiotic-resistant E. coli in the intestinal tracts of humans and animals. In the study, isolates from chickens on the farm and from people who lived on or near the farm were tested and found to have low initial levels of tetracycline-resistant E. coli bacteria. The chickens were then fed tetracycline-supplemented feed, and within 2 weeks 90 percent of them were excreting essentially all tetracycline-resistant E. coli bacteria. Within 6 months, 7 of the 11 people who lived on or near the farm were excreting high numbers of resistant E. coli bacteria. Six months after the tetracycline-supplemented feed was removed, no detectable tetracycline-resistant organisms were found in 8 of the 10 people who lived on or near the farm when they were retested.
Another study, based on human isolates of Campylobacter jejuni submitted to the Minnesota Department of Health, reported that the percentage of Campylobacter jejuni in the isolates that were resistant to quinolone increased from approximately 0.8 percent in 1996 to approximately 3 percent in 1998. There is also evidence to suggest that antibiotic-resistant enterococcus has developed from the use of antibiotics in animals. Vancomycin resistance is common in intestinal enterococci of both exposed animals and nonhospitalized humans only in countries that use or have previously used avoparcin (an antibiotic similar to vancomycin) as an antibiotic growth promoter in animal agriculture. Since the EU banned the use of avoparcin as a growth promoter, several European countries have observed a significant decrease in the prevalence of vancomycin-resistant enterococci in meat and fecal samples of animals and humans. Epidemiologic studies that include molecular subtyping have demonstrated that antibiotic-resistant campylobacter and salmonella bacteria have been transferred from animals to humans through the consumption or handling of contaminated meat. That is, strains of antibiotic-resistant bacteria infecting humans were indistinguishable from those found in animals, and the authors of the studies concluded that the animals were the source of infection. The strongest evidence for the transfer of antibiotic-resistant bacteria from animals to humans is found in the case of fluoroquinolone-resistant campylobacter bacteria. Campylobacter is one of the most commonly identified bacterial causes of diarrheal illness in humans. The strength of the evidence is derived in part from the fact that the particular way fluoroquinolone resistance develops for campylobacter bacteria makes it easier to identify the potential source of the resistance. Most chickens are colonized with campylobacter bacteria, which they harbor in their intestines, but which do not make them sick. 
Fluoroquinolones are given to flocks of chickens when some birds are found to have certain infections caused by E. coli. In addition to targeting the bacteria causing the infection, treatment of these infections with fluoroquinolones almost always replaces susceptible campylobacter bacteria with fluoroquinolone-resistant campylobacter bacteria. Because fluoroquinolone resistance is located on the chromosome of campylobacter, the resistance is generally not transferred to other species of bacteria. Therefore, when fluoroquinolone-resistant campylobacter bacteria are detected in human isolates, the source is likely to be other reservoirs of campylobacter bacteria, including animals. In some cases, molecular subtyping techniques have shown that fluoroquinolone-resistant isolates of campylobacter from food, humans, and animals are similar. Fluoroquinolone-resistant Campylobacter jejuni in humans has increased in the United States and has been linked with fluoroquinolone use in animals. CDC reported that in the United States the percentage of Campylobacter jejuni in human isolates that were resistant to fluoroquinolones increased from 13 percent in 1997 to 19 percent in 2001. A study in Minnesota found that fluoroquinolone-resistant Campylobacter jejuni was isolated from 14 percent of 91 chicken products obtained from retail markets in 1997. Through molecular subtyping, the strains isolated from the chicken products were shown to be the same as those isolated from nearby residents, thereby bolstering the case that the chickens were the source of the antibiotic resistance. During the 1980s, the resistance of campylobacter bacteria to fluoroquinolones increased in Europe. European investigators hypothesized that there was a causal relationship between the use of fluoroquinolones in animals and the increase in fluoroquinolone-resistant campylobacter infections in humans.
For example, an epidemiologic study that included molecular subtyping in the Netherlands found that among different strains of campylobacter bacteria, the percentage of fluoroquinolone-resistant strains in isolates tested had risen from 0 percent in both human and animal isolates in 1982 to 11 percent in human isolates and 14 percent in poultry isolates by 1989. The authors concluded that the use of two new fluoroquinolones, one in humans in 1985 and one in animals in 1987, was responsible for the quinolone-resistant strains. The authors asserted that the extensive use of fluoroquinolones in poultry and the common route of campylobacter infection from chickens to humans suggest that the resistance was mainly due to the use of fluoroquinolones in poultry. Several epidemiologic studies using molecular subtyping have linked antibiotic-resistant salmonella infections in humans, another common foodborne illness, to animals. For example, in 1998 bacteria resistant to ceftriaxone were isolated from a 12-year-old boy who lived on a cattle farm in Nebraska. Molecular subtyping revealed that an isolate from the boy was indistinguishable from one of the isolates from the cattle on the farm. No additional ceftriaxone-resistant salmonella infections that could have been the source of the boy’s infection were reported in that state or adjoining states. Similarly, an epidemiologic study in Poland from 1995 to 1997 using molecular subtyping found identical profiles for ceftriaxone-resistant salmonella bacteria in isolates from poultry, feed, and humans. The researchers concluded that the salmonella infections were introduced in the poultry through the feed and reached humans through consumption of the poultry.
Researchers in Taiwan also found that Salmonella enterica serotype choleraesuis bacteria that were resistant to ciprofloxacin in isolates collected from humans and swine were closely related and, following epidemiologic studies, concluded that the bacteria were transferred from swine to humans. Researchers have also documented human infections caused by multidrug-resistant strains of salmonella linked to animals. In 1982, researchers used molecular subtyping to show that human isolates of multidrug-resistant salmonella bacteria were often identical or nearly identical to isolates from animals. In the mid-1990s, NARMS data showed a rapid growth of multidrug resistance in Salmonella enterica serotype Typhimurium definitive type (DT) 104 among humans. Molecular subtyping found that human isolates with this strain of multidrug resistance in Salmonella enterica serotype Typhimurium DT104 in 1995 were indistinguishable from human isolates with this strain tested in 1985 and 1990. These results indicated that the widespread emergence of multidrug resistance in Salmonella enterica serotype Typhimurium DT104 may have been due to dissemination of a strain already present in the United States. Because food animals are the reservoir for most domestically acquired salmonella infections and transmission from animals to humans occurs through the food supply, the researchers concluded that the human infections were likely from the animals. Recently, there has been an emergence of multidrug-resistant Salmonella enterica serotype Newport infections that include resistance to cephalosporins, such as cefoxitin. Based on molecular subtyping, multidrug-resistant salmonella isolates from cattle on dairy farms were found to be indistinguishable from human isolates.
An epidemiologic study found that the infections in humans were associated with direct exposure to a dairy farm, and the authors hypothesized that the infections were associated with handling or consuming the contaminated foods. The extent of harm to human health from the transference of antibiotic-resistant bacteria from animals is uncertain. Many studies have found that the use of antibiotics in animals poses significant risks for human health, and some researchers contend that the potential risk of the transference is great for vulnerable populations. However, a small number of studies contend that the health risks of the transference are minimal. Some studies have sought to determine the human health impacts of the transference of antibiotic resistance from animals to humans. For example, the Food and Agriculture Organization of the United Nations (FAO), OIE, and WHO recently released a joint report based on the scientific assessment of antibiotic use in animals and agriculture and the current and potential public health consequences. The report states that use of antibiotics in humans and animals alters the composition of microorganism populations in the intestinal tract, thereby placing individuals at increased risk for infections that would otherwise not have occurred. The report also states that use of antibiotics in humans and animals can lead to increases in treatment failures and in the severity of infection. Similarly, a recent review of studies regarding increased illnesses due to antibiotic-resistant bacteria found significant differences in treatment outcomes of patients with antibiotic-resistant bacterial infections and patients with antibiotic-susceptible bacterial infections. For example, one study found that hospitalization rates of patients with nontyphoidal salmonella infections were 35 percent for antibiotic-resistant infections and 27 percent for antibiotic-susceptible infections.
That study also found that the length of illness was 10 days for antibiotic-resistant infections versus 8 days for antibiotic-susceptible infections. Another study found diarrhea from Campylobacter jejuni infections lasted 12 days for antibiotic-resistant infections versus 6 days for susceptible infections. Also, based on this review, the authors estimated that fluoroquinolone resistance likely acquired through animals leads to at least 400,000 more days of diarrhea in the United States per year than would occur if all infections were antibiotic-susceptible. The authors estimated that antibiotic resistance from nontyphoidal salmonella infections mainly arising from animals could account for about 8,700 additional days of hospitalization per year. Experts are especially concerned about safeguarding the effectiveness of antibiotics such as vancomycin that are considered the “drugs of last resort” for many infections in humans. Evidence suggests that use of the antibiotic avoparcin in animals as a growth promoter may increase numbers of enterococci that are resistant to the similar antibiotic vancomycin. A particular concern is the possibility that vancomycin-resistant enterococci could transfer resistance to other bacteria. Some Staphylococcus aureus infections found in hospitals are resistant to all antibiotics except vancomycin, and if these strains were also to develop resistance to vancomycin, treatment could be difficult, if not impossible, with serious consequences for human health. Recently, two human isolates of Staphylococcus aureus were found to be resistant to vancomycin. With the increase in infections that are resistant to vancomycin, the streptogramin antibiotic quinupristin/dalfopristin (Q/D, also known as Synercid) has become an important therapeutic for life-threatening vancomycin-resistant enterococcus infections. Virginiamycin, which is similar to Q/D, has been used in animals since 1974, and Q/D was approved for human use in 1999.
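The kind of excess-burden arithmetic described above can be sketched simply: the estimated extra illness days equal the number of resistant cases times the extra days of illness per resistant case. The per-case durations below (12 versus 6 days for Campylobacter jejuni diarrhea) come from the studies cited in the text; the annual case count is a hypothetical placeholder, not a figure from this report or the underlying studies.

```python
# Illustrative back-of-envelope calculation of excess illness burden
# attributable to antibiotic resistance. The 12- and 6-day durations
# are from the studies cited in the text; the case count is an
# ASSUMED placeholder for illustration only.

def excess_illness_days(resistant_cases, days_resistant, days_susceptible):
    """Extra days of illness attributable to resistance."""
    return resistant_cases * (days_resistant - days_susceptible)

hypothetical_resistant_cases = 50_000  # assumed annual resistant-case count
print(excess_illness_days(hypothetical_resistant_cases, 12, 6))  # 300000
```

The published estimates combine such per-case differences with surveillance-based case counts and attribution fractions, which is why they carry substantial uncertainty.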
NARMS data from 1998 to 2000 indicate that Q/D-resistant Enterococcus faecium has been found in chicken and ground pork purchased in grocery stores, as well as in human stools. Experts hypothesize that use of virginiamycin in poultry production has led to Q/D-resistant bacteria in humans because the antibiotics are very similar, but the human health consequences of this have not been quantified. Experts are also concerned about risks to vulnerable populations such as individuals with compromised immune systems or chronic diseases, who are more susceptible to infections, including antibiotic-resistant infections. For example, salmonella infections are more likely to be severe, recurrent, or persistent in persons with human immunodeficiency virus (HIV). Another concern is that people with resistant bacteria could inadvertently spread those bacteria to hospitalized patients, including those with weakened immune systems. Although it is generally agreed that transference is possible, some researchers contend that the health risks of the transference are minimal. Proponents of this view note that not all studies have shown an increase in antibiotic-resistant bacteria. For example, one study conducted between 1997 and 2001 found no clear trend toward greater antibiotic resistance in salmonella bacteria. Proponents of this view also assert that restricting the use of antibiotics in animal agriculture could lead to greater levels of salmonella and campylobacter bacteria reaching humans through meat, thus increasing the risk of human infections. Conversely, some of these researchers also argue that the risk to humans of acquiring these infections from animals can be eliminated if meat is properly handled and cooked. They also cite a few studies that have concluded that the documented human health consequences are small. For example, they noted that one study estimated that banning the use of virginiamycin in animals in the U.S.
would lower the number of human deaths by less than one over 5 years. FDA, CDC, and USDA have increased their surveillance activities related to antibiotic resistance in animals, humans, and retail meat since beginning these activities in 1996. New programs have been added, the number of bacteria being studied has increased, and the geographic coverage of the sampling has been expanded. In addition, all three agencies have sponsored research on the human health risk from antibiotic resistance in animals. FDA has taken several recent actions to minimize the human health risk of antibiotic resistance from animals, but the effectiveness of its actions is not yet known. These activities include administrative action to prohibit the use of the fluoroquinolone enrofloxacin (Baytril) for poultry and the development of a recommended framework for conducting qualitative risk assessments of all new and currently approved animal drug applications with respect to antibiotic resistance and human health risk. FDA, CDC, and USDA have six surveillance activities ongoing to identify and assess the prevalence of resistant bacteria in humans, animals, or retail meat. (See table 1.) Since 1996, these activities have expanded to include additional bacteria, greater geographic coverage, and new activities. Two of these activities—NARMS and Collaboration in Animal Health, Food Safety and Epidemiology (CAHFSE)—focus on antibiotic resistance from animals. The other four activities—Foodborne Diseases Active Surveillance Network (FoodNet), PulseNet, PulseVet, and National Animal Health Monitoring System (NAHMS)—focus on foodborne disease or animal health in general, not antibiotic resistance, but are nevertheless relevant to issues of antibiotic resistance. Figure 3 shows how these different surveillance activities provide data about various aspects of antibiotic resistance. NARMS monitors changes in susceptibilities of bacteria in humans and animals to antibiotics. 
To assess the extent of changes in levels of resistance, NARMS collects animal and human isolates of six different bacteria, specifically non-Typhi Salmonella, Campylobacter, E. coli, Enterococcus, Salmonella Typhi, and Shigella. These activities are conducted under three independent, yet coordinated, programs, with FDA serving as the funding and coordinating agency. The human program gathers isolates from humans and is led by CDC. The animal program, led by USDA, gathers isolates from animals on farms, from slaughter and processing plants, and from diagnostic laboratories. The retail meat program gathers samples of meat purchased at grocery stores and is run by FDA. The agencies work together to standardize results through ongoing quality control efforts. NARMS has expanded in three major ways—range of bacteria tested, geographic coverage, and number of programs—since it was established in 1996. For example, human NARMS started by looking at two bacteria and now studies six bacteria. Further, NARMS also assessed the potential of other bacteria to become sources of resistance by collecting and assessing listeria and vibrio isolates in pilot studies. With regard to geographic coverage, the number of participating health departments has increased from 14 state and local health departments in 1996 to all 50 states and Washington, D.C., in 2003. Finally, the retail meat program was added in 2002. Initially, 5 states participated in the retail meat program, but by 2004, 10 states were participating. Despite this recent expansion, all of NARMS experienced budget cuts in fiscal year 2004, calling into question future expansion efforts. For example, the USDA budget for the animal program was cut 17.6 percent for 2004. NARMS has also produced collaborative research efforts among FDA, CDC, and USDA and helped further scientific understanding of antibiotic resistance. 
For example, data from NARMS led CDC to conclude that the proportion of campylobacter isolates resistant to ciprofloxacin in 2001 was 2.4 times higher than in 1997. Similarly, FDA and CDC officials reported that NARMS data were used to evaluate antibiotic resistance to fluoroquinolones, and CDC officials told us that after NARMS data showed an increased number of cases of Salmonella Newport infections in humans, researchers at CDC and USDA shared human and animal isolates to determine whether the same pattern existed in animals. CAHFSE, established by USDA in 2003, collects samples from animals on farms to identify changes in antimicrobial resistance over time. The first animals that are being tested in the program are swine. USDA conducts quarterly sampling of 40 fecal and 60 blood samples from animals from farms in four states. As of March 2004, 40 farms were participating in CAHFSE. In addition to the laboratory analyses, there are plans for risk analyses, epidemiologic studies, and field investigations, as well as analysis of samples collected at slaughter, and the addition of more species, funding permitting. FoodNet, PulseNet, PulseVet, and NAHMS focus on foodborne disease or animal health rather than antibiotic resistance. FoodNet, the principal foodborne disease component of CDC’s Emerging Infections Program, is a collaborative project with 10 states (referred to as FoodNet sites), USDA, and FDA. The goals of FoodNet are to determine the incidence of foodborne diseases, monitor foodborne disease trends, and determine the proportion of foodborne diseases attributable to specific foods and settings. FoodNet data are derived from specimens collected from patients. Isolates from these specimens are sent to NARMS for susceptibility testing. CDC officials reported that one of every 20 patients with a specimen in FoodNet also has an isolate in NARMS. A recent development has been the linking of the NARMS and FoodNet data systems.
For example, FoodNet data can be used to determine whether an individual was hospitalized, and NARMS data can reveal whether the bacteria that infected the person were resistant to antibiotics. CDC officials reported that because of the linked databases, they were able to determine whether, for example, someone with an antibiotic-resistant salmonella infection was more likely to be hospitalized than someone with an antibiotic-susceptible salmonella infection. FoodNet also has a role in the retail meat program of NARMS. The FoodNet sites purchase the meat samples from grocery stores, examine the samples for the prevalence or frequency of bacterial contamination, and forward isolates of the bacteria to FDA for susceptibility testing for antibiotic resistance. PulseNet is CDC’s early warning system for outbreaks of foodborne disease. USDA recently established a similar animal program, called PulseVet. PulseNet studies isolates from humans and suspected food, and PulseVet studies isolates from animals. Both PulseNet and PulseVet conduct DNA fingerprinting of bacteria and compare those patterns to other samples in order to identify related strains. The PulseNet and PulseVet isolates are tested for antibiotic resistance at CDC and USDA, respectively. FDA also performs DNA fingerprinting on salmonella and campylobacter isolates obtained from the retail meat program of NARMS and submits these data to PulseNet. NAHMS, which focuses on healthy animals, was initiated by USDA in 1983 to collect, analyze, and disseminate data on animal health, management, and productivity across the United States. Since 1990, USDA has annually conducted studies on animal health, including information about antibiotic use, through NAHMS. Each study focuses on different animals, including swine, cattle (both dairy and beef), and sheep. NAHMS provides only a snapshot of a particular species or commodity; it does not track changes over time. 
While NAHMS contributes information about healthy animals, a USDA official told us that it also includes information about antibiotics used and may include information on the route of administration and the reason for treatment, which can be useful in further understanding NARMS findings. In addition, researchers and veterinarians are able to access the NAHMS database for studies of disease incidence, risk assessment, and preventive treatment techniques. Further, bacteria samples obtained from NAHMS have been added to the NARMS database.

Under the federal Interagency Task Force on Antimicrobial Resistance action plan, FDA, CDC, and USDA have initiated a number of research efforts that are relevant to antibiotic use in animals and human health. These ongoing research efforts focus on defining the effects of using various animal drugs on the emergence of antibiotic-resistant bacteria and identifying risk factors and preventive measures. Through CDC, FDA currently has cooperative agreements with four veterinary schools to study ways to reduce antibiotic-resistant bacteria in animals and is assessing the prevalence of antibiotic-resistant DNA in feed ingredients. In addition, FDA annually issues a 3-year research plan that describes research focusing on, among other things, antibiotic resistance in animals and its consequences for human health. Current studies include efforts to examine the consequences of antibiotic use in animals, the transmission of antibiotic resistance, and the processes underlying the spread of antibiotic resistance. In total, CDC has funded three projects under its Antimicrobial Resistance Applied Research extramural grant program. One of these grants, for example, is to study the prevalence of antibiotic-resistant E. coli in chicken and ground beef products, examine the risk factors for human colonization with a resistant strain of E.
coli, and compare characteristics of antibiotic-susceptible and antibiotic-resistant isolates from meat with those of antibiotic-susceptible and antibiotic-resistant isolates from humans. Similarly, USDA has funded studies of antibiotic resistance in chicken, turkey, pork, and dairy products. These studies have provided additional sources of isolates to FDA for risk assessment purposes. Also, USDA’s Cooperative State Research, Education, and Extension Service has funded over 30 studies related to antibiotic resistance since 2000 and awarded an additional $8 million in grants in 1999 and 2000. Funded research includes studies on the prevalence, development, and possible transmission of antibiotic resistance; the epidemiology of antibiotic resistance; and the evaluation of management practices and potential prevention/intervention strategies for antibiotic resistance.

FDA has taken a variety of actions to minimize the risk to the public health of antibiotic resistance in humans resulting from the use of antibiotics in animals, although it is still too early to determine the effectiveness of these actions. First, FDA has taken action to prohibit the use of an already approved animal drug for poultry because of concerns about human health risk. Second, the agency developed a recommended framework for reviewing all new animal antibiotic applications with respect to antibiotic resistance and human health risk. Third, FDA has begun reviewing antibiotics currently approved for use in animals according to its new framework to determine whether FDA needs to act to ensure that the drugs are safe. It is too early to determine the effectiveness of FDA’s review of currently marketed drugs. FDA has not made drugs used in animals that are critically important for human health its top priority for review, and any remedial actions pursued by the agency may take years to complete.
On October 31, 2000, FDA proposed withdrawing the approval of enrofloxacin (Baytril), a fluoroquinolone drug used in poultry, after human health risks associated with the use of the drug in chickens and turkeys were documented by, among others, NARMS. Enrofloxacin is administered to flocks of poultry in their water supply to control mortality associated with E. coli and Pasteurella multocida organisms. FDA found that new evidence, when evaluated with the information available when the application was approved, demonstrated that the use of enrofloxacin in poultry flocks had not been shown to be safe for humans. Specifically, FDA determined that the use of enrofloxacin in poultry causes the development of a fluoroquinolone-resistant strain of campylobacter in poultry, which, when transferred to humans, is a significant cause of fluoroquinolone-resistant campylobacter infections in humans.

Before proceeding with formal efforts to withdraw approval for use of enrofloxacin with poultry flocks, FDA considered a number of alternative actions. For example, the agency determined that changing the label to limit use to the treatment of individual birds and limiting use to one time or one treatment per individual bird were impractical. The agency also considered and rejected the establishment of a registry that would require veterinarians to demonstrate the need for the drug. FDA proceeded with its efforts to withdraw approval of enrofloxacin for use in poultry because it knew that there were alternative effective drugs for treating these illnesses in poultry.

In February 2002, FDA announced that a hearing would be held on the proposal to withdraw approval of enrofloxacin. Since FDA’s proposed action to ban the use of enrofloxacin in poultry, representatives of both FDA and Bayer, the manufacturer of Baytril, as well as numerous experts, have provided testimony on the question of its safety.
Submission of written testimony was due in December 2002, and cross-examination of witnesses took place from late April 2003 through early May 2003. The final posthearing briefs and responses were delivered in July and August 2003. On March 16, 2004, an FDA administrative law judge issued an initial decision withdrawing the approval of the new animal drug application for Baytril. This decision will become final unless it is appealed to the FDA Commissioner by Bayer or another participant in the case or the Commissioner chooses to review it on his own initiative. If the Commissioner reviews and upholds the initial decision, Bayer or another participant may choose to appeal in court.

When FDA determines that the human health risk from antibiotic use in animals is not acceptable, the agency may initiate risk management strategies to contain such risk. In October 2003, as part of its efforts to approve and regulate animal drugs, FDA issued Guidance for Industry #152. The guidance outlines a framework for determining the likelihood that an antibiotic used to treat an animal would cause an antibiotic resistance problem in humans who consume meat or other food products from animals. The guidance’s risk assessment framework is based on three factors—the probability that resistant bacteria are present in the target animal, the probability that humans would ingest the bacteria in question from the relevant food commodity, and the probability that human exposure to resistant bacteria would result in an adverse health consequence. The resulting overall risk estimate is ranked as high, medium, or low.

Because the guidance is new, it is not yet known how the results of a risk assessment conducted according to the guidance will influence FDA’s decisions to approve new drug applications. Agency officials told us that FDA has never denied a new or supplemental animal drug application because of evidence that the drug caused antibiotic resistance in humans.
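The three-factor qualitative ranking described above can be pictured in code. This is only an illustrative sketch: the rule used here for combining the three factor rankings (take the highest of the three) is a simplifying assumption, not the actual combination tables defined in Guidance for Industry #152.

```python
# Illustrative sketch of the three-factor qualitative risk ranking described
# in Guidance for Industry #152. The combination rule (overall risk equals
# the highest of the three factor rankings) is an assumption made for
# illustration; the guidance defines its own rules for combining rankings.

RANKS = ("low", "medium", "high")

def overall_risk(release: str, exposure: str, consequence: str) -> str:
    """Combine three factor rankings into an overall risk estimate.

    release     -- probability that resistant bacteria are present in the
                   target animal
    exposure    -- probability that humans ingest those bacteria from the
                   relevant food commodity
    consequence -- probability that human exposure results in an adverse
                   health outcome
    """
    for rank in (release, exposure, consequence):
        if rank not in RANKS:
            raise ValueError(f"ranking must be one of {RANKS}, got {rank!r}")
    # Conservative placeholder rule: the overall estimate is the highest
    # individual factor ranking.
    return max((release, exposure, consequence), key=RANKS.index)

# Under this conservative placeholder rule, a drug with low release and
# exposure rankings but a high consequence ranking is ranked high overall.
print(overall_risk("low", "low", "high"))
```

A conservative default of this kind mirrors the FDA official's reported practice of assigning a higher score when information needed to complete a section of the assessment is unavailable.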
In addition, the risk assessment guidance states that drugs with high risk may still be approved, though with specific use restrictions, if there is a reasonable certainty of no harm to human health when the drug is approved. These restrictions might include availability only by prescription, restrictions on uses not specified on the label (known as extralabel use), limitations for use in individual animals (versus groups of animals) for fewer than 21 days, and requirements for postapproval monitoring. FDA has previously used these kinds of restrictions with some drugs. While agency officials told us that the extralabel use prohibitions for animal drugs have generally reduced unauthorized use, such use restrictions may not prevent human health risk. For example, while FDA had earlier limited fluoroquinolones to use by or under the order of a veterinarian and prohibited the extralabel use of fluoroquinolones, the agency has now concluded that a human health risk exists despite these restrictive measures.

FDA officials reported that the agency has reviewed about seven new drug applications using the risk assessment framework in Guidance for Industry #152. Some of those drugs have been approved. Other drugs have been approved but with label claims different from those requested in the application. FDA officials have not denied approval to any of these new drug applications.

To determine whether future regulatory actions may be necessary, FDA is conducting risk assessments for drugs currently used in animal agriculture that are also important for human medicine. FDA began with two quantitative risk assessments for drugs ranked as critically important for human health at the time the assessments were initiated. FDA completed the assessment for fluoroquinolones in October 2000 and expects to complete the assessment for virginiamycin, a streptogramin drug related to Synercid, its counterpart for humans, in 2004.
The quantitative risk assessments calculate estimates of the number of cases of infection. Agency officials told us that they had hoped the quantitative risk assessment approach would provide a template for future risk assessments; however, FDA concluded that it would not. FDA officials told us that as a result, the agency plans to review other currently marketed antibiotics using the qualitative risk assessment framework outlined in Guidance for Industry #152, which uses broad categories to assess risk. An FDA official reported that if the information necessary to complete any section of the qualitative risk assessment were unavailable, the agency would assign a higher score to the product, to err on the side of caution. After outlining possible risk management steps, if any, the agency would allow a drug’s sponsor (generally a pharmaceutical firm) to provide additional information to help FDA reconsider its risk estimate. Generally, these qualitative risk assessments are considered to be a starting point for examining the human health risk of some drugs.

FDA has not made drugs that are critically important for human health its top priority for review. (See app. III for more detail on evaluating the importance of an animal drug for human health.) Instead, the agency focused its first qualitative risk assessments on subtherapeutic penicillin and tetracycline drugs. These assessments are expected to be completed by April 2004. FDA officials told us that the agency will then conduct qualitative risk assessments for therapeutic penicillin and tetracycline drugs, followed by assessments for those drugs that are defined in Guidance for Industry #152 as critically important for human health. As of March 2004, there were four such categories of drugs.

For a number of reasons, it is not known whether FDA’s new framework for reviewing currently approved and marketed animal drugs will be able to effectively identify and reduce any human health risk.
First, under this plan, it may take years for FDA to identify and reduce any human risk of acquiring antibiotic resistance from meat. FDA has not developed a schedule for conducting the qualitative risk assessments on the currently approved drugs, and the assessments may take a significant amount of time to complete. For example, based on the current schedule, FDA officials told us they expect the qualitative risk assessments of subtherapeutic penicillins and tetracyclines, which were begun in 2002, to take nearly 2 years to complete. Second, FDA officials told us that the risk estimation from the qualitative risk assessments will use only data already available in the original new drug application and any supplemental drug applications, rather than actively seeking new evidence. However, FDA told us that new evidence was an important factor in its risk assessment of fluoroquinolones. Finally, while FDA can pursue a number of enforcement options if its reviews uncover a human health risk, it is not known whether they will be effective or how long it will take for such changes to take effect. As the enrofloxacin case demonstrates, risk management strategies may not mitigate human health risk, and administrative proceedings can extend for several years after FDA decides to take enforcement action. An FDA official also told us that if the drug sponsor voluntarily cooperates in implementing risk management strategies, lengthy administrative proceedings may be avoided.

Although they have made some progress in monitoring antibiotic resistance associated with antibiotic use in animals, federal agencies do not collect data on antibiotic use in animals that are critical to supporting research on the human health risk. Data on antibiotic use would allow agencies to link use to the emergence of antibiotic-resistant bacteria, help assess the risk to human health, and develop strategies to mitigate resistance.
FDA and USDA do not collect these data because of costs to the industry and other factors. Countries that collect antibiotic use data, depending on the amount and type of data collected, have been able to conduct more extensive research than U.S. agencies.

According to FDA, CDC, and USDA, more data are needed on antibiotic use in animals in order to conduct further research on antibiotic resistance associated with this use. In particular, FDA has stated that it needs information on the total quantity of antibiotics used in animals, by class; the species they are used in; the purpose of the use, such as disease treatment or growth promotion; and the method used to administer the antibiotic. WHO and OIE have also recommended that countries collect such data. This information could be used for the following:

To link antibiotic use to emerging strains of antibiotic-resistant bacteria. Antibiotic use information would clarify the relationship between resistance trends in NARMS and the actual use of antibiotics. For example, detailed on-farm data on antibiotic use and other production practices that are linked to bacteria samples from animals could help identify the conditions under which resistant bacteria develop.

To help assess risk to human health. Information on antibiotic use would help assess the likelihood that humans could be exposed to antibiotic-resistant bacteria from animals. This potential exposure is important in determining the risk that antibiotic use in animals may pose to human health.

To develop and evaluate strategies to mitigate resistance. Data on antibiotic use would help researchers develop strategies for mitigating increased levels of resistant bacteria in animals, according to CDC officials. Strategies could be developed based on such factors as the way the drug is administered, dosage levels, or use in a particular species.
In addition, unless data are available for monitoring the effects of these interventions, researchers cannot assess the strategies’ effectiveness.

FDA recognizes that additional data on antibiotic use in animal production would facilitate research on the linkages to human resistance. To that end, FDA had considered a plan that would have required pharmaceutical companies to provide more detailed information on antibiotics distributed for use in animals. This information would have been reported as a part of FDA’s ongoing monitoring of these antibiotics after their approval. However, according to FDA officials, this more detailed reporting would have resulted in significant costs to the pharmaceutical industry. Consequently, FDA is analyzing other options to minimize the burden to the industry.

In addition, the information that USDA collects through NAHMS is of limited use for supporting research on the relationship between antibiotic use in animals and emerging antibiotic-resistant bacteria. NAHMS was not designed to collect antibiotic use data; instead, as previously discussed, its main goal is to provide information on U.S. animal health, management, and productivity. Through NAHMS, USDA does collect some data on antibiotic use, but only periodically and only for certain species. For example, it has studied the swine industry every 5 years since 1990 but has not yet studied broiler chickens—the most common type of poultry Americans consume.

USDA’s Collaboration in Animal Health, Food Safety and Epidemiology (CAHFSE) is a new program designed to enhance understanding of bacteria that pose a food safety risk. USDA plans to monitor, over time, the prevalence of foodborne and other bacteria, as well as their resistance to antibiotics on farms and in processing plants. These data are expected to facilitate research on the link between agricultural practices, such as the use of antibiotics, and emerging resistant bacteria.
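The use-data elements that FDA has said it needs (total quantity by class, species, purpose of use, and method of administration) can be pictured as a simple record layout. The field names and category values below are illustrative assumptions for this sketch, not an actual agency schema.

```python
# Hypothetical record layout for the antibiotic use data FDA has identified
# as needed. Field names and example category values are illustrative
# assumptions, not an actual FDA or USDA data format.
from dataclasses import dataclass

@dataclass
class AntibioticUseRecord:
    antibiotic_class: str  # e.g. "macrolide", "tetracycline"
    species: str           # e.g. "swine", "broiler chicken"
    purpose: str           # e.g. "disease treatment", "growth promotion"
    route: str             # method of administration, e.g. "feed", "water"
    quantity_kg: float     # total quantity of active ingredient

def total_by_class(records):
    """Aggregate total quantity per antibiotic class -- the kind of summary
    that would let resistance trends in NARMS be compared against use."""
    totals = {}
    for record in records:
        totals[record.antibiotic_class] = (
            totals.get(record.antibiotic_class, 0.0) + record.quantity_kg
        )
    return totals
```

Summaries of this kind, broken down further by species or purpose, are what would allow researchers to relate resistance trends to actual patterns of use, as Denmark's data collection system does.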
Currently, however, CAHFSE does not provide information on the impact of antibiotic use for species such as poultry and cattle or for a significant portion of the swine industry. According to USDA, CAHFSE funding comes primarily from a limited amount of funding that is redirected from other USDA programs, and the program would need additional funding before it could expand to cover processing plants, more swine operations, or other species. USDA officials told us they plan to coordinate data collection and analysis efforts for CAHFSE with NARMS activities at FDA and CDC.

According to the officials we spoke with at market research firms, private companies also collect some data on antibiotic use, but this information is developed for commercial purposes and is not always available for public research. These companies collect information on animal production practices, including antibiotic use, and sell this information to producers, who use it to compare their production costs and practices with those of other producers. They also sell these data to pharmaceutical companies, which use the information to estimate the future demand for their products. In any case, the market research firms do not design their data collection efforts to assist research on antibiotic resistance.

Unlike the United States, other countries, such as Denmark, New Zealand, and the United Kingdom, collect more extensive data on antibiotic use in animals. Among the countries we examined, Denmark collects the most comprehensive and detailed data, including information on the quantities of antibiotics used in different animal species by age group and method of administration. According to Danish researchers, these data have allowed them to take the following actions:

Link antibiotic use in animals to emerging strains of antibiotic-resistant bacteria.
Danish researchers have been able to determine how changes in the consumption of antibiotics in animals affect the occurrence of antibiotic-resistant bacteria. In addition, researchers began collecting additional data on antibiotic-resistant bacteria in humans in 2002, allowing them to explore the relationship between levels of antibiotic-resistant bacteria in animals, food, and humans.

Develop strategies to mitigate resistance. By monitoring trends in antibiotic use and levels of antibiotic-resistant bacteria, Denmark has been able to adjust national veterinary use guidelines and revise regulations to minimize potential risk to human health.

Other countries, such as New Zealand and the United Kingdom, have data collection systems that are not as comprehensive as Denmark’s. Nevertheless, these nations collect data on total sales of antibiotics used in animals by class of antibiotic. The United Kingdom is also working to more accurately track the sales of antibiotics for use in different species. These data show trends in use over time and identify the importance of different antibiotic classes for the production of livestock and poultry. According to the official responsible for the United Kingdom’s data collection system, collecting these data requires few resources. In addition, Canadian officials told us Canada is collecting some data on antibiotic use on farms and expects to collect data on sales of antibiotics used in animals. Canada also plans to develop comprehensive methods to collect use data and integrate these data into its antibiotic resistance surveillance system. According to Canada’s first annual report on antibiotic resistance, issued in March 2004, its next annual report will include some information on antibiotic use in animals. See appendix IV for information on other countries’ data collection systems.
The United States and several of its key trading partners, such as Canada and South Korea, and its competitors, such as the EU, differ in their use of antibiotics in animals in two important areas: the specific antibiotics that can be used for growth promotion and the availability of antibiotics to producers (by prescription or over the counter).

With respect to growth promotion in animals, the United States, as well as Australia, Canada, Japan, and South Korea, allows the use of some antibiotics from classes important in human medicine. However, the United States and Australia are currently conducting risk assessments to determine whether to continue to allow the use of some of these antibiotics for growth promotion. Canada plans to conduct similar risk assessments, and Japan is reviewing the use of antibiotics for growth promotion if those antibiotics are from classes used in humans. In contrast, New Zealand has completed its risk assessments of antibiotics used for growth promotion and no longer allows the use of any antibiotics for growth promotion that are also related to antibiotics used in human medicine. Similarly, the EU has prohibited its member countries from using antibiotics in feed for growth promotion if those antibiotics are from antibiotic classes used in human medicine. In addition, the EU has issued a regulation that will prohibit the use of all other antibiotics in feed for growth promotion by 2006.

We found differences among the United States’ and other countries’ use of antibiotics for growth promotion in the following four antibiotic classes that FDA has ranked as critically or highly important in human medicine:

Macrolides. The United States, Canada, and South Korea allow antibiotics from the macrolide class for growth promotion, but the EU and New Zealand do not. In the United States, tylosin, a member of this class, is among the most commonly used antibiotics for growth promotion in swine.
As of March 2003, Australia allowed antibiotics from the macrolide class for growth promotion, but it had a review under way on some antibiotics in this class, including tylosin, to determine if growth promotion use should continue.

Penicillins and tetracyclines. The United States, Canada, and South Korea allow certain antibiotics from these two classes to be used for growth promotion, but Australia, the EU, Japan, and New Zealand do not. Furthermore, as mentioned earlier, the United States is currently conducting risk assessments on these two classes to determine whether to continue allowing their use for growth promotion.

Streptogramins. The United States, Canada, and South Korea allow the use of virginiamycin, an antibiotic from this class, for growth promotion, but the EU and New Zealand do not. The United States is conducting a risk assessment on the use of virginiamycin for growth promotion and disease prevention. As of April 2003, Australia permitted virginiamycin for growth promotion, but the Australian agency that regulates antibiotic use in animals has recommended that approval of this use be withdrawn.

Appendix V lists antibiotics—including antibiotics from the above classes—that are frequently used in U.S. animal production.

With regard to the availability of antibiotics to livestock and poultry producers, public health experts advocate requiring a veterinarian’s prescription for the sale of antibiotics. They believe that this requirement may help reduce inappropriate antibiotic use that could contribute to the emergence of antibiotic-resistant bacteria in animals and the human health risk associated with these resistant bacteria. The United States and Canada permit many antibiotics to be sold over the counter, without a veterinarian’s prescription, while the EU countries and New Zealand are more restrictive regarding over-the-counter sales.
The United States and Canada generally allow older antibiotics, such as sulfamethazine, to be sold over the counter, but they require a prescription for newer antibiotics, such as fluoroquinolones. In addition, with regard to the availability of antibiotics from antibiotic classes that are important in human medicine, the United States and Canada allow livestock and poultry producers to purchase several antibiotics over the counter, including penicillins, tetracyclines, tylosin, and virginiamycin. However, Canada is considering changing its rules to require prescriptions for antibiotics used in animals for all antibiotic uses except growth promotion.

In contrast, the EU countries and New Zealand are more restrictive regarding over-the-counter sales of antibiotics for use in animals. Unlike the United States and Canada, the EU does not allow penicillins, tetracyclines, tylosin, and virginiamycin to be sold over the counter and will end all over-the-counter sales by 2006. Denmark, an EU member, already prohibits all over-the-counter sales. Similarly, New Zealand requires producers to have a veterinarian’s prescription for antibiotics that it has determined are associated with the development of resistant bacteria in humans. Appendix IV contains additional information on the key U.S. trading partners and competitors discussed in this section, including, as previously mentioned, their systems for collecting data on antibiotic use.

To date, antibiotic resistance associated with use in animals has not been a significant factor affecting U.S. trade in meat products, according to officials of USDA’s Foreign Agricultural Service, the Office of the U.S. Trade Representative, the U.S. Meat Export Federation, and the U.S. Poultry and Egg Export Council. However, the presence of antibiotic residues in meat has had some impact on trade. In particular, Russia has previously banned U.S. poultry because of the presence of tetracycline residues.
Furthermore, these officials indicated that other issues have been more prevalent in trade discussions, including the use of hormones in beef cattle and animal diseases such as bovine spongiform encephalopathy (commonly referred to as mad cow disease) and avian influenza. For example, the EU currently bans U.S. beef produced with hormones. Many other nations ban the import of U.S. beef because of the recent discovery of an animal in the United States with mad cow disease.

Although federal government and industry officials stated that antibiotic use in animals has not significantly affected U.S. trade to date, we found some indication that this issue might become a factor in the future. As USDA reported in 2003, antibiotic use in animals could become a trade issue if certain countries apply their regulations on antibiotic use in animals to their imports. For example, according to some government and industry officials, the United States’ use of antibiotics could become a trade issue with the EU as it phases out its use of all antibiotics for growth promotion by 2006. However, the EU is not currently a significant market for U.S. meat because of trade restrictions, such as its hormone ban that effectively disallows U.S. beef. Similarly, a Canadian task force reported in June 2002 that the issue of antibiotic resistance and differences in antibiotic use policies could become a basis for countries to place trade restrictions on exports of meat from countries that have less stringent use policies.

The issues of antibiotic use in animals and of the potential human health risk associated with antibiotic-resistant bacteria have also received international attention. For example, in 2003, the Codex Alimentarius Commission, an international organization within which countries develop food safety standards, guidelines, and recommendations, issued draft guidance for addressing the risk of antibiotic resistance in animals.
Codex also requested that a group of experts assess the risk associated with antibiotic use in animals and recommend future risk management options. In December 2003, these experts concluded that the risk associated with antibiotic-resistant bacteria in food represents a significantly more important human health risk than antibiotic residues—an issue that countries have already raised as a trade concern.

Antibiotics have been widely prescribed to treat bacterial infections in humans, as well as for therapeutic and other purposes in animals. Resistance to antibiotics is an increasing public health problem in the United States and worldwide. Published research results have shown that antibiotic-resistant bacteria have been transferred from animals to humans. In evaluating the safety of animal drugs, FDA considers their effect on human health. Such drugs are safe in this regard if there is reasonable certainty of no harm to humans when the drug is used as approved. Using this criterion, FDA has determined that the potential health risk from the transference of antibiotic resistance from animals to humans is unacceptable and must be a part of FDA’s regulation of animal antibiotics.

FDA, CDC, and USDA have made progress in their efforts to assess the extent of antibiotic resistance from the use of antibiotics in animals through both individual and collaborative efforts, including work through the Interagency Task Force. However, the effectiveness of these efforts remains unknown. FDA has developed guidance to evaluate antibiotics used in animals and intends to review all new drug applications and antibiotics currently approved for use in animals for this risk to determine if it needs to act to ensure that the drugs are safe. Although FDA has recently begun the reviews using this approach, its initial reviews have been for drugs other than those that are critically important for human health. FDA officials do not know how long each review will require.
In addition, it is not yet known what actions FDA would take if concerns became evident. Although the agency has the authority to deny or withdraw approval of new or approved animal antibiotics that pose such a risk, FDA also has a variety of other options available. However, FDA action to prohibit the use of fluoroquinolone antibiotics in poultry has continued for more than 3 years.

Finally, researchers and federal agencies still do not have critical data on antibiotic use in animals that would help them more definitively determine any linkage between use in animals and emerging resistant bacteria, assess the relative contribution of this use to antibiotic resistance in humans, and develop strategies to mitigate antibiotic resistance. The experience of countries such as Denmark indicates that data collection efforts are helpful when making risk-based decisions about antibiotic use in animals. While we recognize that there are costs associated with collecting additional data on antibiotic use in animals, options exist for collecting these data that are not cost-prohibitive. For example, the United Kingdom’s efforts to collect national sales data on antibiotic use in animals use relatively few resources. In addition, existing federal programs, such as FDA’s ongoing monitoring of approved antibiotics and USDA’s CAHFSE, can provide a data collection framework that can be expanded to begin collecting the needed data. FDA, CDC, and USDA recognize the importance of such information and have taken some steps to collect data, although they have not yet developed an overall collection strategy. Until the agencies have implemented a plan to collect critical data on antibiotic use in animals, researchers will be hampered in their efforts to better understand how this use affects the emergence of antibiotic-resistant bacteria in humans, and agencies will be hampered in their efforts to mitigate any adverse effects.
Because of the emerging public health problems associated with antibiotic resistance in humans and the scientific evidence indicating that antibiotic-resistant bacteria are passed from animals to humans, we recommend that the Commissioner of FDA expedite FDA’s risk assessments of the antibiotics used in animals that the agency has identified as critically important to human health to determine if action is necessary to restrict or prohibit animal uses in order to safeguard human health. Additionally, because more data on antibiotic use in animals—such as the total quantity used, by class; the species in which they are used; the purpose of the use, such as disease treatment or growth promotion; and the method of administration—are needed to further address the risk of antibiotic resistance, we also recommend that the Secretaries of Agriculture and of Health and Human Services jointly develop and implement a plan for collecting data on antibiotic use in animals that will adequately (1) support research on the relationship between this use and emerging antibiotic-resistant bacteria, (2) help assess the human health risk related to antibiotic use in animals, and (3) help the agencies develop strategies to mitigate antibiotic resistance. We provided USDA and HHS with a draft of this report for review and comment. We also provided segments of the draft related to trade matters to the Department of State and the Office of the U.S. Trade Representative. In their written comments, USDA and HHS generally agreed with the report and provided comments on certain aspects of our findings. USDA stated that our report recognized the many issues and complexities of efforts to address the risk to humans from antibiotic use in animals. The department also provided information on the extent of research related to antibiotic resistance that it has funded since 1998. We added this information to the report.
Regarding our conclusion that antibiotic-resistant salmonella and campylobacter bacteria have been transferred from animals to humans, USDA agreed that it is likely that a transfer has occurred. However, USDA suggested that some of the studies we cited to support that conclusion were, by themselves, inadequate to support a causal link. We believe that our conclusion is firmly supported by a body of scientific evidence, but we have clarified our description of some studies in response to USDA’s comments. On the issue of human health risks, USDA commented that we cited few sources of scientific evidence to support the view that the human health risks from the transference of antibiotic-resistant bacteria are minimal. We found that only a few studies have concluded that the risk is minimal, while many studies have concluded that there is a significant human health risk from the transference. With respect to our recommendation that USDA and HHS jointly develop and implement a plan for collecting data on antibiotic use in animals, USDA stated that our report highlights the importance of the data that the CAHFSE program could provide on the impact of antibiotic use in various animal species. However, USDA pointed out that additional funding resources would be needed to expand CAHFSE and other data collection and research efforts. We revised the report to better reflect USDA’s concern about funding. HHS agreed with our finding that antibiotic-resistant salmonella and campylobacter bacteria have been transferred from food animals to humans. HHS provided references to additional research studies that support our conclusion. We were aware of all of the studies cited by HHS, but we did not include them in the report because we believe that our conclusion was already amply supported.
Regarding our conclusion that researchers disagree about the extent of human health risk caused by the transference of antibiotic resistance, HHS provided information from an unpublished study that found that the course of illness was significantly longer for persons with antibiotic-resistant campylobacter cases than for those with antibiotic-susceptible infections. Most of the studies we identified found modest but significant human health consequences, similar to those in the unpublished study described in HHS’s comments. Regarding our recommendation that the agencies jointly develop and implement a plan for collecting data on antibiotic use in animals, HHS stated that the most useful and reliable antibiotic use data are those maintained by pharmaceutical companies. HHS said current regulations would have to be revised to put the data that pharmaceutical companies are required to report to FDA in a more relevant format for research on antibiotic resistance. As the two agencies develop and implement their plan to collect the relevant data, if they agree that pharmaceutical companies are an important source, they should take whatever regulatory actions might be necessary if the sources they identify will not provide the data voluntarily. HHS also proposed that discussions between HHS and USDA for improving antibiotic use data collection be conducted through the Interagency Task Force on Antimicrobial Resistance. We note that while USDA’s comments on antibiotic use data emphasized collecting on-farm data through its new CAHFSE program, HHS’s comments focused on obtaining data on antibiotic use in animals from pharmaceutical companies. We believe these differing approaches illustrate the need for USDA and HHS to jointly develop and implement a plan to collect data. We agree with HHS that the Interagency Task Force could serve as a forum for discussions between USDA and HHS on this matter. 
USDA’s written comments and our more detailed responses to them are in appendix VI. HHS’s written comments are in appendix VII. In addition, HHS, USDA, the Department of State, and the Office of the U.S. Trade Representative provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretaries of Agriculture and of Health and Human Services and of State; the U.S. Trade Representative; and other interested officials. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please call Marcia Crosse at (202) 512-7119 or Anu Mittal at (202) 512-3841. Other contacts and key contributors are listed in appendix VIII. This report examines the (1) scientific evidence regarding the transference of antibiotic resistance from animals to humans through the consumption or handling of contaminated meat, and the extent of potential harm to human health, (2) progress federal agencies have made in assessing and addressing the human health risk of antibiotic use in animals, (3) types of data that federal agencies need to support research on the human health risk of antibiotic use in animals and the extent to which these data are collected, (4) use of antibiotics in animals in the United States compared with antibiotic use by its key agricultural trading partners and competitors, and (5) information that is available on the degree to which antibiotic use in animals has affected U.S. trade. 
We used the term “animal” to refer to animals raised for human consumption, such as cattle, sheep, swine, chickens, and turkeys; the term “meat” to refer to beef, lamb, pork, chicken, and turkey; and the term “contaminated meat” to refer to meat that contains antibiotic-resistant bacteria. We limited the scope of our work to the transference of antibiotic-resistant bacteria from animals to humans through the consumption or handling of contaminated meat. Specifically, we looked at the evidence for transference of antibiotic-resistant foodborne intestinal pathogens from animals to humans. We did not examine issues related to antibiotics used on plants and seafood, antibiotic residues in animals, or the effects of antibiotics present in the environment because of the application of animal waste to agricultural lands. To examine the scientific evidence regarding the transference of antibiotic resistance from animals to humans through the consumption or handling of contaminated meat, and the extent of harm to human health, we searched medical, social science, and agricultural databases, which included the Department of Health and Human Services’ (HHS) National Library of Medicine, for studies published in professional journals. We identified articles published since the 1970s on antibiotic use and resistance in animals and humans, as well as articles on antibiotic-resistant foodborne illnesses. We interviewed officials from HHS’s Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC) and the U.S. Department of Agriculture (USDA) to determine how these agencies are assessing the human health risk of antibiotic use in animals. We also reviewed reports related to the human health risk of antibiotic use in animals.
Finally, we interviewed officials from relevant professional organizations (e.g., the American Medical Association) and public health advocacy groups (e.g., the Center for Science in the Public Interest) to identify other data or studies on the issue of human health risk from antibiotic use in animals. To determine federal agencies’ progress in assessing and addressing the human health risk of antibiotic use in animals, we examined documents from FDA, CDC, and USDA. These documents include reports on results from the federal government’s antibiotic resistance surveillance program and on the progress of the federal Interagency Task Force on Antimicrobial Resistance, documents presented in an FDA administrative court concerning the agency’s proposal to withdraw the approval of the use of a certain antibiotic used in poultry that is also an important antibiotic in human medicine, and FDA’s framework to assess the human health risk of antibiotic use in animals. To examine the types of data that federal agencies need concerning antibiotic use in animals in order to support research on the human health risk and the extent to which these data are collected, we reviewed federal agency documents and reports and interviewed FDA, CDC, and USDA officials. In particular, we discussed the status of FDA’s efforts to collect data on U.S. antibiotic use in animals, the status of USDA’s programs that collect data on antibiotic use, and CDC’s initiatives that would benefit from use data. We reviewed foreign government reports to determine how other countries use antibiotic use data for research; we also reviewed international reports from the World Health Organization (WHO) and the Office International des Epizooties (OIE), which provide guidelines on the types of use data that countries should collect. We also interviewed officials from Denmark, which collects extensive data on antibiotic use in animals, and from Canada, which plans to implement a data collection system. 
We discussed the availability of data on U.S. antibiotic use in animals with officials from pharmaceutical companies, industry associations, state veterinary offices, firms that collect data on antibiotic use in animals, and public health advocacy groups. To examine how the use of antibiotics in animals in the United States compares with antibiotic use by its key agricultural trading partners and competitors, we obtained and reviewed information on antibiotic use in animals for the United States and its key partners and competitors in international meat trade. Using international trade data, we identified the European Union (EU) and 11 countries—Australia, Brazil, Canada, China, Denmark, Hong Kong, Japan, Mexico, New Zealand, Russia, and South Korea—as key U.S. trading partners or competitors. We obtained information on countries’ antibiotic use in animals through discussions with officials of USDA’s Animal and Plant Health Inspection Service and Foreign Agricultural Service (FAS) and literature searches to identify relevant documents. In addition, we discussed antibiotic use in animals with government officials from Canada, a leading U.S. trading partner and competitor, and Denmark, a leading U.S. trading partner and competitor that took significant actions to curtail antibiotic use in animals during the late 1990s. We also e-mailed a questionnaire to FAS agricultural attachés in the EU and the key trading partner or competitor countries, except Canada and Denmark. For Canada and Denmark, we obtained responses to this questionnaire from Canadian and Danish government officials as part of our visits to these countries. 
We did not send this questionnaire to government officials of the EU and the other nine countries because of Department of State and FAS officials’ concerns that antibiotic use in animals may be a sensitive issue for some foreign governments and that some governments may be suspicious about the questionnaire’s underlying purposes; for the same reasons, in completing this questionnaire, the FAS agricultural attachés were instructed to not contact foreign government officials. As a result, the amount of information we obtained varies by country, and we were able to obtain only very limited information on antibiotic use in Brazil, China, Hong Kong, Japan, Mexico, Russia, and South Korea. We did not independently verify the information reported in responses to this questionnaire or other documents, including laws and regulations, from the foreign countries. To obtain information on antibiotic use in U.S. animal production, we reviewed FDA regulations; USDA’s National Animal Health Monitoring System reports on management practices, including antibiotic use practices in beef cattle and swine production; a University of Arkansas study of antibiotic use in broiler chickens; the Animal Health Institute’s annual reports on antibiotic use in animals; and a Union of Concerned Scientists report. We did not independently verify the information contained in these reports. In addition, we spoke with officials from state veterinarians’ offices and from agricultural industry organizations, including the American Veterinary Medicine Association, the National Pork Producers Council, the American Meat Institute, the National Cattlemen’s Beef Association, the U.S. Poultry and Egg Export Council, the National Chicken Council, and pharmaceutical and poultry companies. We also visited livestock and poultry farms in Georgia, Maryland, and Pennsylvania. 
We compared the United States’ policies regulating antibiotic use in animals with the policies of those key trading partners and competitors for which this information was available. In addition, we summarized available information on countries’ activities to address antibiotic resistance associated with antibiotic use in animals, and, for the United States, we developed a list of the antibiotics most commonly used in beef cattle, swine, and broiler chickens. To examine information that is available on the degree to which antibiotic use in animals has affected international trade, we reviewed reports on trade and food safety issues from USDA’s Economic Research Service and FAS, foreign governments, and international organizations. We also examined records of USDA’s Food Safety and Inspection Service to identify countries that have requirements concerning antibiotic use for the meat they import. In addition, we reviewed the reports and standards of international trade organizations, such as the World Trade Organization, the Codex Alimentarius Commission, and OIE. We discussed antibiotic use and other potential trade issues with officials from the Office of the U.S. Trade Representative, FAS, and meat industry trade associations. We also identified several studies on estimates of the potential economic impacts of restrictions on antibiotics used in meat production. These are described in detail in appendix II. We conducted our work from May 2003 through April 2004 in accordance with generally accepted government auditing standards. In this appendix we identify and summarize eight recent studies that provide estimates of the potential economic impacts of restrictions on antibiotics used in livestock production. Specifically, these studies estimate the economic effects of a partial and/or total ban of antibiotics used in animals. 
For several decades, antibiotics have been used for a variety of production management reasons, from therapeutic uses to increased productivity, such as feed efficiency or weight gain. In economic terms, higher productivity results in more final product supplied to the market, at a lower cost to consumers. Despite the use of a variety of economic models, assumptions about model parameters, and data sets, the studies that we identified produced generally comparable estimates of the economic impacts on consumers and producers. Overall, the studies conclude that a ban or partial ban on antibiotics in animal production would increase costs to producers, decrease production, and increase retail prices to consumers. For example, the studies indicate that the elimination of antibiotic use in pork production could increase costs to producers by $2.76 to $6.05 per animal, which translates into increased consumer costs for pork ranging from $180 million per year to over $700 million per year. Table 2 summarizes the eight studies. While these market effects are important to both producers and consumers of livestock products, they must be balanced against the health care costs of antibiotic resistance due to agricultural uses of antibiotics. Potential health costs imposed by increased antibiotic resistance include more hospitalizations, higher mortality rates, and higher research costs to find new and more powerful drugs. From the point of view of proposals to reduce antibiotic use, these potential costs represent the benefits from reduced antibiotic use. These costs to society, however, are difficult to measure because of limited data on antibiotic use and resistance as well as the problematic nature of measuring the value of a human life.
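The per-animal-to-aggregate arithmetic that underlies figures like these can be sketched in a few lines. The $2.76 and $6.05 per-animal estimates come from the studies cited above; the annual number of hogs marketed and the assumption of full cost pass-through are illustrative assumptions, not figures from this report.

```python
# Back-of-the-envelope aggregation: per-animal cost increase multiplied by
# animals marketed per year gives an aggregate annual cost. The herd size
# below is an illustrative assumption, not a figure from the studies.
HOGS_MARKETED_PER_YEAR = 100_000_000  # assumed annual U.S. hog marketings

def aggregate_annual_cost(cost_per_animal, animals_per_year=HOGS_MARKETED_PER_YEAR):
    """Aggregate annual cost if the full per-animal increase is passed through."""
    return cost_per_animal * animals_per_year

low = aggregate_annual_cost(2.76)   # low end of the per-animal estimates
high = aggregate_annual_cost(6.05)  # high end of the per-animal estimates
print(f"roughly ${low / 1e6:.0f} million to ${high / 1e6:.0f} million per year")
```

The studies’ reported $180 million to $700 million consumer-cost range differs from a simple full pass-through because each model applies its own supply and demand elasticities when translating producer costs into retail prices.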
Moreover, while there are some estimates of the costs of antibiotic resistance from both medical and agricultural sources, no estimates exist that directly link the human health costs of antibiotic resistance with antibiotics used in animal production. Nevertheless, studies that have examined the costs of antibiotic resistance from all sources have found a wide range of estimates, from millions to billions of dollars annually. For example, one recent study (2003) estimated that the health cost to society associated with resistance from only one antibiotic, amoxicillin, was $225 million per year. We discuss the eight studies we reviewed in reverse chronological order, from 2003 to 1999. Most examine restrictions on antibiotics in the swine industry, but a few look at the beef and poultry industries as well. All of the studies measure the economic impacts of antibiotic restrictions on domestic U.S. markets, except the WHO study of the antibiotic restrictions recently imposed by Denmark. Also, most studies estimate only domestic economic impacts, not impacts on international trade. In 2002, WHO convened an international expert panel to review, among other issues, the economic impact resulting from the Danish ban of antibiotics for growth promotion, particularly in swine and poultry production. As part of this effort, Denmark’s National Committee for Pigs estimated that the cost of removing antibiotic growth promoters in Denmark totaled about $1.04 per pig, or a 1 percent increase in total production costs. In the case of poultry, however, there was no net cost because the savings associated with not purchasing these antibiotics offset the cost associated with the reduction in feed efficiency. Components of these costs included excess mortality, excess feeding days, increased medication, and increased workload.
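The Danish per-animal estimates are built from simple component arithmetic: added costs minus the savings from no longer purchasing growth promoters. The sketch below illustrates that accounting; every component value is an illustrative assumption, since only the roughly $1.04-per-pig swine total and the near-zero poultry result appear in the text.

```python
# Sketch of the component arithmetic behind a per-animal net-cost estimate:
# added costs (excess mortality, extra feeding days, medication, workload)
# minus the savings from no longer buying growth-promoting antibiotics.
# All component values are illustrative assumptions, not the committee's figures.

def net_cost_per_animal(excess_mortality, extra_feed_days,
                        extra_medication, extra_labor, antibiotic_savings):
    """Net per-animal cost of removing antibiotic growth promoters."""
    added = excess_mortality + extra_feed_days + extra_medication + extra_labor
    return added - antibiotic_savings

# Swine: added costs exceed the purchase savings, leaving a net cost.
swine = net_cost_per_animal(0.30, 0.45, 0.24, 0.15, antibiotic_savings=0.10)

# Poultry: purchase savings roughly offset the feed-efficiency loss,
# so the net cost is close to zero, as the Danish committee found.
poultry = net_cost_per_animal(0.00, 0.05, 0.02, 0.00, antibiotic_savings=0.07)

print(f"swine: ${swine:.2f}/pig, poultry: ${abs(poultry):.2f}/bird")
```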
A subsequent study by Jacobsen and Jensen (2003) used these costs as part of the agriculture sector of a general equilibrium model to estimate the impact on the Danish economy of the termination of antibiotic growth promoters. The model used these cost assumptions in a baseline scenario that projects the likely development of the Danish economy to 2010. The results of the model indicated a small reduction in pig production of about 1.4 percent per year and an increase in poultry production of about 0.4 percent. The authors explain that the increase in poultry production occurred because of the substitutability of these meats in consumption. In addition, this research included estimates of the consequences of removing antibiotic growth promoters on the export market. The model showed that exports of pork were forecast to be 1.7 percent lower than they would have been with continued use of these growth promoters, while poultry exports would increase by about 0.5 percent. The authors explained that some costs associated with modifications to production systems were difficult to measure and were not included in the analysis, although they may have been substantial for some producers. They also stated that the analysis does not take into account the possible positive effect that the removal of antibiotic growth promoters may have had on consumer demand, both in the domestic and in the export markets. Moreover, they added that any costs must be set against the likely human health benefits to society. In 2003, Hayes and Jensen drew on a 1999 study by Hayes et al. of the potential economic impacts of a U.S. ban based on the ban in Sweden, described below, and on Denmark’s recent ban on feed-grade antibiotics to estimate the economic impacts of a similar ban in the United States.
In 1998, the Danish government instituted a voluntary ban on the use of antibiotics in pork production at the finishing stage, and in 2000 it banned antibiotics for growth promotion at both the weaning and the finishing stages. The results of the ban in Denmark, however, may be more applicable than the Swedish experience because, like the United States, Denmark is one of the largest exporters of pork and has somewhat similar production practices. The authors compared the econometric results of a U.S. baseline without a ban with projected results based on assumptions taken from the ban in Denmark. Many of the same technical and economic assumptions that were used in the Swedish study were also used for the impacts based on the Danish ban. For instance, the authors included a sort-loss cost of $0.64 per animal, a similar assumption for loss of feed efficiency, and decreases in piglets per sow. Other key assumptions and features unique to the study include the following: the use of only one case or scenario—a “most-likely” scenario—unlike the study based on the Swedish ban; increased costs of $1.05 per animal at the finishing stage and $1.25 per animal at the weaning stage; a vaccine cost of $0.75 per animal; and a capital cost of about $0.55 per animal. According to the study, a major economic impact in the U.S. pork market of a ban similar to the Danish ban would be a cost increase of about $4.50 per animal in the first year. Across a 10-year period, the total cost to the U.S. pork industry was estimated to be more than $700 million. With a lower level of pork production, retail prices would increase by approximately 2 percent. The authors conclude that a ban at the finishing stage would create very few animal health concerns, while a ban at the weaning stage would create some serious animal health concerns and lead to a significant increase in mortality.
They also note that, as happened immediately following the ban at the weaning stage in Denmark, the total use of antibiotics in the United States at this production stage may rise. Miller et al. (2003) used 1990 and 1995 National Animal Health Monitoring System (NAHMS) swine survey data to estimate the net benefit of antibiotics used for growth promotion to swine producers. The NAHMS database provides statistically valid estimates of key parameters related to the health, management, and productivity of swine operations in the United States. The authors used econometric methods to estimate the relationships between growth-promoting antibiotics and productivity measures, such as average daily weight gain (ADG) and feed conversion ratio (FCR), for grower/finisher pigs. Using these productivity measures, predictions on performance were then generated for an independent, medium-sized, midwestern farrow-to-finish pork producer in 1995. The performance figures were expressed in economic terms, such as profitability, using a swine enterprise budgeting model. The study includes the following key features and assumptions: The productivity measures estimated were ADG, FCR, and mortality rate (MR) during the grower/finisher stage of swine production. Explanatory variables included in the model were regional identifiers, size of operation, market structure variables, number of rations, mortality rate, number of days antibiotics were administered, number of antibiotics fed, and number of diseases diagnosed in the last 12 months, among others. The ADG and FCR equations were estimated jointly using the seemingly unrelated regression procedure. Because the theory as to an exact specification was unknown, the MR equation was estimated using a backward-stepwise linear regression. The authors estimated that the increase in annual returns above costs from antibiotics for a 1,020-head finishing barn was $1,612, or $0.59 per swine marketed.
This represents an improved profitability of approximately 9 percent of net returns in 2000 for Illinois swine finishing operations. The authors also found that there is substitutability between antibiotics as growth promoters and other production inputs (such as number of rations) that could reduce the negative influence of removing antibiotics. In an updated study, Miller et al. (2003) estimated the combined effects of antibiotics used for growth promotion (AGP) and antibiotics used for disease prevention (ADP) in pork production using the NAHMS 2000 swine survey. Specifically, the authors measured the productivity and the economic impacts of these antibiotics on grower/finisher pigs for individual swine producers. The authors evaluated four scenarios, using varying degrees of bans of both AGP and ADP: (1) a ban on AGP, (2) a ban on ADP, (3) a ban on both AGP and ADP, and (4) a limitation on AGP and ADP to levels that maximize production. These scenarios were chosen because antibiotics that are used for different purposes have different impacts on productivity, improving it on one dimension while possibly diminishing it on another. First, the authors estimated four pork productivity dimensions related to the use of antibiotics using an econometric model. Second, using the estimated productivity measures from the econometric model, they estimated economic impacts to pork producers for each antibiotic ban scenario using a spreadsheet farm budget model. The study includes the following key features and assumptions: Pork productivity was measured using four measures of productivity, including average daily weight gain, feed conversion ratio, mortality rate, and lightweight rate. These productivity measures were estimated using seemingly unrelated regression analysis and are modeled from the perspective of possible structural relationships among the measures. 
The study used the NAHMS 2000 survey, which provides the most recent data available to investigate productivity impacts and impacts on farm costs and profitability. Overall, the authors confirmed their earlier findings that a ban would likely cause substantial short-term losses to producers. However, decreasing the use of certain antibiotics to a more desirable level may be possible without major losses. For scenario 1, a total ban on AGP would cost producers $3,813 in profits annually. For scenario 2, a ban on ADP would slightly improve profits by a gain of $2,703 annually. For scenario 3, a ban on both AGP and ADP would lower producer profits by $1,128 annually. For scenario 4, where AGP and ADP are applied at levels where swine productivity is maximized, producers would gain $12,438 annually compared with no antibiotic use. The authors conclude that restrictions on classes of AGP, the amount of time antibiotics are fed, and restrictions on ADP may be implemented by producers without major losses. However, they also note that some time dimensions ignored in their study may be important and that their use of nonexperimental data requires careful interpretation. Brorsen et al. (2002) used a model similar to one developed by Wohlgenant (1993) to estimate the economic impacts on producers and consumers of a ban on antibiotics used for growth promotion in swine production. The authors used a model that allowed for feedback between beef and pork markets and measured changes in producer and consumer surplus resulting from shifts in both supply and demand. Moreover, the authors extended their two-commodity beef and pork model to include poultry. In their model, changes in production costs due to banning the use of antibiotics for growth promotion are measured indirectly by the net benefits from their use. The study includes the following key features and assumptions: The ban considered in this model is a complete ban on all antibiotics in feed.
The effects of using antibiotics for growth promotion were assumed to be from improvements in (1) feed efficiency over drug cost, (2) reduced mortality rate, and (3) reduced sort-loss at marketing. The authors assumed a $45.00 per hundredweight market price for hogs. All parameters (i.e., demand and supply elasticities) used to solve the model were based on other economic studies, except the parameter that represented the change in production costs. Once these were obtained, retail quantity, retail price, farm quantity, and farm price were determined simultaneously. An econometric model was used to obtain the economic benefit from the improvement in feed-to-gain conversions in swine production. The mortality benefit in swine was assumed to range from 0 percent, to 0.75 percent (most likely), to 1.5 percent. Net benefits of the use of antibiotics for growth promotion were estimated by summing the results of a simulation exercise based on the probability distributions of the three sources of economic benefits at the industry level. The authors estimated that economic costs to swine producers from a ban on antibiotics used for growth promotion would range from $2.37 per hog to $3.11 per hog, with an average cost of $2.76 per hog. For swine producers, the estimated annual costs would be approximately $153.5 million in the short run to $62.4 million in the long run. Estimated annual costs to pork consumers would increase by about $89 million in the short run to $180 million in the long run. Mathews, Jr. (2002) examined the economic effects of a ban on antibiotic use in U.S. beef production using two policy alternatives—a partial ban and a full ban. To estimate these effects, the author developed a series of economic models, including a firm-level, cost-minimization model that minimizes the cost of feeding cattle to final output weights for a base case, a full ban, and a partial ban (banning only selected antibiotics) scenario. 
Embedded in this model is a growth function that incorporates the interaction between the growth rate of cattle and feed efficiency. The firm-level effects were then aggregated across firms in a partial equilibrium framework to estimate national cattle supply, price, and value of production for the three scenarios. The study includes the following key features and assumptions: Variables included in the growth function were lagged average daily weight gain, feed efficiency, seasonal variables, and an interaction variable of average weight gain and feed efficiency. The growth model forms a “dynamic” link to the cost-minimization model by accounting for the impacts of recent feeding experiences. In the cost-minimization model, feed costs were minimized, subject to protein levels and other feed constraints. The model finds the minimum cost for feeding a steer to a final weight estimated from the embedded growth function. The resulting model allowed final cattle weights, feeding costs, and the number of cattle fed per year to vary, resulting in livestock supplies that are endogenous to the model. In the partial-ban scenario, substitute antibiotics were assumed to be functionally equivalent to, and twice as costly as, those in the base scenario. Data for the aggregate analysis included annual average all-cattle prices and commercial beef production for the period 1975 through 1990. A base scenario was estimated using parameter and final steer weight estimates from the growth model for each quarter over an 11-year period, from January 1990 through January 2001. Results of the partial-ban scenario indicated that aggregate annual income would decrease by nearly $15 million for producers, while annual consumer costs would increase by $54.7 million. For the full ban, a 4.2 percent decline in beef production would yield a 3.32 percent increase in the price of cattle, from $42.60 to $47.12 per hundredweight.
Also, the full ban translates into an annual consumer cost increase of $361 million. The author noted that the study did not take into account any effects of a ban or partial ban on trade in beef products. A study issued in 1999 by Hayes et al. at Iowa State University estimated the potential economic impacts of a ban on the use of antibiotics in U.S. pork production based on assumptions from a Swedish ban in 1986. To estimate baseline results, the authors used a simultaneous econometric framework of the U.S. pork industry that included several production and marketing segments: live inventory and production, meat supply, meat consumption, meat demand, and retail price transmission. The baseline results, or results with no change in antibiotic use, were compared to a range of estimates of a ban on antibiotics in pork production in the United States based on a set of technical and economic assumptions taken from the Swedish experience. These simulations included three different scenarios: a “most likely,” a “best-case,” and a “worst-case” scenario if the ban were to be implemented in the United States. The key features and assumptions of the model for the “most likely” case included the following: a 10-year projection period from 2000 to 2009 from a 1999 baseline, with deviations from the baseline in the projection period reflecting the technical and economic assumptions taken from the Swedish ban; the pork, beef, and poultry markets, although the model assumed no change in the regulation of antibiotics on beef and poultry; technical assumptions: feed efficiency for pigs from 50 to 250 pounds declines by 1.5 percent, piglet mortality increases by 1.5 percent, and mortality for finishing pigs increases by 0.04 percent. Also, the “most likely” case extends weaning age by 1 week, and piglets per sow per year decrease by 4.82 percent. 
veterinary and therapeutic costs would increase by $0.25 per pig, net of the cost for feed additives; additional capital costs would be required because of additional space needed for longer weaning times and restricted feeding, including $115 per head for nursery space and $165 per head for finishing space; an estimated penalty of $0.64 per head for sort-loss costs; and input markets, such as the cost of antibiotics, are exogenous, that is, not part of the modeling system. In their “most likely” scenario, the authors estimated that the effects of a ban on the use of antibiotics would increase production costs by $6.05 initially and $5.24 at the end of the 10-year period modeled. Because the supply of pork declines, however, net profit to farmers would decline by only $0.79 per head. Over a 10-year period, the net present value of forgone profits would be about $1.039 billion. For consumers, the retail price of pork increases by $0.05 per pound, or about $748 million per year for all consumers. The authors also cited four important limitations to their study: (1) the estimated impacts represent an “average” farm and may mask wide differences across farms; (2) technical evidence from the Swedish experience must be regarded with caution as an indicator of what might happen in the United States; (3) the model allows consumers to respond only to changes in the price of pork and does not take into account how such a ban would affect the prices of beef and poultry; and (4) there was no attempt to factor in the positive effects of such a ban on consumer willingness to pay for pork produced without the use of feed-grade antibiotics. The National Research Council (NRC) examined the economic costs to consumers of the elimination of all subtherapeutic use of antibiotics in a chapter of a 1999 report entitled The Use of Drugs in Food Animals: Benefits and Risks. 
Instead of measuring the consequences of eliminating antibiotics on farm costs and profits, NRC decided that a more viable alternative would be to measure costs to consumers in terms of the higher prices that would be passed on to them. According to NRC, this measurement strategy was followed for several reasons: changes in production costs do not necessarily translate into lower profits; depending on management practices, not all producers rely on these antibiotics to the same extent and would not all be equally affected by a ban; and some producers, for example those who produce for special niche markets, may actually benefit from such a ban. The study includes the following key features and assumptions: All cost increases are passed on to consumers in terms of percentage price changes. The model measures how much consumers would need to spend in order to maintain a similar level of consumption as before the ban. Consumption would not change as a result of a ban on antibiotics. Per capita costs are estimated as the product of three items: (1) the percentage increase in annual production costs, (2) retail prices, and (3) per capita annual retail quantity sold. Annual costs of a ban were estimated for four domestic retail markets—chicken, turkey, beef, and pork—as well as a total cost for all meat. NRC estimated that the average annual cost per capita to consumers of a ban on all antibiotic use would range from $4.84 to $9.72. On a commodity retail price basis, the change in price for poultry was lowest, from $0.013 per pound to $0.026 per pound; for pork and beef, price increases ranged from $0.03 per pound to $0.06 per pound. Total national additional costs per year for pork consumption ranged from $382 million to $764 million, depending on assumptions about meat substitutes. For all meat products combined, total consumer cost increases ranged from $1.2 billion to $2.5 billion per year. 
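NRC's per capita cost calculation is a straightforward product of three items, and can be restated directly. The function name and the example price and quantity below are illustrative assumptions, not NRC's actual inputs.

```python
def per_capita_annual_cost(pct_cost_increase, retail_price, per_capita_qty):
    """Annual per capita consumer cost of a ban, computed as the product of
    (1) the percentage increase in annual production costs (as a fraction),
    (2) the retail price, and (3) per capita annual retail quantity sold."""
    return pct_cost_increase * retail_price * per_capita_qty

# Hypothetical example: a 2% production-cost increase fully passed through on
# pork priced at $2.50 per pound, with 50 pounds consumed per person per year.
print(f"${per_capita_annual_cost(0.02, 2.50, 50):.2f}")  # $2.50
```

Multiplying a per capita figure of this kind by population, and summing across the four retail markets, gives national totals of the kind NRC reports.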
Finally, NRC noted that the reduction in profits and industry confidence that would result from such a ban may cause a reduction in research, and that society would lose the research benefits. Also, to determine whether this cost increase would be justified, the amount should be compared with the estimated health benefits. As part of the risk estimation outlined in Guidance for Industry #152, FDA developed a framework for evaluating the importance of an antibiotic to human medicine. FDA has ranked antibiotics as either critically important, highly important, or important. These rankings are based on five criteria, which are ranked from most (criterion 1) to least important (criterion 5):

1. The antibiotic is used to treat enteric pathogens that cause foodborne disease.
2. The antibiotic is the sole therapy or one of the few alternatives to treat serious human diseases or is an essential component among many antibiotics in the treatment of human disease.
3. The antibiotic is used to treat enteric pathogens in nonfoodborne disease.
4. The antibiotic has no cross-resistance within the drug class and an absence of linked resistance with other drug classes.
5. There is difficulty in transmitting resistance elements within or across genera and species of organisms.

Antibiotics that meet both of the first two criteria are considered by FDA to be critically important to human medicine. Drugs that meet either of the first two criteria are considered highly important to human medicine. Drugs that do not meet either of the first two criteria but do meet one or all of the final three criteria are considered important to human medicine. Of the 27 classes of animal drugs relevant to human health, 4 were ranked critically important, 18 highly important, and 5 important. The status of a particular antibiotic may change over time. For example, a drug may be considered to be critically important to human health because it is the sole therapy. 
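The three-tier ranking rule described above can be written out as a simple decision function. This is a sketch of the decision logic only; the function name and the "unranked" fallback for a drug meeting none of the criteria are assumptions, since the source does not say how such a drug would be labeled.

```python
def fda_importance(criteria_met):
    """Classify an antibiotic's importance to human medicine from the set of
    Guidance #152 criteria (numbered 1-5) that it meets."""
    met = set(criteria_met)
    if {1, 2} <= met:            # meets both of the first two criteria
        return "critically important"
    if met & {1, 2}:             # meets either of the first two criteria
        return "highly important"
    if met & {3, 4, 5}:          # meets one or more of the final three
        return "important"
    return "unranked"            # assumption: no tier stated for this case

print(fda_importance({1, 2, 4}))  # critically important
print(fda_importance({2}))        # highly important
print(fda_importance({4, 5}))     # important
```

Note that the tiers are checked in priority order, since a drug meeting both of the first two criteria also trivially meets "either" of them.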
Later, if new antibiotics become available to treat the same disease or diseases, the drug may be downgraded in its importance to human health. This appendix provides information on efforts to address antibiotic resistance associated with antibiotic use in animals for the United States and some of its key trading partners and competitors. For the United States, more detailed information on these activities is in the letter portion of this report. For the United States’ key trading partners and competitors, to the extent that information was available, we summarized the countries’ activities and described antibiotic resistance surveillance systems and antibiotic use data collection systems. In addition, table 3 presents information on the total amount of antibiotics sold or prescribed for use in animals for the United States and three trading partners and competitors for which this information was available. Specifically, it shows 2002 antibiotic sales data for the countries that we identified as having government data collection systems on antibiotic use and for which data were available. Although the United States does not have a government system, we included information collected by the Animal Health Institute for comparison. Total meat production is also shown to represent the size of the animal production industries in these countries. Overview of activities. In 1999, federal agencies formed the Interagency Task Force on Antimicrobial Resistance to address antibiotic resistance issues. In October 2003, FDA issued guidelines for assessing the safety of animal drugs (Guidance for Industry #152). FDA is conducting risk assessments of some antibiotics important in human medicine. Antibiotic resistance surveillance systems. FDA, CDC, and USDA collect information on antibiotic-resistant bacteria in humans, retail meat, and animals through the National Antimicrobial Resistance Monitoring System (NARMS). Antibiotic use data collection systems. 
The Animal Health Institute, a trade association representing veterinary pharmaceutical companies, publishes the only publicly available data on the amount of antibiotics sold annually for use in animals. The Animal Health Institute collects these data from its member companies, which represent about 85 percent of the animal drug sales in the United States. The data show the amount of antibiotics sold by antibiotic class, but certain classes are reported together to abide by company disclosure agreements. See table 3 for information on the amount of antibiotics sold in the United States during 2002. In addition, the United States collects some on-farm data through USDA’s National Animal Health Monitoring System (NAHMS) and Collaboration in Animal Health, Food Safety, and Epidemiology (CAHFSE) programs. Overview of activities. In 1998, Australia established the Joint Expert Technical Advisory Committee on Antibiotic Resistance to provide independent expert scientific advice on the threat to human health of antibiotic-resistant bacteria caused by antibiotic use in both animals and humans. Australia has begun to review the approved uses of antibiotics important in human medicine to determine if changes are needed. Australia’s review process includes performing a public health assessment and an efficacy assessment. Like the United States’ risk assessment approach, Australia’s public health assessment considers the hazard, exposure, and potential impact of the continued use of the antibiotic on public health. The efficacy assessment considers whether the antibiotic is effective in animals for the purpose claimed and whether the label contains adequate instructions. As of April 2003, Australia had completed its assessment of virginiamycin, a member of the streptogramin class, and was considering a recommendation to ban its use for growth promotion. In addition, as of March 2003, Australia was assessing the risk of the macrolide antibiotic class, including tylosin. 
Antibiotic resistance surveillance systems. The committee’s 1999 report recommended establishing a comprehensive surveillance system to monitor antibiotic-resistant bacteria in animals. As of March 2003, a strategy for developing an antimicrobial resistance surveillance system was being completed. Antibiotic use data collection systems. Australia uses import data to monitor the annual quantity of antibiotics used in animals because all of the antibiotics used in the nation are imported. The data, which include information on the quantity of antibiotics imported by antibiotic class and end use, are not usually released publicly. A potential problem with this data collection method is that importers are not always able to anticipate how producers will use the antibiotic. Overview of activities. Like the United States, Canada plans to do risk assessments of antibiotics important in human medicine and to make changes in approved antibiotic uses as appropriate based on these risk assessments. Canadian officials expect to initially focus on growth promotion uses of several antibiotic classes and antibiotics, including penicillins, tetracyclines, tylosin, and virginiamycin. Canada plans to use risk assessment methods similar to those used in the United States; however, Canada may also consider other factors, such as the benefits associated with antibiotic use. In addition, Canada is considering the adoption of a prescription requirement for all antibiotic uses in animals except growth promotion. Antibiotic resistance surveillance systems. The Canadian Integrated Program for Antimicrobial Resistance Surveillance, started in 2002 and designed to use resistance surveillance methods consistent with the United States’ NARMS, collects information on antibiotic resistance from the farm to the retail levels. Canada issued the first annual report from this surveillance system in March 2004. Antibiotic use data collection systems. 
Canada is integrating the collection of data on antibiotic use in humans and animals into its surveillance system and plans to use this information to support risk analysis and policy development. Collection of on-farm data on antibiotic use in animals through pilot projects is ongoing, and collection of data from pharmaceutical companies, importers, and distributors, such as feed mills and veterinarians, is planned. Overview of activities. In 1999, an EU scientific committee on antibiotic resistance recommended that the growth promotion use of antibiotics from classes that are or may be used in human medicine be banned. Later that year, the EU completed action on this recommendation and banned the use of these antibiotics in feed for growth promotion. The scientific committee also recommended that the four remaining antibiotics used for growth promotion be replaced with other alternatives. In 2003, the EU issued a regulation adopting this recommendation, which banned the use of these antibiotics as of January 1, 2006. In addition, Denmark, an EU member, ended the use of all antibiotics for growth promotion in 2000. Antibiotic resistance surveillance systems. Most EU members have a program to monitor antibiotic resistance, but the EU as a whole does not have a harmonized system that allows comparison of data across nations. A November 2003 directive from the European Parliament and the Council of the European Union set forth general and specific requirements for monitoring antibiotic resistance. Among other things, member countries must ensure that the monitoring system includes a representative number of isolates of Salmonella spp., Campylobacter jejuni, and Campylobacter coli from cattle, swine, and poultry. In particular, Denmark’s surveillance system, the Danish Integrated Antimicrobial Resistance Monitoring and Research Programme, monitors resistance in these and other bacteria in animals, meat, and humans. Antibiotic use data collection systems. 
The EU has proposed that its members collect data on antibiotic use in animals. EU countries’ efforts to collect this information are at varying stages of development. For example, while some EU countries are just developing programs to collect antibiotic use data, the United Kingdom and Denmark currently collect this information. The United Kingdom’s Veterinary Medicines Directorate collects data from veterinary pharmaceutical companies on the amounts of different antibiotics and other animal drugs sold in the United Kingdom. The directorate then separates these data into chemical groups, administration methods, and target species. For certain antibiotics that are sold for use in more than one species, it is not possible to determine the species in which they were used. However, the directorate is working to more accurately assign sales quantities to each species. See table 3 for information on the amount of antibiotics sold in the United Kingdom during 2002. Denmark collects extensive data on the use of antibiotics in animals. In particular, through its VetStat program, Danish officials can obtain data on all medicines prescribed by veterinarians for use in animals. This program provides detailed information on antibiotic use, such as the quantity used, class of antibiotic used, species, age of animal, and the purpose of use, as well as the disease the antibiotic was used to treat. In addition, VetStat allows researchers to calculate the average daily doses that animals receive of various antibiotics. See table 3 for information on the amount of antibiotics used in Denmark during 2002. Antibiotic resistance surveillance systems. Hong Kong has an antibiotic resistance surveillance system. We did not obtain additional information on this system. Antibiotic use data collection systems. Hong Kong has an antibiotic use data collection system. We did not obtain additional information on this system. Overview of activities. 
Japan is currently reviewing the use for growth promotion of antibiotics from classes that are also used in humans. According to an April 2004 report from the Office of the U.S. Trade Representative, the Japanese government has stated that these reviews will be based on science. Antibiotic resistance surveillance systems. Japan has an antibiotic resistance surveillance system. We did not obtain additional information on this system. Antibiotic use data collection systems. Japan has an antibiotic use data collection system. We did not obtain additional information on this system. Antibiotic resistance surveillance systems. In 2000 and 2001, FDA undertook a pilot study with Mexico to monitor the antimicrobial resistance of salmonella and E. coli isolates obtained from human samples. In September 2001, the pilot study was expanded into a 3-year cooperative agreement to include both human and animal monitoring. The primary objective of the agreement was to establish an antimicrobial resistance monitoring system for foodborne pathogens in Mexico comparable to the United States’ NARMS program. Overview of activities. New Zealand established an Antibiotic Resistance Steering Group primarily to coordinate a program to gather and analyze information on the use of antibiotics in feed (including antibiotics for growth promotion), assist in developing a policy concerning this use, and assess the potential transfer of resistant bacteria from animals to humans. New Zealand has completed its risk assessments of antibiotics for growth promotion and no longer allows the growth promotion use of any antibiotics that are related to antibiotics used in human medicine. New Zealand did not carry out a comprehensive risk analysis for any of the antibiotics being used for growth promotion because the available information was not sufficient. 
Instead, New Zealand used a consistent rationale, including the mechanisms and potential for antibiotic resistance and the potential for that resistance to be transferred from animals to humans, in assessing each antibiotic (or class, such as the macrolide class). Antibiotic resistance surveillance systems. New Zealand is working to implement a comprehensive antibiotic resistance surveillance program. According to a January 2003 antibiotic resistance progress report, New Zealand has programs to monitor specific pathogens in animals, but the programs do not gather information specific to antibiotic resistance. While the government informally monitors the antibiotic resistance of E. coli and Staphylococcus aureus, the program provides very limited data. Antibiotic use data collection systems. Since 2001, New Zealand has collected antibiotic sales data from a formal survey of pharmaceutical companies. The companies report the data voluntarily. Annual reports provide antibiotic sales statistics by antibiotic class, method of administration, type of use (including growth promotion), and animal species. The data are only indicative of use because antibiotics are used for multiple purposes, and it is impossible to know the exact use of all the antibiotics. New Zealand has considered changes to its data collection system to provide additional information. See table 3 for information on the amount of antibiotics sold in New Zealand during 2002. Antibiotic use data collection systems. The Korea Animal Health Products Association, an industry group, monitors the quantity of antibiotics produced and sold by its members. The data are available on a monthly basis and, at a minimum, provide total antibiotic use quantities by species, specific antibiotic, and antibiotic class. This appendix provides information available on the antibiotics that are frequently used on farms that produce feedlot cattle, swine, and broiler chickens in the United States. 
In 1999, USDA’s National Animal Health Monitoring System (NAHMS) collected data on antibiotic use in beef cattle raised in feedlots. Table 4 lists the antibiotics that at least 10 percent of feedlots used in feed or water or by injection, along with the most frequent purpose of use, when this information is available. NAHMS provided only limited information on how the antibiotics were administered, so this information is not included in the table. The table also presents information on FDA’s rankings of the importance of the antibiotic class in human medicine. (See app. III for further information on FDA’s ranking system.) For those antibiotics not found in these rankings, we listed them as not important. In particular, over half of the feedlots surveyed used chlortetracycline in feed or water, about one-third used tilmicosin to prevent disease, and over half used tilmicosin, florfenicol, and tetracyclines to treat disease. In addition, about one-third used cephalosporins, fluoroquinolones, and penicillins/amoxicillin to treat disease. However, the feedlots using these antibiotics do not administer them to all cattle. For example, although 42 percent of feedlots use antibiotics to prevent respiratory disease, only 10 percent of feedlot cattle receive antibiotics for this purpose. In 2000, NAHMS collected data on antibiotic use in swine. Table 5 lists the antibiotics that at least 10 percent of producers used in feed or water or by injection for either nursery-age or older swine, the most frequent method of administration for these antibiotics, and the most frequent purpose of use. The table also presents information on FDA’s ranking of the importance of the antibiotic class in human medicine. For those antibiotics not found in these rankings, we listed them as not important. In particular, about half of the producers surveyed used tylosin and chlortetracycline in feed. 
In addition, about one-third of the producers surveyed used a penicillin to treat disease and bacitracin to promote growth. However, the producers using these antibiotics do not administer them to all of their swine. USDA has not collected any data on antibiotic use in broiler chickens through NAHMS. However, a University of Arkansas study used data from a corporate database to track patterns of antibiotic use in broiler chickens from 1995 through 2000. This study focused on the use of antibiotics in feed to promote growth and to prevent disease. Over the period of the study, the percentage of production units using antibiotics in feed decreased, in part because antibiotics did not prove to be as cost-effective as other feed additives that promote growth. The study did not analyze data on antibiotics used in chickens for disease treatment. According to industry officials, producers seldom treat chickens for diseases. Table 6 lists the antibiotics identified by the study as being used by at least 10 percent of broiler production units and their purpose of use. Table 6 also presents information on FDA’s ranking of the importance of the antibiotic class in human medicine. For those antibiotics not found in these rankings, we listed them as not important. The following are our comments on the USDA letter, dated April 5, 2004. 1. We revised the report to include USDA’s concerns that additional funding would be needed to expand CAHFSE. 2. We clarified our discussion of some studies. References are cited in footnotes for studies discussed in the report. 3. We agree that the collection of both aggregate and detailed data on antibiotic use in animals is useful and that researchers need to know specifically how antibiotics are used in order to determine which of the uses is responsible for trends in antibiotic resistance. The report discusses both aggregate and detailed data. 
As USDA states, the report highlights the CAHFSE program, through which USDA is collecting specific, on-farm data on swine. In addition, the report discusses Denmark’s system, which collects detailed data on how antibiotics are used in animals. 4. We found that only a few studies have concluded that the risk is minimal, while many studies have concluded that there is a significant human health risk from the transference. 5. We revised the report to include information on research funded by USDA’s Cooperative State Research, Education, and Extension Service. 6. We clarified the report to reflect comments on specific studies. In addition, we clarified the report to indicate which results were from epidemiologic studies alone, and which results were from epidemiologic studies that included molecular subtyping techniques. In addition to those named above, Gary Brown, Diane Berry Caves, Diana Cheng, Barbara El Osta, Ernie Jackson, Julian Klazkin, Carolyn Feis Korman, Deborah J. Miller, Sudip Mukherjee, Lynn Musser, Roseanne Price, and Carol Herrnstadt Shulman made key contributions to this report. 
Antibiotic resistance is a growing public health concern; the use of antibiotics in animals raised for human consumption contributes to this problem. Three federal agencies address this issue--the Department of Health and Human Services' (HHS) Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC), and the Department of Agriculture (USDA). GAO examined (1) scientific evidence on the transference of antibiotic resistance from animals to humans and the extent of potential harm to human health, (2) agencies' efforts to assess and address these risks, (3) the types of data needed to support research on these risks and the extent to which the agencies collect these data, (4) the use of antibiotics in animals in the United States compared with its key agricultural trading partners and competitors, and (5) information on how use has affected trade. Scientific evidence has shown that certain bacteria that are resistant to antibiotics are transferred from animals to humans through the consumption or handling of meat that contains antibiotic-resistant bacteria. However, researchers disagree about the extent of harm to human health from this transference. Many studies have found that the use of antibiotics in animals poses significant risks for human health, but a small number of studies contend that the health risks of the transference are minimal. Federal agencies have expanded their efforts to assess the extent of antibiotic resistance, but the effectiveness of their efforts to reduce human health risk is not yet known. FDA, CDC, and USDA have increased their surveillance activities related to antibiotic resistance. In addition, FDA has taken administrative action to prohibit the use of a fluoroquinolone in poultry. FDA has identified animal drugs that are critically important for human health and begun reviewing currently approved drugs using a risk assessment framework that it recently issued for determining the human health risks of animal antibiotics. 
However, because FDA's initial reviews of approved animal drugs using this framework have focused on other drugs and have taken at least 2 years, FDA's reviews of critically important drugs may not be completed for some time. Although federal agencies have made some progress in monitoring antibiotic resistance, they lack important data on antibiotic use in animals to support research on human health risks. These data, such as the type and quantity of antibiotics used and the purpose for their use by species, are needed to determine the linkages between antibiotic use in animals and emerging resistant bacteria. In addition, these data can help assess human health risks from this use and develop and evaluate strategies for mitigating resistance. The United States and several of its key agricultural trading partners and competitors differ in their use of antibiotics in animals in two important areas: the specific antibiotics allowed for growth promotion and the availability of antibiotics to producers (by prescription or over the counter). For example, the United States and Canada allow some antibiotics important in human medicine to be used for growth promotion, but the European Union (EU) and New Zealand do not. Regarding over-the-counter sales of antibiotics, the United States is generally less restrictive than the EU. Antibiotic use in animals has not yet been a significant factor affecting U.S. international trade in meat and poultry, although the presence of antibiotic residues in meat has had some impact, according to government and industry officials. Instead, countries raise other food safety issues, such as hormone use and animal diseases. However, according to these officials, antibiotic use in animals may emerge as a factor in the future. They particularly noted that the EU could object to U.S. use of antibiotics for growth promotion as its member countries are phasing out that use.
Delivering more than 210 billion pieces of mail each year, USPS has a mission vital to the nation’s communications and commerce. To meet its statutory universal service obligation, which requires it to “serve as nearly as practicable the entire population of the United States,” USPS must “provide prompt, reliable, and efficient services to patrons in all areas” and “render postal services to all communities.” In selecting modes of transportation, USPS is required to “give highest consideration to the prompt and economical delivery of all mail.” Although USPS is authorized by law to receive appropriations for reimbursement of public service costs incurred in providing a maximum degree of effective and regular postal service nationwide, in communities where post offices may not be deemed self-sustaining, USPS has neither requested nor received such appropriations since 1982. USPS receives only minimal appropriations for reimbursement for providing free mail for the blind and overseas voting, which USPS refers to as “revenue foregone”; in fiscal year 2007, these appropriations represented less than 0.2 percent of its total revenues. USPS generated 99.8 percent of its total revenues from products and services, with mail revenues accounting for the vast majority (94.8 percent of total revenues). However, USPS faces an increasingly competitive environment. As some communications and payments have migrated to electronic alternatives, including the Internet, First-Class Mail, which historically has covered most overhead costs, has declined in volume, and more declines are expected. According to USPS, “The projected decline of First-Class Mail impacts the Postal Service’s ability to continue to finance the growing universal service network. This is the single greatest challenge facing the Postal Service.” Although Standard Mail (primarily advertising) is USPS’s largest class of mail and a key growth product, it is more price sensitive. 
Standard Mail volume has recently declined in the wake of postal rate increases and the economic downturn, and its future prospects are unclear as advertising expenditures continue to shift to the Internet. In this regard, a joint USPS-mailer work group recently reported that “Standard Mail must be delivered in a timely and consistent manner to the end customer according to published standards, in order to remain a viable growth product for its users and the Postal Service, and to remain competitive with alternative advertising media.” Standard Mail growth will be critical to offset rising costs, primarily rising compensation and benefits costs that have consistently represented nearly 80 percent of USPS’s expenses. USPS has restrained cost growth in recent years, in part through automation and other productivity initiatives that helped reduce the number of career employees from a peak of nearly 800,000 in September 1999 to fewer than 670,000 in September 2007. However, as USPS has recognized, continued productivity gains are needed in the face of the changing mail mix, sustained and evolving competition, and a challenging economic environment. USPS has recognized that given its workforce costs, continued work hour reductions are necessary to achieve productivity gains. The 2006 postal reform act generally limits rate increases for most mail to an inflationary price cap. The reform act also abolished the statutory mandate to break even financially over time. As a result, USPS generally cannot address financial losses with above-inflation rate increases, which underscores the need to remain financially viable by sufficiently growing revenues, restraining costs, or both. However, USPS recently reported that fiscal year 2008 revenues have not been covering costs, which have grown faster than the price cap. 
The PFP program includes quantitative corporate and unit indicators of performance and individual performance elements, both of which are used to rate PFP participants. According to USPS, the PFP program places emphasis on performance indicators that are objective and measurable. To this end, target levels of performance, expressed in quantitative terms, are established for the corporate and unit indicators, and PFP participants receive higher ratings as higher targets are achieved. In fiscal year 2008, 12 corporate indicators apply to all PFP participants, including measures of timely mail delivery, productivity, revenue, and net income, among other things. A total of 53 unit indicators apply to selected groups of participants, such as groups of postmasters and managers at various mail processing facilities, depending on their responsibilities and spans of control. Some unit indicators apply to most participants, such as the indicator of total operating expenses. Other indicators apply to relatively few participants, such as indicators of international mail delivery, which apply exclusively to managers at USPS International Service Centers. Besides being rated on results for corporate and unit indicators, each PFP participant is rated on individual performance elements that vary depending on the participant group and, within some groups, are tailored to each participant. Some individual performance elements have target levels of performance defined by narrative standards that are centrally established by USPS. For example, EAS postmasters have two individual performance elements that are defined by narrative standards: (1) fiscal management and (2) leadership and communication. Alternatively, other individual performance elements may be selected from a predefined list and then defined more specifically with target performance levels, based on a discussion that involves the participant and the participant’s rater. 
For example, some individual performance elements for a field operations manager must be selected from a list, which includes, among other things, operational productivity, the rate of scanning barcodes on mail pieces, and overtime usage. If an individual performance element involving operational productivity is selected, it is then defined with target performance levels for specific mail processing, delivery, maintenance, and customer service operations, depending on the responsibilities of the field operations manager. Corporate and unit indicators are weighted to reflect organizational priorities. More heavily weighted indicators play a larger role in determining the overall PFP rating, while less heavily weighted indicators play a smaller role. Indicator weights can vary substantially, depending on the indicator and the participant’s position, so each indicator makes a different contribution to the overall PFP rating and the resulting salary adjustments and any lump sum awards. USPS establishes 15 target performance levels for each corporate and unit indicator. As more challenging targets (i.e., higher levels of performance) are reached, the indicator increases the overall PFP rating and the associated PFP award. Thus, indicator targets create incentives for PFP participants to maximize results for each indicator. Targets for some indicators are based on actual results achieved for the current fiscal year (e.g., the percentage of a specified type of mail delivered on time), while others are based on year-to-year improvement (e.g., the reduction in formal equal employment opportunity complaints). In some cases, targets are based on the USPS budget. For example, unit indicator targets are defined for total operating expenses relative to the final budget. To the extent that operating expenses are reduced below the budgeted level, higher target levels are achieved. 
These targets can be adjusted by various levels of management throughout the fiscal year, depending on numerous factors, such as changes in USPS’s overall financial condition, increases in fuel prices, changes in local mailing volumes, and unexpected local expenses, among other things. Corporate and unit indicators are measured against targets at various levels of geographic aggregation, depending on the indicator and the participant’s group. For example, some corporate indicators are measured at the national level, such as indicators of productivity, revenue, and net income. Other indicators are measured at different geographic levels. For example, for a postmaster of a small post office, the unit’s total operating expense indicator is defined as the total expenses of that post office. For a district executive, the unit operating expense indicator is defined as the total expenses of the entire district. In some instances, USPS permits “mitigation” adjustments to the data used to measure achievement against targets. Some individual mitigation adjustments are intended to take into account events that are outside the control of the participant, such as a fire that results in the temporary suspension of a post office’s operations. Other mitigation adjustments are processed in batches for multiple units and participants, such as adjustments that were made after postal operations were disrupted by Hurricane Katrina. USPS has established a structured process for administering the PFP program. Each participant is assigned a rater, who is generally the participant’s immediate supervisor. At the beginning of the fiscal year, the rater is required to discuss PFP indicators and targets with the participant, including goals for corporate and unit indicators and individual performance elements. During the year, a midyear PFP review is used for the participant to record accomplishments to date, and the rater meets with the participant to review progress toward PFP targets. 
At the end of the year, the participant records accomplishments, and the rater meets with the participant and rates the participant on individual performance elements. USPS then calculates the overall PFP rating for each participant based on the results of corporate and unit indicators and ratings for individual performance elements; this rating is used to determine adjustments to the participant’s salary and any lump sum award. The overall PFP rating is used to determine salary increases and any lump sum awards based on separate schedules that apply to EAS and PCES participants. First, for each participant, an overall rating is calculated based on the weighted outcomes for corporate and unit indicators and individual performance elements. Since each indicator and individual performance element produces an outcome ranging from 1 to 15, the overall rating also ranges from 1 to 15. The rating is rounded to the nearest whole number for the purpose of determining the PFP award. For EAS participants, all PFP awards are in the form of percentage increases to their salaries. For fiscal year 2008, the PFP award can range from 0 to 12 percent of the EAS participant’s salary, as shown in figure 1. For PCES executives, PFP awards take the form of salary increases and lump sum awards. Salary increases depend on the overall PFP rating, as well as each executive’s current salary relative to the maximum of his or her salary range, as shown in table 1. However, no salary increases are converted to lump sum awards, as they may be for EAS participants. In addition to a salary increase, a PCES executive may receive a PFP lump sum award that is based on his or her overall rating. This lump sum award is paid as a percentage of the executive’s salary, as shown in table 2, for individuals with an overall rating of 4 and above, which is considered to be a minimum threshold for a lump sum award. 
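The rating and award mechanics described above can be sketched in a short example. The indicator weights, outcome scores, and the EAS award schedule below are hypothetical illustrations, not USPS's actual values; the actual schedules appear in the report's figure 1 and tables 1 and 2.

```python
def overall_pfp_rating(outcomes):
    """Combine weighted indicator outcomes (each scored 1-15) into an
    overall rating, rounded to the nearest whole number."""
    total_weight = sum(weight for _, weight in outcomes)
    weighted = sum(score * weight for score, weight in outcomes) / total_weight
    return round(weighted)

# Hypothetical (score, weight) pairs for corporate/unit indicators
# and individual performance elements for one participant
outcomes = [
    (10, 0.16),   # total unit expenses
    (12, 0.056),  # national productivity
    (8, 0.22),    # service-related indicators (aggregated)
    (9, 0.564),   # remaining indicators and individual elements
]
rating = overall_pfp_rating(outcomes)

# Hypothetical EAS award schedule: salary-increase percentage by rating,
# spanning the report's stated 0 to 12 percent range
eas_award_pct = dict(zip(range(1, 16),
                         [0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 11.5, 12]))
print(rating, eas_award_pct[rating])
```

A rating of 9 under this illustrative schedule would yield a 7 percent salary increase; the key point is that the weighted 1-to-15 outcomes collapse to a single whole-number rating that indexes the award table.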
Average PFP awards as a percentage of salary for EAS and other non-PCES participants are shown in figure 2, from fiscal year 2004—the first year of the current PFP program for EAS participants—through fiscal year 2007. Average PFP awards for PCES participants are shown in figure 3, from fiscal year 2003—the first year of the current PFP program for PCES participants—through fiscal year 2007. Overall PFP ratings primarily depend on results for corporate and unit indicators related to USPS’s strategic goals of increasing efficiency, improving service, and generating revenue. Collectively, these indicators are weighted so that they account for two-thirds (66 percent) of the PFP rating for the average PFP participant in fiscal year 2008 (see fig. 4). Figure 4 shows that for fiscal year 2008, results for efficiency-related indicators, which are corporate and unit indicators, such as USPS’s overall productivity and total unit expenses, make up 27 percent of the PFP rating for the average participant. Results for service-related indicators, such as corporate and unit indicators for timely delivery of different types of mail, represent 22 percent of the average rating. Results for corporate and unit revenue-generation indicators, such as national and unit revenues, account for 17 percent of the average rating. An additional 10 percent of the rating consists of results for corporate and unit indicators related to USPS’s strategic goal of creating a more customer-focused culture. The remaining 24 percent of the rating reflects the results for individual performance elements, such as oral communication and other quantitative indicators, some of which were tailored to the individual. USPS officials have stated that indicators are weighted to reflect their relative importance to accomplishing USPS’s strategic goals, as well as their applicability to individual positions based on the individual’s responsibilities and span of control. 
According to USPS, the PFP program thereby recognizes and rewards individual performance that improves corporate and unit performance, particularly in high-priority areas. Consistent with this approach, some indicators are more heavily weighted than others. Among efficiency-related indicators, two indicators make the largest contribution to the overall PFP rating: total unit expenses (16 percent of the overall rating) and national productivity (5.6 percent of the rating) (see fig. 5). The 22 other efficiency-related indicators account for 5 percent of the overall rating, in part because some of these indicators measure results for specific USPS operations and, thus, are applicable to relatively few PFP participants. However, these indicators can have a significant weight for the participants they apply to. Among service-related indicators, the 13 indicators measuring timely delivery of the various mail types account for 16.4 percent of the overall rating. The 10 other service-related indicators account for 5.4 percent of the rating. Among revenue-generation indicators, the two most heavily weighted indicators are unit retail revenue (e.g., revenue from individual pieces of mail deposited at a post office), which represents 7.7 percent of the overall rating, and national revenue, which represents 5.7 percent of the rating. Five other revenue-generation indicators account for 3.9 percent of the overall rating. The weight of PFP indicators varies considerably by participant group, based on the responsibilities and spans of control of various managerial and executive positions. For example, for the 14,754 full-time postmasters in EAS levels 11 through 16, who generally head small post offices, 33 percent of the overall PFP rating is based on the total unit expenses indicator (see fig. 6). In contrast, this indicator accounts for 12 percent of the rating for the 2,365 postmasters in EAS levels 21 through 26 (see fig. 7), who generally head larger post offices. 
The overall rating of postmasters in EAS levels 21 through 26 is more dependent on a variety of other indicators related to efficiency, timely mail delivery, and revenue generation. Additional examples of how indicator weights vary for participants in different positions include the following: The retail revenues indicator is most heavily weighted for upper-level EAS postmasters. This indicator accounts for 35 percent of the overall PFP rating for the 6,853 postmasters in EAS levels 18 through 20 and 28 percent of the rating for the 2,365 postmasters in EAS levels 21 through 26; it makes up 5.5 percent of the rating for the 14,754 postmasters in EAS levels 11 through 16 and does not factor into the overall PFP rating for the 1,126 part-time EAS postmasters of small post offices (i.e., Cost Ascertainment Grouping levels A through E). To put the use of this indicator into context, USPS is looking to generate revenues through postmaster and other employee outreach to households and small businesses and has multiple programs for outreach to small business customers to promote the convenience and value of postal services. Three indicators related to equal employment opportunity (EEO) account for 35 percent of the overall PFP rating for the 167 managers with responsibilities in this area. These indicators measure outcomes of EEO complaints, including the percentage of informal complaints that become formal complaints, the number of formal complaints, and the processing time for complaints that are mediated. These indicators support USPS’s emphasis on improving EEO processes and processing EEO complaints in a timely manner, and USPS classified these indicators as related to its strategic goal of creating a more customer-focused culture. USPS has provided training to supervisors and managers on the importance of EEO, open communication, and the benefits of resolving complaints at the lowest possible level. 
Various unit indicators apply to the 13,458 EAS field managers who work in the mail processing area, such as indicators of the efficient use and maintenance of mail processing equipment. These indicators support USPS’s efforts to improve efficiency and service, and for some field managers, represent 21 percent of their rating. Other mail processing indicators measure the scanning of barcodes on mail containers and equipment used in mail processing operations—an activity that is critical to USPS’s efforts to track mail, thereby improving service and efficiency. As USPS implements the postal reform law’s requirements for measuring and reporting its delivery performance for all market-dominant products, which collectively make up nearly 99 percent of mail volume, USPS will have opportunities to incorporate new indicators into its PFP program, notably for Standard Mail and bulk First-Class Mail. PFP indicators of timely delivery apply to only some types of mail because, as we reported in July 2006, USPS measures timely delivery for less than one-fifth of mail volume, with no representative measures for Standard Mail (48.8 percent of volume), bulk First-Class Mail (25.3 percent of volume), Periodicals (4.1 percent of volume), and most types of Package Services (0.5 percent of volume). However, in December 2006, Congress enacted postal reform legislation that requires USPS to measure and report to the Postal Regulatory Commission on the delivery performance of market-dominant products, which include mail such as Standard Mail, bulk and single-piece First-Class Mail, and Periodicals. USPS is in the process of implementing new delivery performance measurement systems for market-dominant mail types that are not currently being measured—such as Standard Mail, bulk First-Class Mail, and Periodicals. Together, these three mail types constitute 78 percent of mail volume, including 49 percent for Standard Mail, 25 percent for bulk First-Class Mail, and 4 percent for Periodicals. 
USPS has recognized that the successful implementation of these new measurement systems will depend, in part, on mailers’ barcoding mail and containers, as well as providing electronic information on mailings. USPS expects these activities to become more widespread over the next several years. Once such systems are fully implemented and mailers’ participation is sufficient to generate representative data, USPS will have the opportunity to incorporate new delivery performance indicators into its PFP program. Such action would be consistent with the approach USPS has taken in recent years to incorporate new performance indicators into its PFP program. In addition, the External First-Class Measurement System (EXFC), which is incorporated into the PFP program to measure the timely delivery of single-piece First-Class Mail, has not been a systemwide indicator for this type of mail, in part because EXFC has measured delivery performance for mail deposited in collection boxes only in selected areas of the country. USPS is expanding EXFC coverage to include nearly all geographic areas. According to a senior USPS official, as EXFC coverage is expanded in fiscal year 2008, the additional data are being incorporated into the fiscal year 2008 indicators for single-piece First-Class Mail. This development is consistent with USPS’s actions in the past to implement delivery performance measurement systems for Parcel Select and some types of International Mail, establish targets, identify opportunities to improve service, and incorporate the measurement data into the PFP program to hold managers accountable for results. These actions have been credited with improving timely delivery performance for these types of mail, both of which operate in a highly competitive marketplace. 
To put these developments into context, in 2006, USPS said that its goal of improving service—which continues to be one of its primary goals—is supported by a “balanced scorecard” that uses service performance metrics for the mail that is measured to support personal and unit accountability. USPS noted that goals for these metrics—which include delivery performance indicators, as well as operational indicators that USPS said are critical to on-time service performance—were incorporated into the PFP program. We have agreed with USPS’s focus on improving service and holding its managers accountable for results but noted in 2006 that USPS has not yet achieved its aim of a “balanced scorecard” for delivery performance because its delivery performance indicators cover less than one-fifth of mail volume, and these indicators do not cover Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services mail. We observed that this gap in coverage has impeded USPS’s potential for holding its managers accountable for the delivery performance of all types of mail and for balancing increasing financial pressures with the need to maintain quality delivery service. In 2007, the Chairman of the USPS Board of Governors noted in congressional testimony, “To improve service, we need better metrics on performance. As George Mason University President Alan Merten says, ‘What gets measured gets better.’” The delivery performance indicators that USPS has implemented and incorporated into PFP incentives have been credited with stimulating improved service. For example, USPS created delivery standards and indicators for Parcel Select service in 1999, which it then incorporated into PFP incentives. In September 2007, the Deputy Postmaster General cited USPS’s delivery performance for Parcel Select as an example of substantial improvement resulting from measuring and building results into its PFP program, thereby holding managers accountable. 
To fulfill its mission of providing universal postal service, USPS is required to provide prompt mail delivery throughout the nation. USPS can help improve delivery service by incorporating new delivery performance indicators for market-dominant products that represent most mail volume into its PFP program. Incorporating new delivery indicators would hold postal managers accountable for results. We recognize that incorporating such indicators would depend on successful implementation of the new measurement systems—which will depend not only on USPS but also on mailers, who must barcode the mail and provide necessary information in electronic format, among other things. It will take time to implement new delivery performance measurement systems at a level that permits meaningful performance measurement and incorporation into the PFP program. Thus, over time, USPS will have an opportunity to incorporate new delivery performance indicators into its PFP program—such as indicators of timely delivery for Standard Mail and bulk First-Class Mail—to produce a more balanced scorecard of PFP indicators. As USPS has recognized, what gets measured gets better, and PFP indicators help drive performance improvement. We are making one recommendation that the Postmaster General incorporate new delivery performance indicators into the PFP program—such as indicators that cover Standard Mail and bulk First-Class Mail—once the necessary measurement systems are successfully implemented, including the actions that mailers must take to permit meaningful performance measurement. USPS provided written comments on a draft of this report in a letter dated August 4, 2008, from the Senior Vice President of Operations and the Vice President of Employee Resource Management. USPS’s comments are summarized below and the letter is reproduced in appendix III. In separate correspondence, USPS also provided technical comments, which we incorporated, as appropriate. 
USPS concurred with our recommendation and said it was committed to incorporating new delivery performance measures into its PFP program. USPS noted that in its June 2008 response to Congress regarding the Postal Accountability and Enhancement Act, USPS identified implementing expanded measurement systems for single-piece First-Class Mail, new systems for bulk First-Class Mail, Standard Mail, Periodicals, and bulk Package Services mail and stated that implementation of these systems will continue through fiscal year 2009. USPS agreed with our draft report that successful implementation of new measurement systems will depend, in part, on mailers barcoding mail and containers, as well as providing electronic information on mailings. USPS said that in addition to expanding measurement systems for its market-dominant products during fiscal year 2009, it will also develop historical data to assist with the creation of future performance targets. USPS also provided comments on its PFP program, stating that the program’s approach has been responsible for substantial performance improvements and is consistent with past efforts to ensure the proper balance of performance indicators. We are sending copies of this report to the Chairman of the Senate Committee on Homeland Security and Governmental Affairs; the Ranking Minority Member of the Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Minority Member of the House Committee on Oversight and Government Reform; the Chairman and Ranking Minority Member of the Subcommittee on Federal Workforce, Postal Service, and the District of Columbia, House Committee on Oversight and Government Reform; the Chairman of the USPS Board of Governors; the Postmaster General; the USPS Inspector General; and other interested parties. We also will provide copies to others on request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at herrp@gao.gov or (202) 512-2834. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. U.S. Postal Service (USPS) employee types covered: Headquarters (HQ) and HQ-related EAS employees; Attorneys on the Attorney Compensation Schedule; other HQ and HQ-related EAS; Postal Career Executive Service (PCES); and PCES field executives (including PCES Postmasters). Our objectives were to (1) describe the key features of the U.S. Postal Service’s (USPS) pay for performance (PFP) program, (2) provide information on the weight of the PFP program’s performance indicators in determining participants’ ratings, and (3) assess opportunities for USPS to incorporate new indicators of delivery performance into its PFP program. To address these objectives, we obtained documentation from USPS on its PFP program and interviewed USPS officials responsible for the program. To assess opportunities for USPS to incorporate new delivery performance indicators into its PFP program, we also obtained documentation on USPS’s plans to implement new delivery performance measurement systems. We primarily based our assessment on applicable laws—such as laws related to USPS’s statutory mission of providing prompt, reliable, and efficient postal services to patrons in all areas at reasonable rates and statutory reporting requirements related to USPS’s delivery performance—as well as on interviews with senior USPS officials. We also developed assessment criteria from our past work on other agencies’ PFP programs and best practices used by high-performing organizations. 
We conducted a data reliability assessment of USPS’s PFP information and determined that the information was sufficiently reliable for the purposes of our report. Our assessment was based on a review of the documentation and data provided, comparing the consistency of information provided by multiple sources and in multiple data files; interviews with USPS officials to discuss the documentation and data, including how the data were developed; and follow-up questions to obtain further information. We conducted this performance audit from October 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the above individuals, Gerald P. Barnes (Assistant Director), Elizabeth Eisenstadt, Brandon Haller, David Hooper, Kenneth E. John, Belva Martin, Laura Shumway, and Crystal Wesco made key contributions to this report.
The U.S. Postal Service (USPS) pay for performance (PFP) program for managers includes quantitative performance indicators. PFP ratings are the basis for salary increases and lump sum awards for nearly 750 Postal Career Executive Service (PCES) executives and about 71,700 other participants, mostly Executive and Administrative Schedule (EAS) employees. GAO was requested to provide information about USPS's PFP system. This report (1) describes the key features of USPS's PFP system, (2) provides information on the weight of its performance indicators in determining PFP ratings, and (3) identifies opportunities for USPS to incorporate delivery performance indicators into its PFP system. GAO obtained USPS documents and data, interviewed USPS officials, and primarily based its assessment on laws related to timely delivery and interviews with senior USPS officials. Key features of the PFP program are quantitative corporate and unit indicators of performance and individual performance elements that are used to rate participants and provide the basis for awards. Quantitative performance targets are established for corporate and unit indicators. Corporate indicators apply to all participants and include measures of timely delivery, productivity, revenue, and net income, among others. Unit indicators apply to selected groups of participants and vary according to the groups' responsibilities and span of control. Individual performance elements are tailored to the participant group and, within some groups, to individuals. Individual performance elements may be defined by narrative standards or may be quantitative indicators defined with specific target performance levels. The overall PFP rating is based on results of corporate and unit indicators and individual performance elements and is used to determine the salary adjustment and any lump sum award. 
PFP indicators related to three USPS strategic goals--increasing efficiency, improving service, and generating revenues--collectively account for two-thirds of the average participant's rating (see fig.). However, indicator weights vary considerably by participant group, based on the responsibilities and span of control of various positions. As USPS implements requirements of the postal reform law for measuring delivery performance, it will have opportunities to incorporate new indicators into its PFP program, notably for timely delivery of Standard Mail (49 percent of mail volume in fiscal year 2007) and bulk First-Class Mail (25 percent of volume). Once new delivery performance measurement systems are fully implemented and mailers' participation is sufficient to generate representative data, USPS will be able to incorporate new delivery performance indicators into its PFP program. These new indicators would create a more "balanced scorecard" that uses service performance metrics for the mail that is measured to support personal and unit accountability.
FHA’s single-family mortgage programs have played a prominent role in mortgage financing in the wake of the 2007-2009 financial crisis, the housing downturn, and the contraction of the conventional mortgage market. In 2012, FHA insured about $227 billion in single-family mortgages, and the overall insurance portfolio was about $1.1 trillion. The Omnibus Budget Reconciliation Act of 1990 required HUD to take steps to ensure that the insurance fund attained a capital ratio of at least 2 percent by November 2000 and maintained at least that level thereafter. The capital ratio is the fund’s economic value divided by the insurance-in-force (outstanding insurance obligations). The act also required an annual independent actuarial review of the economic net worth and soundness of the insurance fund. The annual actuarial review is now a requirement in the Housing and Economic Recovery Act of 2008, which also requires an annual report to Congress on the results of the review. Under the Federal Credit Reform Act of 1990 (FCRA), FHA and other federal agencies must estimate the net lifetime costs—known as credit subsidy costs—of their loan insurance or guarantee programs and include the costs to the government in their annual budgets. Credit subsidy costs represent the net present value of expected lifetime cash flows, excluding administrative costs.insurance premiums) exceed expected cash outflows (such as insurance claims), a program is said to have a negative credit subsidy rate and generates offsetting receipts that reduce the federal budget deficit. When the opposite is true, the program is said to have a positive credit subsidy rate—and therefore requires appropriations. Generally, agencies must produce annual updates of their subsidy estimates—reestimates—on the basis of information about actual performance and estimated changes in future loan performance. 
FCRA recognized the difficulty of making credit subsidy estimates that mirror actual loan performance and provides permanent and indefinite budget authority for reestimates that reflect increased program costs. Upward reestimates increase the federal budget deficit unless accompanied by reductions in other government spending or an increase in receipts. In recent years, HUD’s voucher program annually helped provide affordable rental housing to about 2 million households with very low or extremely low incomes. Approximately 2,400 state and local housing agencies administer the voucher program on HUD’s behalf. Under the program, an assisted household pays 30 percent of its monthly adjusted income or the housing agency-established minimum rent—up to $50—toward its monthly rent. The remainder of the rent—the difference between (1) the lesser of the unit’s gross rent (rent plus utilities) or a local “payment standard” and (2) the household’s payment—is paid through a HUD-subsidized “voucher.” The payment standard is based on the HUD-determined fair market rent for the locality, which HUD annually estimates for metropolitan and nonmetropolitan areas. Housing agencies can set payment standards (that is, pay subsidies) between 90 and 110 percent of the fair market rent for their areas. Each year, Congress appropriates funding for subsidies for renewal (existing) and incremental (new) vouchers and administrative expenses. HUD then allocates the program funding to housing agencies, which are expected to use all allocated subsidy funding for authorized program expenses. However, if housing agencies’ allocated amounts exceed the total cost of their program expenses in a given year, their unused subsidy funds must be maintained in subsidy reserve accounts. HUD also pays administrative fees to housing agencies based on the number of units leased (vouchers used) as of the first of each month.
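The split between the household's payment and the HUD-subsidized voucher described above can be sketched as follows; the household figures are illustrative, and the sketch ignores utility allowances and other adjustments:

```python
# Illustrative voucher arithmetic; figures are hypothetical and the
# sketch omits utility allowances and other program adjustments.

def tenant_payment(monthly_adjusted_income, minimum_rent=50.0):
    """Household pays 30 percent of monthly adjusted income or the
    agency-set minimum rent (up to $50), whichever is greater."""
    return max(0.30 * monthly_adjusted_income, minimum_rent)

def voucher_subsidy(gross_rent, payment_standard, monthly_adjusted_income):
    """Subsidy = lesser of gross rent or the local payment standard,
    minus the household's payment."""
    covered = min(gross_rent, payment_standard)
    return max(covered - tenant_payment(monthly_adjusted_income), 0.0)

# Hypothetical household: $900/month adjusted income, $1,000 gross rent,
# payment standard set at 100 percent of a $950 fair market rent.
print(tenant_payment(900))              # 270.0
print(voucher_subsidy(1000, 950, 900))  # 680.0
```

Because the subsidy is capped by the payment standard, a household renting above that standard absorbs the difference itself, which is why payment-standard levels (90 to 110 percent of fair market rent) matter for affordability.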
As with subsidy funding, if the appropriated amount does not fully cover agencies’ fees, HUD will reduce the amount of funding each housing agency receives to fit within the appropriated amount. The voucher program is not an entitlement program; thus, the amount of budget authority Congress provides through annual appropriations limits the number of households that the program can assist. Historically, appropriations for the voucher program (or for other housing programs) have not been sufficient to assist all households that HUD identified as having worst-case housing needs—households with very low incomes that pay more than 50 percent of their incomes in rent, live in substandard housing, or both. HUD implemented the MTW demonstration program in 1999. As of February 2013, 35 housing agencies were participating. To put in place the innovations intended under the program’s authorizing legislation, agencies may request waivers of certain provisions in the United States Housing Act of 1937, as amended. For example, housing agencies may combine the funding they are awarded annually from different programs—such as public housing capital funds, public housing operating funds, and voucher funds—into a single, authoritywide funding source. The act that created the program requires participating agencies to address three purposes and meet five requirements. The purposes are to (1) reduce costs and achieve greater cost-effectiveness in federal housing expenditures, (2) give families with children incentives to obtain employment and become self-sufficient, and (3) increase housing choices for low-income families.
In making these changes, MTW agencies must (1) serve substantially the same total number of eligible low-income families that they would have served had funding amounts not been combined; (2) maintain a mix of families (by family size) comparable to those they would have served without the demonstration; (3) ensure that at least 75 percent of households served are very low income; (4) establish a reasonable rent policy to encourage employment and self-sufficiency; and (5) assure that the housing provided meets HUD’s housing quality standards. The insurance fund’s capital ratio dropped sharply in 2008 and fell below the statutory minimum in 2009, when economic and market developments simultaneously reduced the fund’s economic value (the numerator of the ratio) and increased the insurance-in-force (the denominator of the ratio). According to annual actuarial reviews of the insurance fund, the capital ratio fell from about 7 percent in 2006 to 3 percent in 2008 and below 2 percent in 2009 (see fig. 1). In 2012, the ratio fell below zero to negative 1.44 percent. In its November 2012 report to Congress, HUD cited several reasons for the declines from 2011 to 2012. These included the following: First, the estimates of house price appreciation for the 2012 actuarial study were significantly lower than those used for 2011. The difference accounted for an estimated $10.5 billion reduction in the value of the insurance fund compared with the actuary’s 2011 projection of what the fund’s economic value would be at the end of 2012. Second, the continued decline in interest rates caused a substantial loss of revenue. Premium revenues from an existing portfolio decline when more borrowers pay off their mortgages to refinance into lower rates. The capital ratio calculation does not include borrowers who refinance into new FHA-insured loans.
In addition, actuarial projections include higher claim expenses when interest rates stay low because borrowers who are unable to refinance become more willing to default. The effects of continued low interest rates resulted in a reduction of $8 billion in the estimated economic value of the insurance fund (versus the previous year’s projections). Third, FHA directed the actuary to adjust the way losses from defaulted loans and reverse mortgages were reflected in the economic value of the insurance fund. This resulted in an estimated $10 billion reduction to the economic value, compared with the 2011 projections. As the capital ratio declined, the insurance fund’s condition also worsened from the federal budgetary perspective. FHA annually estimates the subsidy costs of new activity for its loan insurance program and also reestimates, or annually updates, prior subsidy cost estimates. Historically, FHA estimated that its loan insurance program was a negative subsidy program. On the basis of these estimates, FHA accumulated substantial balances in a capital reserve account, which represents amounts in excess of those needed for estimated credit subsidy costs and helps cover reestimates reflecting unanticipated increases to those costs (such as higher-than-expected claims). Funds needed to cover estimated subsidy costs are accounted for in the insurance fund’s financing account. In recent years, FHA has transferred billions of dollars annually from the capital reserve account to the financing account, reflecting increases in estimated credit subsidy costs (upward subsidy reestimates). As a result, balances in the capital reserve account fell dramatically, from $19.3 billion at the end of 2008 to an estimated $3.3 billion at the end of 2012 (see fig. 2). At the end of 2012, the financing account held approximately $35.1 billion. 
If the capital reserve account were depleted by additional upward reestimates, FHA would need to draw on permanent and indefinite budget authority to have sufficient reserves for all future insurance claims on its existing portfolio. The President’s budget for 2013 contained a $9.3 billion upward reestimate in FHA’s credit subsidy costs for the insurance fund. The budget indicated that the reestimate would deplete FHA’s capital reserve account in 2012, potentially causing FHA to draw on $688 million in permanent and indefinite budget authority. However, according to FHA, the agency ultimately did not need to draw on this authority because of premium increases and higher-than-anticipated loan volumes. In its 2012 report to Congress, HUD noted that information (the insurance fund valuation) in the forthcoming President’s budget for 2014 will determine the adequacy of the capital balance in the insurance fund and thus the need to draw on permanent and indefinite budget authority in the current fiscal year. The President’s budget is expected to be released in the spring of 2013. The 2012 actuarial analysis projects that the capital ratio will turn positive by 2014 and return to above the statutory 2 percent minimum in 2017. This forecast was based on assumptions—such as the level of future lending activity and house prices over multiple years—that are difficult to predict. The forecast also assumed no changes in policy or other actions by FHA that might accelerate “recovery” time. FHA plans policy changes that may accelerate increases to the ratio, including premium increases. For example, FHA announced that, effective April 1, 2013, it will increase the annual insurance premiums most new borrowers pay by between 0.05 and 0.10 percentage points. The annual premium for loans of $625,500 or more will be set at the statutory maximum of 1.5 or 1.55 percent, depending on the loan-to-value ratio.
FHA also announced that, effective June 3, 2013, it will require borrowers with new loans to continue to pay annual premiums regardless of loan value. Previously, premiums could be eliminated after loan principal amounts declined to 78 percent of their original value. Further actions could help to restore FHA’s long-term financial soundness and define its future role. For example, we previously concluded that Congress or HUD needs to determine the economic conditions the insurance fund would be expected to withstand without borrowing from Treasury (drawing on permanent and indefinite budget authority). Considering the importance of defining the economic conditions FHA should withstand, as well as continuing uncertainty over the resolution of Fannie Mae and Freddie Mac and the potential impact of their resolution on FHA, in February 2013 we included FHA in a high-risk area called “modernizing the U.S. financial regulatory system and the federal role in housing finance.” In November 2011, we reported on several weaknesses in FHA’s risk-assessment efforts. Specifically, we noted the following: FHA’s risk-assessment strategy was not integrated throughout the organization. Although a consultant’s report recommended that FHA integrate risk assessment and reporting throughout the organization, the Office of Single Family Housing’s 2009 quality control initiative (designed to strengthen internal controls and risk assessment) and the Office of Risk Management’s activities remained separate efforts. FHA officials noted that until the Office of Risk Management (which was created in 2010) set up a governance process, such integration would not be possible. FHA officials stated they were making every effort to help ensure that Office of Risk Management activities complemented program office activities. Contrary to HUD guidance, the Office of Single Family Housing had not conducted an annual, systematic review of risks to its program and administrative functions since 2009.
According to an official in this office, management intended to conduct an annual assessment but changes in senior leadership in the office and the few staff available to perform assessments (because of attrition and increased workload) hampered these efforts. The Office of Single Family Housing’s risk-assessment efforts did not include procedures for anticipating potential risks presented by changing conditions. The consultant’s report proposed a reporting process and templates for identifying emerging risks. Office of Risk Management officials told us that once they were operational, risk committees would determine the exact design and content of these reports and templates. We concluded that all these factors limited FHA’s effectiveness in identifying, planning for, and addressing risk. Based on the consultant’s findings, as well as our internal control guidance and HUD guidance, we recommended that FHA (1) integrate the internal quality control initiative of the Office of Single Family Housing into the processes of the Office of Risk Management, (2) conduct an annual risk assessment, and (3) establish ongoing mechanisms—such as using report templates from the consultant’s report—to anticipate and address risks that might be caused by changing conditions. HUD agreed with our recommendations. FHA has begun addressing recommendations made by the consultant. For instance, in June 2012 it finalized the delegations of authority needed for the Office of Risk Management and Regulatory Affairs to establish and maintain risk-management policies, activities, and controls for FHA. It also formed a Single Family Credit Risk Committee and an Operational Risk Committee. FHA also has begun addressing our November 2011 recommendations by taking the following actions: FHA has begun integrating its quality control initiatives into the processes of the Office of Risk Management. 
For example, the Office of Risk Management and Regulatory Affairs is reviewing the results of quality control activities as it prepares baseline operational risk assessments. FHA developed a plan for conducting an inaugural annual risk assessment (including preparing baseline operational risk assessments) for the Office of Single Family Housing. As previously noted, FHA has created committees to address credit and operational risks. The charters for both committees indicate that they are to discuss and address emerging risks. And, as part of the annual risk-assessment process mentioned above, FHA plans to identify emerging risks. However, some of the initiatives taken in response to our recommendations have not been completed or put fully in place. For example, FHA does not expect to complete its inaugural risk assessment until September 2013. These initiatives are critical to FHA’s efforts to assess and manage risk. Our November 2011 report also identified weaknesses in FHA’s human capital management. Specifically, we noted that leading organizations use workforce planning practices that include defining critical skills and skill gaps, but FHA’s approach did not have mechanisms for doing so or a current workforce plan. Contrary to our internal control standards and HUD guidance, FHA also did not have a current succession plan. We noted that succession planning was particularly important because, as of July 2011, almost 50 percent of Single Family Housing staff at headquarters were eligible to retire in the next 3 years. The percentage of staff eligible to retire at the homeownership centers was even higher—63 percent. Additionally, while single-family loan volume grew significantly from 2006 to 2010, staffing levels for the Office of Single Family Housing remained relatively constant.
We concluded that without a more comprehensive workforce planning process that included succession planning, FHA’s ability to systematically identify future workforce needs and plan for upcoming retirements was limited. We recommended that FHA develop workforce and succession plans for the Office of Single Family Housing. HUD agreed with our recommendations. Since our November 2011 report, FHA has developed a workforce analysis and succession plan that identifies gaps in critical competencies and additional steps that need to be taken, although the timing of many of these steps is not specified. Completing these steps remains critical to ensuring that the agency has adequate staff to effectively oversee its mortgage insurance programs. We reported in March 2012 that appropriations for the voucher program increased from $14.8 billion in 2005 to $18.4 billion in 2011 (about 24 percent). HUD disburses appropriated funds to housing agencies for program expenses such as subsidy payments to landlords and administrative costs. From 2003 through 2010, housing agencies’ expenditures increased from approximately $11.7 billion to $15.1 billion (about 29 percent). After adjusting for inflation, total expenditures grew by 8.8 percent over this period. Several factors affected voucher program costs from 2003 to 2010, including (1) increases in subsidy costs for existing vouchers, (2) subsidy costs for new vouchers, and (3) administrative fees paid to housing agencies. After adjusting for inflation, subsidy costs for existing vouchers grew 2.4 percent. Two factors generally explain this growth—increasing rents and decreasing incomes. First, rents outpaced inflation. As rents increase, HUD and housing agencies must pay larger subsidies to cover the increases, assuming no changes to household incomes. Second, tenant incomes declined. Specifically, the median annual income of voucher-assisted households fell about 3 percent (from about $11,000 to $10,700, in 2011 dollars). 
As incomes decline, assisted households pay less toward rent, requiring larger subsidies to cover the difference between rents and tenant payments. Subsidy costs for new vouchers grew 4.4 percent, accounting for half the overall constant-dollar increase in expenditures. Congress increased the size of the program by adding new vouchers for groups such as homeless veterans and nonelderly disabled households. Administrative fees paid to housing agencies grew about 2 percent, although the fees housing agencies have received over the years have been less than the amount for which they were eligible due to reductions in appropriations. Housing agencies noted that the cost of doing business increased. For example, higher gasoline prices contributed to higher inspection costs, especially for housing agencies administering vouchers over large areas. The design and goals of the voucher program also contributed to overall program costs. The voucher program has various features to give priority to the poorest households, and serving these households requires greater subsidies. For instance, housing agencies must lease 75 percent of their new vouchers to extremely low-income households. Despite increases in program costs, our work and other published studies have found that vouchers generally were more cost-effective in providing housing assistance than federal programs designed to build or rehabilitate low-income housing. Since 2003, Congress and HUD have taken some actions to limit the extent of increases, while maintaining assistance for existing program participants. For example, in 2003, Congress changed the voucher program’s funding formula to tie renewal funding for vouchers to actual costs and leasing rates, rather than the number of authorized vouchers (used or unused). Also, each year since 2004, Congress has provided administrative fees that were at least 6 percent lower than the 2003 rate. HUD has also taken steps to increase program efficiencies.
For example, according to HUD reports, steps taken by the agency have reduced improper payments (subsidy over- and underpayments) from $1.1 billion in 2000 to $440 million in 2009. These steps include providing housing agencies with fraud detection tools, such as the Enterprise Income Verification system, which makes tenant income and wage data available to housing agencies. This system was fully implemented in 2005. In 2010, HUD began studying the administrative fee structure for the voucher program to ascertain how much it costs a housing agency to run an efficient program. Because the study is ongoing, the extent to which it will identify ways to improve efficiency is not yet clear. We identified several options that, if implemented effectively, could reduce voucher program costs or allow housing agencies to assist additional households. Each option would require congressional action to implement. These options, which include rent reform and administrative consolidation, also involve difficult policy decisions that will affect some of the most vulnerable members of the population and alter long-standing program priorities and practices. Improved information on the level of subsidy reserve funding housing agencies should maintain could aid budget decisions and reduce the need for new appropriations. Housing agencies have accumulated subsidy reserves (unspent funds) that Congress could use to (1) reduce program appropriations (through a rescission and offset) and potentially meet other needs or (2) direct HUD to assist more households. Housing agencies may under-lease or receive more funding than they can spend in a year. Unless the funds are rescinded and offset, housing agencies can keep unused subsidy funding in reserve accounts and spend it (for authorized expenses) in future years. As of December 31, 2012, 2,178 housing agencies had a total of approximately $1.2 billion in subsidy reserves.
HUD has requested the authority to offset and, in some cases, redistribute “excess” reserves (those beyond what is needed to fund defined contingencies). But HUD has not developed specific or consistent criteria defining what constitutes excess reserves or how it would redistribute funding among housing agencies. For example, HUD officials told us that housing agencies should retain approximately 8.5 percent (or 1 month’s worth) of their annual funding allocations in reserves. However, in its 2010 and 2011 budget proposals, HUD defined excess reserves as those above 4 and 6 percent, respectively, of allocated amounts. In our March 2012 report, we concluded that providing Congress with better information on subsidy reserves could help ensure that disbursed funds would be used to assist households rather than remain unused. We recommended that HUD provide information to Congress on (1) the estimated amount of excess subsidy reserves and (2) criteria for how it will redistribute excess reserves among housing agencies. HUD neither agreed nor disagreed with our recommendations. However, HUD officials subsequently told us that, upon request, they provide information to HUD’s Appropriations Committee on subsidy reserve levels, including balances above certain minimum reserve levels. We will continue to monitor the agency’s progress in implementing our recommendations. As we indicated in our March 2012 report, in various budget requests for 2004 through 2012, HUD requested the authority to put in place reforms that could decrease voucher program subsidy costs, administrative costs, or both. These reforms include streamlining complex and burdensome requirements and improving the delivery and oversight of rental assistance. For example, housing agencies must re-examine household income and composition at least annually. HUD wants to extend the time between re-examinations from 1 year to 3 years and between unit inspections from 1 year to 2 years.
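The sensitivity of the "excess reserves" definition to the chosen threshold (8.5, 6, or 4 percent of the annual allocation) can be illustrated with a hypothetical agency:

```python
# Hypothetical agency: how the threshold choice changes the "excess" amount.

def excess_reserves(reserve_balance, annual_allocation, threshold):
    """Reserves above threshold * annual allocation count as excess."""
    return max(reserve_balance - threshold * annual_allocation, 0.0)

# Illustrative figures: $1.5 million in reserves against a $12 million
# annual allocation, under the three thresholds HUD has cited.
for pct in (0.085, 0.06, 0.04):
    print(pct, round(excess_reserves(1.5e6, 12e6, pct)))
# 0.085 -> 480000; 0.06 -> 780000; 0.04 -> 1020000
```

Moving the threshold from 8.5 to 4 percent more than doubles the amount labeled "excess" for this hypothetical agency, which is why consistent criteria matter for any rescission or redistribution decision.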
According to one program administrator, annual re-examinations and inspections account for more than 50 percent of administrative costs in the voucher programs the agency administers. Although some of the changes needed to simplify and streamline the voucher program would require congressional action, HUD’s forthcoming study of the program’s administrative fee structure and the experiences of housing agencies in the MTW program may provide insight into specific reforms to ease administrative burden. We recommended in our March 2012 report that HUD consider proposing to Congress options for streamlining and simplifying the administration of the voucher program and making corresponding changes to the administrative fee formula to reflect any new or revised administrative requirements. We stated that such proposals should be informed by the results of HUD’s ongoing administrative fee study and the experience of the MTW program. HUD neither agreed nor disagreed with our recommendations. As of March 2013, HUD had not made such proposals to Congress. If implemented, rent reform (that is, changes to the calculation of households’ payment toward rent) and the consolidation of voucher administration under fewer housing agencies could yield substantial cost savings, allow housing agencies to serve additional households if Congress were to reinvest annual cost savings in the voucher program, or both. Furthermore, these options are not mutually exclusive; that is, cost savings or the number of additional households served could be greater if both options were implemented. Because about 90 percent of voucher program funds are used to pay subsidies, decreasing the subsidy (or, alternatively stated, increasing the household contribution toward rent) will yield the greatest cost savings. As shown in table 1, our March 2012 report estimated the effects of several options that would change the minimum rents households must pay or the formulas for calculating what tenants pay.
For example, increasing minimum rents to $75 would yield an estimated $67 million in annual cost savings or allow housing agencies to serve an estimated 8,600 additional households. Requiring assisted households to pay 30 percent of their gross income (rather than net income) in rent would yield an estimated annual savings of $513 million or allow housing agencies to serve an estimated 76,000 additional households. While each of these options could reduce costs or create administrative efficiencies, each also involves trade-offs. Under each option, some households would have to pay more in rent than they currently pay. From 2 to 92 percent of households would experience an increase in their monthly payment: setting a minimum rent of $50 would affect the fewest households, and increasing rent to 35 percent of adjusted income would affect the most. The options also would have varying effects on different types of households (such as families with children, persons with disabilities, and the elderly). We noted disparities by geographic area (such as high-cost versus low-cost rental markets) as well. For example, setting household rental payments based on a percentage of the applicable fair market rent would place greater burdens on households in high-cost areas. We concluded in our March 2012 report that consolidating voucher program administration under fewer housing agencies could yield a more efficient oversight and administrative structure and cost savings for HUD and housing agencies. HUD spends considerable resources in overseeing the more than 2,400 housing agencies that administer the voucher program. According to a 2008 HUD study, the department dedicated from half to two-thirds of its oversight effort to 10 percent of its units (generally, housing agencies that administered 400 or fewer vouchers and about 5 percent of total program funds).
According to agency officials, consolidating voucher administration under fewer agencies would decrease HUD’s oversight responsibilities. However, current information on the magnitude of these savings was not available when we conducted our 2012 review. As we reported in April 2012, HUD has not identified the standard performance data and indicators needed to evaluate the MTW program. Housing agencies in the MTW program report annually on their activities, which include efforts to reduce administrative costs and encourage residents to work. However, the usefulness of this information is limited because, in some cases, it is not outcome-oriented. For example, for similar activities designed to promote family self-sufficiency, one MTW agency reported only the number of participants, which is generally considered an output, and another did not provide any performance information. In contrast, a third agency reported on the average income of program graduates, which we consider an outcome. To be consistent with the GPRA Modernization Act, HUD’s guidance on reporting performance information should indicate the importance of outcome-oriented information. Without more specific guidance on the reporting of performance information—for example, to report quantifiable and outcome-oriented information—HUD cannot be assured of collecting information that reflects the outcomes of individual activities. Our April 2012 report also noted that HUD has not identified the performance data that would be needed to assess the results of similar MTW activities or of the program as a whole. Researchers and others have noted the limitations of the program’s initial design in terms of evaluation; specifically, it lacks standard performance data. Obtaining performance information from demonstration programs that are intended to test whether an approach (or any of several approaches) can obtain positive results is critical.
This information helps determine whether the program has led to improvements consistent with its purposes. HUD started collecting additional data from MTW agencies (including household size, income, and educational attainment) but has not yet analyzed the data. Since 2009, HUD has required agencies to provide information on the impact of activities, including benchmarks and metrics, in their annual MTW reports. While these reports are informative, they do not lend themselves to quantitative analysis because the reporting requirements do not call for standardized data, such as the number of residents who found employment. Whether these data are sufficient to assess similar activities and the program as a whole is not clear, and HUD has not identified the data it would need for such an assessment. HUD also has not established performance indicators for the MTW program. The GPRA Modernization Act requires that federal agencies establish efficiency, output, and outcome indicators for each program activity as appropriate. Internal control standards also require the establishment of performance indicators. As we noted in 2012, specific performance indicators for the MTW program could be based on the three statutory purposes of the program. For example, agencies could report on the savings achieved (reducing costs). However, without performance indicators HUD cannot demonstrate the results of the program. The lack of analysis and performance indicators has hindered comprehensive evaluation efforts, although such evaluations are key to determining the success of any demonstration program. We recommended that HUD (1) improve its guidance to MTW agencies on providing performance information in their annual reports by requiring that such information be quantifiable and outcome-oriented, (2) develop and implement a plan for quantitatively assessing the effectiveness of similar activities and of the program, and (3) establish performance indicators for the program.
HUD generally agreed with our recommendations. Consistent with our recommendations, HUD has taken initial steps to revise performance reporting requirements for MTW agencies, but these requirements had not yet been finalized as of March 2013. Furthermore, as we indicated in our 2012 report, while HUD has identified some lessons learned on an ad hoc basis, it does not have a systematic process in place for identifying such lessons. As previously noted, obtaining impact information from demonstration programs is critical. Since 2000, HUD has identified some activities that could be replicated by other housing agencies. For example, a HUD-sponsored contractor developed five case studies to describe some of the issues involved in implementing the MTW demonstration. However, these and subsequent efforts have shortcomings. In most cases, the practices were chosen based on the opinions of HUD or contracted staff and largely involved anecdotal (or qualitative) data rather than quantitative data. Also, HUD has not established criteria, such as demonstrated performance, for identifying lessons learned or made regular efforts to review and identify lessons learned. Because HUD does not currently have a systematic process for identifying lessons learned, it is limited in its ability to promote useful practices that could be implemented more broadly. Thus, we recommended that HUD create a process to systematically identify lessons learned. In response to this recommendation, HUD stated that once its revised reporting requirements were implemented, the resulting data would inform an effort to establish lessons learned. HUD has policies and procedures in place to monitor MTW agencies but could do more to ensure that MTW agencies demonstrate compliance with statutory requirements and to identify possible risks relating to each agency’s activities. 
For example, as noted in our 2012 report, HUD has not issued guidance to participating agencies clarifying key program terms, including definitions of the purposes and statutory requirements of the MTW program. Internal control standards require the establishment of clear, consistent goals and objectives. Agencies also must link each of their activities to one of the three program purposes cited in the MTW authorizing legislation. However, HUD has not clearly defined what the language in some of these purposes means, such as “increasing housing choices for low-income families.” HUD officials told us that they plan to update their guidance to MTW agencies to more completely collect information related to the program’s statutory purposes and requirements. They acknowledged that the guidance could be strengthened. As a first step, they noted that they planned to require agencies to define “self-sufficiency” by choosing one of the definitions provided by HUD or creating their own. Without clarifying key terms and establishing a process for assessing compliance with statutory requirements, HUD lacks assurance that agencies are complying with the statute. Additionally, our 2012 report indicated that HUD only recently assessed agencies’ compliance with two self-certified requirements (to serve substantially the same total number of eligible low-income families that they would have served had funding amounts not been combined and ensure that at least 75 percent of households served are very low income). Further, HUD has not assessed compliance with the third self-certified requirement (to maintain a comparable mix of families). Internal control standards require control activities to be in place to address program risks. In addressing these risks, management should formulate an approach for assessing compliance with program requirements. 
Without a process for systematically assessing compliance with statutory requirements, HUD lacks assurance that agencies are complying with them. Furthermore, as we reported in 2012, HUD has not annually assessed program risks, despite its own requirement to do so, and has not developed risk-based monitoring procedures. HUD’s internal control standards require program offices to perform an annual risk assessment of their programs or administrative functions using a HUD risk-assessment worksheet. By not performing annual risk assessments or tailoring its monitoring efforts to reflect the perceived risk of each MTW agency, HUD lacks assurance that it has properly identified and addressed risks that may prevent agencies from addressing program purposes and meeting statutory requirements. HUD also lacks assurance that it is efficiently using its limited monitoring resources. Finally, our 2012 report indicated that HUD does not have policies or procedures in place to verify the accuracy of key information that agencies self-report, such as the number of program participants and the average income of program graduates. Internal control standards and guidance emphasize the need for federal agencies to have control activities in place to help ensure that program participants report information accurately. For example, HUD staff do not verify self-reported performance information during their reviews of annual reports or annual site visits. GAO guidance on data reliability recommends tracing a sample of data records to source documents to determine whether the data accurately and completely reflect the source documents. Because HUD does not verify the accuracy of any reported performance information, it lacks assurance that this information is accurate. To the extent that HUD relies on this information to assess program compliance with statutory purposes and requirements, its analyses are limited. 
To improve HUD’s oversight of the MTW program, we recommended in April 2012 that HUD (1) issue guidance that clarifies key program terms, such as the statutory purposes and requirements MTW agencies must meet; (2) develop and implement a systematic process for assessing compliance with statutory requirements; (3) conduct an annual risk assessment for MTW and implement risk-based monitoring policies and procedures; and (4) implement control activities to verify the accuracy of a sample of the performance information that MTW agencies self-report. HUD partially agreed with our recommendations, citing potential difficulties in verifying MTW performance data. HUD also described steps it was taking to improve its guidance to MTW agencies and implement risk-based monitoring procedures. As of March 2013, this guidance had not yet been finalized. Without more complete information on program effectiveness and compliance, it will be difficult for Congress to know whether an expanded MTW program would benefit additional agencies and the residents they serve. Mr. Chairman, Ranking Member Pastor, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions that you may have at this time. For further information about this testimony, please contact me at 202-512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Daniel Garcia-Diaz, Director; Paige Smith, Assistant Director; Steve Westley, Assistant Director; Stephen Brown; Emily Chalmers; William Chatlos; Cory Marzullo; John McGrail; Marc Molino; Lisa Moore; Daniel Newman; Lauren Nunnally; José R. Peña; Josephine Perez; Beth Reed Fritts; Barbara Roesmann; and Andrew Stavisky. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
HUD operates programs that provide mortgage insurance to homebuyers and subsidize the rents of low-income households. In recent years, HUD's FHA has experienced dramatic growth in its insurance portfolio. Expenditures for HUD's rental voucher program also have risen substantially. Through the MTW demonstration program, HUD has sought to reduce costs and achieve greater cost-effectiveness in federal expenditures for rental housing. This testimony discusses (1) the financial condition of FHA's insurance fund and FHA's risk management, (2) the costs of the voucher program and options to increase its efficiency, and (3) HUD's efforts to evaluate and monitor the MTW program. This testimony draws from GAO reports on FHA's insurance fund and oversight capacity (GAO-10-827R and GAO-12-15), HUD's voucher program (GAO-12-300), and HUD's MTW program (GAO-12-490). GAO also reviewed updated information on the insurance fund and voucher subsidy reserves as of the end of 2012. The Department of Housing and Urban Development's (HUD) Federal Housing Administration (FHA) faces financial and risk-management challenges. For the fourth straight year, capital reserves for FHA's Mutual Mortgage Insurance Fund are below the statutory minimum. Also, declining balances in the fund's capital reserve account have heightened the possibility that FHA will require additional funds to have sufficient reserves for all future insurance claims on its existing portfolio. Further actions could help to restore FHA's financial soundness. For example, GAO previously concluded that Congress or HUD needs to determine the economic conditions the fund would be expected to withstand without drawing on Department of the Treasury funding. With regard to risk management, FHA has made or is planning improvements. For example, FHA implemented an initiative in 2009 to strengthen internal controls and risk assessment for single-family housing and created a risk office in 2010. 
However, FHA has only recently begun to integrate these activities and conduct annual risk assessments in accordance with HUD guidance. Without integrated and updated risk assessments that identify emerging risks, as GAO recommended, FHA lacks assurance that it has identified all its risks. Congress and HUD have taken steps to limit cost increases in the Housing Choice Voucher (voucher) program while maintaining assistance for existing program participants. Nonetheless, between 2003 and 2010, program expenditures grew about 9 percent (after adjusting for inflation), mainly due to rising rents, declining household incomes, and decisions to expand the number of assisted households. GAO identified options that, if implemented effectively, could reduce the need for new appropriations, cut expenditures, or increase the number of households assisted. These options include (1) reducing the subsidy reserves (unspent funds) of state and local housing agencies that administer the program, (2) streamlining administrative requirements, and (3) implementing rent reforms and consolidating voucher administration. These options would also involve trade-offs, such as higher rent burdens for low-income households. Opportunities exist to improve how HUD evaluates and monitors the Moving to Work (MTW) program, which is intended to give state and local housing agencies flexibility to design and test innovative strategies for providing housing assistance. HUD's guidance does not specify that performance information collected from participating housing agencies be outcome-oriented, and HUD has not identified performance indicators for the program. In addition, HUD has not developed a systematic process for identifying lessons learned from the program, which limits HUD's ability to promote useful practices for broader implementation. 
HUD also has not taken key monitoring steps set out in internal control standards, such as issuing guidance that defines program terms or assessing compliance with all the program's statutory requirements. As a result, HUD lacks assurance that agencies are complying with statutory requirements. Also, without more complete information on program effectiveness and compliance, it will be difficult for Congress to know whether an expanded MTW program would benefit additional agencies and the residents they serve. Consistent with GAO recommendations, HUD has begun to revise guidance on MTW performance reporting. GAO has made a number of recommendations to improve FHA's risk management, and FHA has taken some actions. GAO also recommended that HUD consider proposing to Congress options for improving the efficiency of the voucher program. HUD neither agreed nor disagreed and has not yet proposed such options. In addition, GAO recommended that HUD improve MTW information and monitoring. HUD partially agreed with these recommendations and has taken initial steps to improve performance data.
Today, federal employees are issued a wide variety of ID cards that are used to access federal buildings and facilities, sometimes solely on the basis of visual inspection by security personnel. These cards generally cannot be used to control access to an agency’s computer systems. Furthermore, many can be easily forged or stolen and altered to permit access by unauthorized individuals. The ease with which traditional ID cards can be forged has contributed to increases in identity theft and related security and financial problems for both individuals and organizations. One means to address such problems is offered by the use of smart cards. Smart cards are plastic devices about the size of a credit card that contain an embedded integrated circuit chip capable of storing and processing data. The unique advantage that smart cards have over traditional cards with simpler technologies like magnetic stripes or bar codes is that they can exchange data with other systems and process information, rather than simply serving as static data repositories. By securely exchanging information, a smart card can help authenticate the identity of the individual possessing the card in a far more rigorous way than is possible with traditional ID cards. A smart card’s processing power also allows it to exchange and update many other kinds of information with a variety of external systems, which can facilitate applications such as financial transactions or other services that involve electronic record-keeping. Figure 1 shows a typical example of a smart card. Smart cards can also be used to significantly enhance the security of an organization’s computer systems by tightening controls over user access. A user wishing to log on to a computer system or network with controlled access must “prove” his or her identity to the system—a process called authentication. Many systems authenticate users merely by requiring them to enter secret passwords. 
This provides only modest security because passwords can be easily compromised. Substantially better user authentication can be achieved by supplementing passwords with smart cards. To gain access under this scenario, a user is prompted to insert a smart card into a reader attached to the computer as well as type in a password. This authentication process is significantly harder to circumvent because an intruder would not only need to guess a user’s password but also possess that same user’s smart card. Even stronger authentication can be achieved by using smart cards in conjunction with biometrics. Smart cards can be configured to store biometric information (such as fingerprints or iris scans) in an electronic record that can be retrieved and compared with an individual’s live biometric scan as a means of verifying that person’s identity in a way that is difficult to circumvent. An information system requiring users to present a smart card, enter a password, and verify a biometric scan provides what security experts call “three-factor” authentication, the three factors being “something you possess” (the smart card), “something you know” (the password), and “something you are” (the biometric). Systems employing three-factor authentication are considered to provide a relatively high level of security. The combination of smart cards and biometrics can provide equally strong authentication for controlling access to physical facilities. Smart cards can also be used in conjunction with public key infrastructure (PKI) technology to better secure electronic messages and transactions. A properly implemented and maintained PKI can offer several important security services, including assurance that (1) the parties to an electronic transaction are really who they claim to be, (2) the information has not been altered or shared with any unauthorized entity, and (3) neither party will be able to wrongfully deny taking part in the transaction. 
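The three-factor check described above (card, password, biometric) can be sketched in a few lines of code. This is a minimal illustration only: the enrollment values and helper names are hypothetical, and real PIV systems use card challenge-response protocols and biometric matching algorithms rather than direct comparisons.

```python
import hashlib

def sha256(text):
    return hashlib.sha256(text.encode()).hexdigest()

# Hypothetical enrollment record for one user (values invented for illustration).
ENROLLED = {
    "card_serial": "PIV-0001",    # something you possess (the smart card)
    "pin_hash": sha256("1234"),   # something you know (PIN, stored hashed)
    "fingerprint": "template-A",  # something you are (stored biometric template)
}

def authenticate(card_serial, pin, fingerprint_scan):
    """Grant access only if all three factors match; failing any one denies access."""
    return (card_serial == ENROLLED["card_serial"]
            and sha256(pin) == ENROLLED["pin_hash"]
            and fingerprint_scan == ENROLLED["fingerprint"])
```

The point of the conjunction is that an intruder must defeat all three factors at once, which is why three-factor systems are considered to provide a relatively high level of security.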
PKI systems are based on cryptography and require each user to have two different digital “keys”: a public and a private key. Both public and private keys may be generated on a smart card or on a user’s computer. Security experts generally agree that PKI technology is most effective when used in tandem with hardware tokens, such as smart cards. PKI systems use cryptographic techniques to generate and manage electronic “certificates” that link an individual or entity to a given public key. These digital certificates are then used to verify digital signatures and facilitate data encryption. The digital certificates are created by a trusted third party called a certification authority, which is also responsible for providing status information on whether the certificate is still valid or has been revoked or suspended. The PKI software in the user’s computer can verify that a certificate is valid by first verifying that the certificate has not expired and then by checking the online status information to ensure that it has not been revoked or suspended. In addition to enhancing security, smart cards have the flexibility to support a wide variety of uses not related to security, such as tracking itineraries for travelers, linking to immunization or other medical records, or storing cash value for electronic purchases. Currently, a typical smart card can store and process up to 32 kilobytes of data; however, newer cards have been introduced that can accommodate 64 kilobytes. The larger a card’s electronic memory, the more functions it can support. Smart cards are grouped into two major classes: “contact” cards and “contactless” cards. Contact cards have gold-plated contacts that connect directly with the read/write heads of a smart card reader when the card is inserted into the device. Contactless cards contain an embedded antenna and work when the card is waved within the magnetic field of a card reader or terminal. 
Contactless cards are better suited to environments that require quick interaction between the card and the reader, such as places with a high volume of people seeking physical access. For example, the Washington Metropolitan Area Transit Authority has deployed an automated fare collection system using contactless smart cards as a way of speeding patrons’ access to the Washington, D.C., subway system. Smart cards can be configured to include both contact and contactless capabilities, but two separate interfaces are needed because standards for the technologies are very different. Since the 1990s, the federal government has promoted the use of smart card technology as one option for improving security over buildings and computer systems. In 1996, OMB, which has statutory responsibility to develop and oversee policies, principles, standards, and guidelines—used by agencies for ensuring the security of federal information and systems—tasked GSA with taking the lead in facilitating a coordinated interagency management approach for the adoption of smart cards across government. Because the value of a smart card is greatly enhanced if it can be used with multiple systems at different agencies, GSA worked with NIST and smart card vendors to develop the Government Smart Card Interoperability Specification, which defined a uniform set of commands and responses for smart cards to use in communicating with card readers. This specification defined a software interface for smart card systems that served to bridge the significant incompatibilities among vendors’ proprietary systems. Vendors could meet the specification by writing software for their cards that translated their unique command and response formats to the government standard. NIST completed the first version of the interoperability specification in August 2000. 
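The translation approach described above, in which vendor software maps a common government command set onto each card's proprietary commands, resembles a classic adapter pattern. The sketch below illustrates the idea only; the class names and command strings are invented placeholders, not anything defined by the actual specification.

```python
class VendorACard:
    """Stand-in for a proprietary card that only understands 'A-*' commands."""
    def send(self, raw_cmd):
        responses = {"A-READ-ID": "id-123", "A-READ-CERT": "cert-bytes"}
        return responses.get(raw_cmd, "A-ERR")

class VendorAAdapter:
    """Translates the common (government-standard) command set to Vendor A's format."""
    COMMON_TO_VENDOR = {"READ_ID": "A-READ-ID", "READ_CERT": "A-READ-CERT"}

    def __init__(self, card):
        self.card = card

    def execute(self, common_cmd):
        # Map the standard command to the vendor's proprietary equivalent.
        return self.card.send(self.COMMON_TO_VENDOR[common_cmd])
```

A reader application written against the common command set can then drive any vendor's card through that vendor's adapter, which is the interoperability benefit the specification was designed to deliver.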
However, this and subsequent versions did not fully define all implementation details, and therefore the extent to which systems using the specification could interoperate was limited. In 2003, OMB created the Federal Identity Credentialing Committee to make policy recommendations and develop the Federal Identity Credentialing component of the Federal Enterprise Architecture to include processes such as identity proofing and credential management. In February 2004, the Federal Identity Credentialing Committee issued the Government Smart Card Handbook on the use of smart card–based systems in badge, identification, and credentialing systems with the objective of helping agencies plan, budget, establish, and implement identification and credentialing systems for government employees and their agents. In September 2004, we reported that nine agencies were planning or implementing agencywide smart card initiatives. Some of these initiatives included Defense’s Common Access Card (CAC), which had 3.2 million cards in use at the time of our review, and the Department of State’s Domestic Smart Card Access Control project, which had issued 25,000 cards as of September 2004. In August 2004, the President issued HSPD-12, which required the Department of Commerce to develop a new standard for secure and reliable forms of ID for federal employees and contractors by February 27, 2005. The directive defined secure and reliable ID as meeting four control objectives. Specifically, credentials must be based on sound criteria for verifying an individual employee’s identity; strongly resistant to identity fraud, tampering, counterfeiting, and terrorist exploitation; rapidly authenticated electronically; and issued only by providers whose reliability has been established by an official accreditation process. 
The directive stipulated that the standard include graduated criteria, from least secure to most secure, to ensure flexibility in selecting the appropriate level of security for each application. In addition, the directive required agencies to implement the standard for IDs issued to federal employees and contractors in order to gain physical access to controlled facilities and logical access to controlled information systems, to the maximum extent practicable, by October 27, 2005. In response to HSPD-12, NIST published FIPS 201, titled “Personal Identity Verification of Federal Employees and Contractors,” on February 25, 2005. The standard specifies the technical requirements for personal identity verification (PIV) systems to issue secure and reliable identification credentials to federal employees and contractors for gaining physical access to federal facilities and logical access to information systems and software applications. Smart cards are the primary component of the envisioned PIV system. The FIPS 201 standard is composed of two parts. The first part, PIV-I, sets standards for PIV systems in three areas: (1) identity proofing and registration, (2) card issuance and maintenance, and (3) protection of card applicants’ privacy. OMB directed agencies to implement the first two requirements by October 27, 2005, but did not require agencies to implement the privacy provisions until they start issuing FIPS 201-compliant identity cards, which is not expected until October 2006. 
To verify individuals’ identities, agencies are required to adopt an accredited identity proofing and registration process that is approved by the head of the agency and includes initiating or completing a background investigation, such as a National Agency Check with Written Inquiries (NACI), or ensuring that one is on record for all employees and contractors; conducting and adjudicating a Federal Bureau of Investigation (FBI) National Criminal History Fingerprint Check (fingerprint check) for all employees and contractors prior to credential issuance; requiring applicants to appear in person at least once before the issuance of a PIV card; requiring applicants to provide two original forms of identity source documents from an OMB-approved list of documents; and ensuring that no single individual has the capability to issue a PIV card without the cooperation of another authorized person (separation of duties principle). Agencies are further required to adopt an accredited card issuance and maintenance process that is approved by the head of the agency and includes standardized specifications for printing photographs, names, and other information on PIV cards; loading relevant electronic applications into a card’s memory; capturing and storing biometric and other data; issuing and distributing digital certificates; and managing and disseminating certificate status information. The process must satisfy the following requirements: ensure complete and successful adjudication of background investigations required for federal employment and revoke PIV cards if the results of investigations so justify; when issuing a PIV card to an employee or contractor, verify that the individual is the same as the applicant approved by the appropriate authority; and issue PIV cards only through accredited systems and providers. 
Finally, agencies are required to perform the following activities to protect the privacy of the applicants, including assigning an individual to the role of senior agency official for privacy to oversee privacy-related matters in the PIV system, conducting a comprehensive privacy impact assessment on systems containing personal information for the purpose of implementing a PIV system, providing full disclosure of the intended uses of the PIV card and related privacy implications to the applicants, utilizing security controls described in NIST guidance to accomplish privacy goals where applicable, and ensuring that implemented technologies in PIV systems do not erode privacy protections. Figure 2 illustrates PIV-I provisions for identity proofing and registration, card issuance and maintenance, and protection of applicants’ privacy. The second part of the FIPS 201 standard, PIV-II, provides technical specifications for interoperable smart card-based PIV systems. Agencies are required to begin issuing credentials that meet these provisions by October 27, 2006. 
The requirements include the following: specifications for the components of the PIV system that employees and contractors will interact with, such as PIV cards, card and biometric readers, and personal identification number (PIN) input devices; security specifications for the card issuance and management system; a suite of authentication mechanisms supported by the PIV card and requirements for a set of graduated levels of identity assurance; physical characteristics of PIV cards, including requirements for both contact and contactless interfaces and the ability to pass certain durability tests; mandatory information that is to appear on the front and back of the cards, such as a photograph, the full name, card serial number, and issuer identification; and technical specifications for electronic identity credentials (i.e., smart cards) to support a variety of authentication mechanisms, including PINs, PKI encryption keys and corresponding digital certificates, biometrics (specifically, representations of two fingerprints), and unique cardholder identifier numbers. As outlined in a NIST special publication, agencies can choose between two alternate approaches to become FIPS 201 compliant, depending on their previous experience with smart cards. The guidance sets different specifications for each approach. One approach is to adopt “transitional” card interfaces, based on the Government Smart Card Interoperability Specification (GSC-IS). Federal agencies that have already implemented smart card systems based on the GSC-IS can elect to adopt the transitional card interface specification to meet their responsibilities for compliance with part II of the standard. The other approach is to immediately adopt the “end-point” card interfaces, which are fully compliant with the FIPS 201 PIV-II card standard. All agencies without previous large-scale smart card implementations are expected to proceed with implementing PIV systems that meet the end-point interface specification. 
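The mandatory card contents listed above can be summarized in a simple data model. The field names below are illustrative shorthand only, not the data-object names or encodings that FIPS 201 actually defines.

```python
from dataclasses import dataclass, field

@dataclass
class PIVCard:
    """Hypothetical summary of the mandatory PIV card contents described above."""
    full_name: str
    photograph: bytes
    serial_number: str
    issuer_id: str
    pin: str                                          # PIN authentication mechanism
    certificates: list = field(default_factory=list)  # PKI keys and digital certificates
    fingerprints: tuple = ()                          # FIPS 201 calls for two fingerprints

    def meets_biometric_requirement(self):
        # The standard requires representations of two fingerprints.
        return len(self.fingerprints) == 2
```

A model like this makes explicit which elements every conforming card must carry, which is the interoperability point of the PIV-II specifications.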
Figure 3 shows an example of a FIPS 201 card. NIST has issued several other special publications providing supplemental guidance on various aspects of the FIPS 201 standard, including guidance on verifying that agencies or other organizations have the proper systems and administrative controls in place to issue PIV cards, and technical specifications for implementing the required encryption technology. Additional information on NIST’s special publications is provided in appendix II. In addition, NIST was responsible for developing a suite of tests to be used by approved commercial laboratories in validating whether commercial products for the smart card and the card interface are in conformance with FIPS 201. NIST developed the test suite and designated several laboratories as interim NIST PIV Program testing facilities in August 2005. The designated facilities were to use the NIST test suite to validate commercial products required by FIPS 201 so that they could be made available for agencies to acquire as part of their PIV-II implementation efforts. According to NIST, during the next year, these laboratories will be assessed for accreditation for PIV testing. Once accreditation is achieved, the “interim” designation will be dropped. OMB is responsible for ensuring that agencies comply with the standard, and in August 2005, it issued a memorandum to executive branch agencies with instructions for implementing HSPD-12 and the new standard. The memorandum specifies to whom the directive applies; to what facilities and information systems FIPS 201 applies; and, as outlined below, the schedule that agencies must adhere to when implementing the standard: October 27, 2005—for all new employees and contractors, adhere to the identity proofing, registration, card issuance, and maintenance requirements of the first part (PIV-I) of the standard. Implementation of the privacy requirements of PIV-I was deferred until agencies are ready to start issuing FIPS 201 credentials. 
October 27, 2006—start issuing cards that comply with the second part (PIV-II) of the standard. Agencies may defer implementing the biometric requirement until the NIST guidance is final. October 27, 2007—verify and/or complete background investigations for all current employees and contractors. (Investigations of individuals who have been employees for more than 15 years may be delayed past this date.) October 27, 2008—complete background investigations for all individuals who have been federal agency employees for over 15 years. OMB guidance also includes specific time frames in which NIST and GSA must provide additional guidance, such as technical references and Federal Acquisition Regulations. GSA, in collaboration with the Federal Identity Credentialing Committee, the Federal Public Key Infrastructure Policy Authority, OMB, and the Smart Card Interagency Advisory Board—which GSA established to address government smart card issues and standards—developed the Federal Identity Management Handbook. This handbook was intended to be a guide for agencies implementing HSPD-12 and FIPS 201 and includes guidance on specific courses of action, schedule requirements, acquisition planning, migration planning, lessons learned, and case studies. It is to be periodically updated; the most current draft version of the handbook was released in September 2005. In addition, on August 10, 2005, GSA issued a memorandum to agency officials that specified standardized procedures for acquiring FIPS 201-compliant commercial products that have passed NIST’s conformance tests. According to the GSA guidance, agencies are required to use these standardized acquisition procedures when implementing their FIPS 201-compliant systems. Figure 4 is a time line that illustrates when FIPS 201 and additional guidance were issued as well as the major deadlines for implementing the standard. 
The six agencies that we reviewed—Defense, Interior, DHS, HUD, Labor, and NASA—have each taken actions to begin implementing the FIPS 201 standard. Their primary focus has been on actions to address the first part of the standard, including establishing appropriate identity proofing and card issuance policies and procedures. For example, five of the six agencies had instituted policies to require that at least a successful fingerprint check be completed prior to issuing a credential; and the sixth agency, Defense, was in the process of having such a policy instituted. Regarding other requirements, efforts were still under way. For example, Defense and NASA reported that they were still making modifications to their background check policies. Four of the six agencies were still updating their policies and procedures or gaining formal agency approval for them. Labor and HUD officials had completed modifications of their policies and gained approval for their PIV-I processes. Agencies have begun to take actions to address the second part of the standard, which focuses on interoperable smart card systems. Defense and Interior, for example, have conducted assessments of technological gaps between their existing systems and the infrastructure required by FIPS 201, but they have not yet developed specific designs for card systems that meet FIPS 201 interoperability requirements. Defense has been working on implementing smart card technology since 1993, when the Deputy Secretary of Defense issued a policy directive that called for the implementation of the CAC program, a standard smart card- based identification system for all active duty military personnel, civilian employees, and eligible contractor personnel. Defense began testing the CAC in October 2000 and started to implement it departmentwide in November 2001. 
Currently, the CAC program is the largest smart card deployment within the federal government, with approximately 3.8 million cards considered active or in use as of May 2005. The CAC addresses both physical and logical access capabilities and incorporates PKI credentials. Defense officials have taken steps to implement PIV-I requirements but have not yet completed all planned actions. For example, according to agency officials, Defense implemented its first PIV-I compliant credential issuance station, accredited and trained designated individuals to issue credentials, and took steps to better secure access to Defense personnel data. However, at the time of our review, Defense was still drafting modifications to the department’s background check policy to meet PIV-I requirements, and agency officials expected to issue a revised policy by the end of December 2005. Work was also under way to modify an automated system used by contractors to apply for the CAC to comply with the PIV-I background check requirements for contractors. To address PIV-II, Defense program officials conducted an assessment to identify the technological gaps between their existing CAC infrastructure and the infrastructure required to meet PIV-II interoperability requirements. This assessment identified that of the 245 requirements specified by FIPS 201, the CAC did not support 98 of those requirements, which led to a strategy to implement each of the needed changes. Some of the changes include deploying cards that contain both contact and contactless capabilities; ensuring that information on the cards is in both visual and electronic form; and ensuring that the electronic credentials stored on PIV cards to verify a cardholder’s identity contain all required data elements, including the cardholder’s PIN, PIV authentication data (PKI encryption keys and corresponding digital certificates), and two fingerprints. 
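The mandatory credential elements described above (a PIN, PKI authentication data, and two fingerprints) can be pictured as a simple data container. The sketch below is purely illustrative: the class and field names are hypothetical, not taken from the FIPS 201 data model, and real PIV cards store these elements in dedicated on-card containers rather than a flat record.

```python
from dataclasses import dataclass

@dataclass
class PIVCredential:
    """Illustrative container for the mandatory PIV credential data;
    field names are assumptions, not the standard's own identifiers."""
    pin_hash: bytes              # the PIN itself is verified on-card, never stored in the clear
    authentication_cert: bytes   # X.509 certificate for the PIV authentication key
    fingerprint_templates: list  # FIPS 201 requires two fingerprints

    def is_complete(self) -> bool:
        # A card missing any mandatory element cannot support PIV authentication.
        return (bool(self.pin_hash)
                and bool(self.authentication_cert)
                and len(self.fingerprint_templates) == 2)
```

A card issuance workflow could use a check like `is_complete()` as a final gate before personalization, rejecting any card that lacks one of the mandatory elements.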
Additionally, program officials prepared rough cost estimates for specific elements of their planned implementation, such as cards and card readers. Program officials have also begun developing agency-specific PIV applications to be stored on the cards. However, Defense has not yet developed a specific design for a card system that meets FIPS 201 interoperability requirements. In January 2002, Interior’s Bureau of Land Management (BLM) launched a smart card pilot project to help improve security over its sites and employees. About 2,100 employees were given smart cards for personal ID and for access to sites in the pilot program. Having successfully implemented the smart card pilot at BLM, Interior began a program to implement smart cards agencywide. According to program officials, the agencywide smart card system is in compliance with the GSC-IS specification. As of October 19, 2005, the department had deployed approximately 20,000 smart cards, providing access control for approximately 25 buildings. Interior officials have taken steps to implement PIV-I requirements but have not yet had their system accredited or approved, as required by the standard. For example, Interior revised its policy on identity proofing and registration to require at least a fingerprint check be completed before issuing a credential. Regarding card issuance and maintenance processes, Interior revised its policies to include steps to ensure the completion and successful adjudication of a NACI or equivalent background investigation for all employees and contractor personnel. Additionally, Interior officials reported they had completed more than 90 percent of all required background checks for existing employees, and had signed a contract to develop a Web-enabled PIV-I identity proofing and registration process, which may eventually replace the current manual process. 
However, as of November 2005, Interior’s identity proofing, registration, issuing, and maintenance processes had not been accredited or approved by the head of the agency. Regarding privacy protection, according to the officials, they had completed two privacy impact assessments on systems containing personal information for the purposes of implementing PIV-I. To meet PIV-II requirements, Interior officials reported that they had established a pilot PKI and had also conducted a gap analysis to identify specific areas in which their existing smart card system does not meet the FIPS 201 standard. In the absence of approved FIPS 201 compliant products, they had not developed a specific design for a card system that meets FIPS 201 interoperability requirements. Prior to the issuance of FIPS 201, DHS developed a smart card-based identification and credentialing pilot project that was intended to serve as a comprehensive identification and credentialing program for the entire department when fully deployed. This effort was based on the GSC-IS specification and was intended to use PKI technology for logical access and proximity cards that are read by electronic readers to gain building access. As of November 2005, program officials indicated that they had deployed approximately 150 cards as part of this effort. However, OMB directed DHS to not issue smart cards until it had developed and implemented a system based on cards that are fully compliant with the PIV-II section of FIPS 201. DHS officials have taken steps to implement PIV-I requirements but, as of November 2005, were still making necessary modifications to their policies and procedures. For example, DHS revised its policy on identity proofing and registration to require at least a fingerprint check be completed before issuing a credential. However, other DHS actions to implement PIV-I were still under way. 
For example, according to DHS officials, they had not yet fully implemented the requirements to ensure that background checks are successfully adjudicated or to establish a credential revocation process. DHS officials further stated that they were finalizing a security announcement that would outline the PIV-I process. DHS officials had begun to take actions to meet PIV-II requirements. According to program officials, to help plan and prepare the agency for deployment, they conducted a survey of all DHS components to determine the types of information systems their various components had deployed. However, officials have planned to wait until approved FIPS 201 products and services are available before purchasing any equipment or undergoing any major deployment of a PIV-II compliant system. NASA officials indicated that they had been working to improve their identity and credentialing process since 2000. Prior to the issuance of FIPS 201, NASA officials were planning for the implementation of the One NASA Smart Card Badge project. This project was intended to be deployed agencywide and was being designed to provide GSC-IS compliant smart cards for identity, physical access, and logical access to computer systems. However, NASA officials were directed by OMB to not implement this system because it had not initiated large-scale deployment of its smart cards prior to July 2005. In the meantime, NASA has been utilizing proximity cards, which are read by electronic readers, to gain building access. NASA officials have taken steps to implement PIV-I requirements; but, as of November 2005, they were still making necessary modifications to their policies and procedures. For example, regarding identity proofing and registration, NASA officials stated that they had modified their policy to address the fingerprint check requirement. 
According to NASA officials, they have also implemented a process for gathering all required data elements from individuals, with the exception of the biometric data. In addition, NASA officials conducted an analysis of how FIPS 201 requirements impact security within NASA. Other NASA actions to implement PIV-I were still under way. For example, NASA was seeking approval of its revised policy, which requires the completion and successful adjudication of the NACI. Regarding privacy protection, NASA was updating its privacy impact assessments for relevant systems containing personal information for the purpose of implementing PIV-I. NASA has begun to take actions to implement PIV-II requirements. NASA officials said they were planning to modify their existing PKI to issue digital certificates that can be used with the PIV cards that will be issued under FIPS 201. In the absence of approved FIPS 201-compliant products, NASA has not developed specific designs for a card system that meets FIPS 201 interoperability requirements. HUD did not have an existing smart card program in place prior to HSPD-12. Like NASA, HUD controls physical access to its buildings by using proximity cards that are read by electronic readers. HUD officials reported that they have taken steps to implement PIV-I requirements. To meet identity proofing and registration practices, for example, officials modified their policies to require that at least a fingerprint check be completed before issuing a credential. Policies regarding card issuance and maintenance processes were also modified to ensure that all necessary steps were in place regarding the completion and successful adjudication of a NACI or another equivalent background investigation. Additionally, HUD issued guidelines explaining policies and procedures to ensure that the issuance of credentials complies with PIV-I. Program officials have also been analyzing the differences between their existing processes and those required by FIPS 201. 
As of January 2006, HUD's identity proofing, registration, issuing, and maintenance processes were approved by HUD's Assistant Secretary for Administration, as required by PIV-I. Finally, regarding privacy protections, officials have drafted a document describing how personal information will be collected, used, and protected throughout the lifetime of the FIPS 201 cards. Thus far, HUD's actions related to PIV-II have been limited to analyzing its needs and planning for physical security and information technology infrastructure requirements. HUD officials said they had developed rough estimates to determine how much implementing FIPS 201 would cost. In the absence of approved FIPS 201-compliant products, HUD officials have not developed a specific design for a card system that meets FIPS 201 interoperability requirements. Like HUD, Labor did not have an existing smart card program in place prior to HSPD-12. Labor currently uses a nonelectronic identity card that contains an employee's photograph and identifying information. The identity cards can only be used for physical access, which is granted by security personnel once they have observed the individual's identity card. As of November 2005, Labor officials reported that they had implemented the major requirements of PIV-I. As an example of Labor's efforts to implement identity proofing and registration requirements, the officials modified their policies to require that, at minimum, a fingerprint check be conducted and successfully adjudicated prior to issuing the credential. Regarding issuance and maintenance processes, Labor officials modified their policies to ensure all necessary steps were in place regarding the completion and successful adjudication of the NACI or another equivalent background investigation. In addition, the officials reported that they had implemented a system for tracking metrics for background investigations to ensure that they are completed and successfully adjudicated. 
Labor officials stated that they had not made substantial progress toward implementing PIV-II because they were waiting for FIPS 201-compliant products to become available before making implementation decisions. The federal government faces a number of significant challenges to implementing FIPS 201, including testing and acquiring compliant products within OMB's mandated time frames; reconciling divergent implementation specifications; assessing risks associated with implementing the recently chosen biometric standard; working with incomplete guidance regarding the applicability of FIPS 201 to facilities, people, and information systems; and planning and budgeting with uncertain knowledge and the potential for substantial cost increases. Addressing these challenges will be critical in determining whether agencies will be able to meet fast-approaching implementation deadlines and in ensuring that agencies' FIPS 201 systems are interoperable with one another. Based on OMB and GSA guidance, all commercial products, such as smart cards, card readers, and related software, are required to successfully complete interdependent tests before agencies can purchase them for use in their FIPS 201-compliant systems. These tests include (1) conformance testing developed by NIST to determine whether individual commercial products conform to FIPS 201 specifications, (2) performance and interoperability testing to be developed by GSA to ensure that compliant products can work together to meet all the performance and interoperability requirements specified by FIPS 201, and (3) agencies' testing to determine whether the products will work satisfactorily within the specific system environments at each of the agencies. 
Because it is difficult to predict how long each of these tests will take, and because they must be done in sequence, fully tested FIPS 201 compliant products may not become available for agencies to acquire in time for them to begin issuing FIPS 201 compliant ID cards by OMB’s deadline of October 27, 2006. According to NIST officials, conformance testing of individual commercial products, based on the test suite developed by NIST, was authorized to begin on November 1, 2005. The officials indicated that it would take a minimum of several weeks to test and approve a product— assuming the product turned out to be fully FIPS 201 compliant—and would more likely take significantly longer. Experience with similar NIST conformance testing regimes, such as FIPS 140-2 cryptography testing, has shown that this process can actually take several months. According to a FIPS 140-2 consulting organization, the variability in the time it takes to test products depends on (1) the complexity of the product, (2) the completeness and clarity of the vendor’s documentation, (3) how fast the vendor is able to answer questions and resolve issues raised during testing, and (4) the current backlog of work encountered in the lab. According to officials from NIST and the Smart Card Alliance, these factors are likely to keep FIPS 201 compliant products from completing conformance testing and becoming available for further testing until at least the early part of 2006. Furthermore, once commercial products pass conformance testing, they must then go through performance and interoperability testing. These tests are intended to ensure that the products meet all the performance and interoperability requirements specified by FIPS 201. According to GSA, which was developing the tests, they can only be conducted on products that have passed NIST conformance testing. 
GSA will also conduct performance and interoperability tests on other products that are required by FIPS 201, but not within the scope of NIST's conformance tests, such as smart card readers, fingerprint capturing devices, and software required to program the cards with employees' data. At the time of our review, GSA officials stated that they were developing initial plans for these tests and planned to have the tests ready by March 2006. GSA officials indicated that once they finalized the tests, they estimated that it would take approximately 2 to 3 months to test each product. Officials stated that they did not expect to have multiple products approved until May 2006, at the earliest. Vendors with approved products and services will be awarded a blanket-purchase agreement, making them available for agencies to acquire. According to GSA officials, there will be a modification to the Federal Acquisition Regulation to require that agencies purchase PIV products through this blanket-purchase agreement. Prior to purchasing commercial products, each agency will also need to conduct its own testing to determine how well the products will work in conjunction with the rest of the agency's systems. According to agency officials, this process could take from 1 to 8 months, depending on the size of the agency. For example, GSA officials estimated that a small agency could complete this testing in about 1 month. Defense officials, in contrast, estimated it would take them about 4 months to conduct testing, and Interior officials have stated that, based on their prior experience, it would take 6 months to conduct the testing. When Defense initially implemented its CAC system, it took 8 months to conduct testing. Following this series of tests, agencies must also acquire products—which could add at least an additional month to the process—and install them at agency facilities. 
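Because these steps must run in sequence, the quoted estimates can simply be summed to bound the overall schedule. The sketch below does that arithmetic; the month figures are the estimates quoted by GSA and agency officials in the text, counted from March 2006 (when GSA expected its test suite to be ready), and are assumptions rather than an official schedule.

```python
# Best- and worst-case duration (in months) for each sequential step,
# using the estimates quoted by GSA and agency officials:
steps = {
    "GSA performance/interoperability testing": (2, 3),  # per product
    "agency-specific testing": (1, 8),                   # small agency vs. large (e.g., Defense's CAC)
    "product acquisition": (1, 1),                       # "at least an additional month"
}

best_case = sum(low for low, high in steps.values())
worst_case = sum(high for low, high in steps.values())

# Counting from March 2006, when GSA expected its tests to be ready:
print(f"best case: {best_case} months after March 2006 (July 2006)")
print(f"worst case: {worst_case} months after March 2006 (March 2007)")
```

Even under these rough assumptions, only the best case lands before the October 27, 2006 deadline; the worst case overruns it by roughly five months, which is the schedule risk the report describes.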
OMB, which is tasked with ensuring compliance with the standard, has not indicated how it plans to monitor agency progress in developing systems based on FIPS 201-compliant products. For example, OMB has not stated whether it will require agencies to report on the status of their FIPS 201 implementations in advance of the October 2006 deadline. While in the best-case scenario it may be possible for some agencies to purchase compliant products and begin issuing FIPS 201-compliant cards to employees by OMB's deadline of October 27, 2006, it will likely take significantly longer for many other agencies. With conformance testing scheduled to be complete in early 2006 and at least two sets of additional testing required, each of which could potentially take many months, many agencies are likely to be at risk of not meeting the deadline to begin issuing FIPS 201-compliant credentials. Given these uncertainties, it will be important to monitor agency progress and completion of key activities to ensure that the goals of HSPD-12 are being met. Recognizing that some agencies, such as Defense, have significant investments in prior smart card technology that does not comply with the new standard, NIST, in supplemental guidance on FIPS 201, allowed such agencies to address the requirements of FIPS 201 by adopting a "transitional" smart card approach. According to the guidance, the transitional approach should be based on the existing GSC-IS specification and should be a temporary measure prior to implementing the full FIPS 201 specification, known as the "end point" specification. Agencies without existing large-scale smart card systems were to implement only systems that fully conform to the end-point specification. NIST deferred to OMB to set time frames for when agencies adopting the transitional approach would be required to reach full compliance with the end-point specification. 
However, OMB has not yet set these time frames and has given no indication of how or when it plans to address this issue. The provision for transitional FIPS 201 implementations in NIST's guidance acknowledges that agencies with fully implemented GSC-IS smart card systems may already be meeting many of the security objectives of FIPS 201 and that it may be unreasonable to require them to replace all of their cards and equipment within the short time frames established by HSPD-12. However, according to NIST officials, the transitional specification is not technically interoperable with the end-point specification. Thus, cards issued by an agency implementing a transitional system will not be able to interoperate with systems at agencies that have implemented the end-point specification until the transitional agencies themselves move to the end-point specification. Although its guidance allows for the transitional approach to FIPS 201 compliance, NIST stated that agencies should implement the end-point specification directly, wherever possible. According to NIST, agencies that adopt the transitional specification will have to do more work than if they immediately adopt the end-point specification. Specifically, major technological differences between the two interfaces will require agencies to conduct two development efforts—one to adopt the transitional specification and then another at a later date to adopt the end-point specification. Agencies with substantial smart card systems already deployed—such as Defense and Interior—have chosen the transitional option because they believe it poses fewer technical risks than the end-point specification, which is a new standard. These agencies do not plan to implement end-point systems by the October 2006 deadline for PIV-II compliance, nor have they determined when they will have end-point systems in place. 
According to OMB, these agencies will be allowed to meet OMB's October 2006 deadline by implementing the transitional specification. Defense officials stated that, based on their past experience in implementing the CAC system, they believe the transitional approach will entail fewer development problems because it involves implementing hardware and software that is similar to their current system. Further, Defense officials indicated that implementing the end-point specification would be risky. For example, Defense officials conducted a technical evaluation, which determined that the specification was incomplete. The officials stated that they would not plan to adopt the end-point specification until at least one other agency has demonstrated a successful implementation. Similarly, Interior officials said they also plan to use products based on the transitional specification until approved end-point products are readily available. While NIST and OMB guidance on FIPS 201 compliance allows agencies to meet the requirements of HSPD-12 using two divergent specifications that lead to incompatible systems, it does not specify when agencies choosing the transitional approach need to move from that approach to the end-point specification. Until OMB provides specific deadlines for when agencies must fully implement the end-point specification, governmentwide interoperability—one of the goals of FIPS 201—may not be achieved. One of the major requirements of FIPS 201 is that electronic representations of two fingerprints be stored on each PIV card. In January 2005, NIST issued initial draft guidance for storing electronic images of fingerprints on PIV cards in accordance with a preexisting standard. NIST based its draft guidance on the fact that the existing fingerprint image standard is internationally recognized and thus can facilitate interoperability among multiple vendors' products. 
When agency officials and industry experts reviewed and commented on the initial draft guidance, they were strongly opposed to the use of fingerprint images, arguing instead for a more streamlined approach that would take less electronic storage space on the cards and could be accessed more quickly. According to industry experts, because the large amount of memory required for images can only be accessed very slowly, it could take approximately 30 seconds for card readers to read fingerprint information from an electronic image stored on a card—a length of time that would likely cause unacceptable delays in admitting individuals to federal buildings and other facilities. Instead of relying on electronic images, agency officials and industry experts advocated that the biometric guidance instead be changed to require the use of “templates” extracted from fingerprint “minutiae.” A minutiae template is created by mathematically extracting the key data points related to breaks in the ridges of an individual’s fingertip. As shown in figure 6, the most basic minutiae are ridge endings (where a ridge ends) and bifurcations (where a single ridge divides into two). Using minutiae templates allows for capturing only the critical data needed to confirm a fingerprint match, and storing just those key data points rather than a full representation of an individual’s fingerprint. Thus, this technique requires much less storage space than a full electronic image of a fingerprint. An additional benefit of using minutiae templates is rapid processing capability. Because minutiae data require a much smaller amount of storage space than fingerprints in image format, the smaller data size allows for decreased transmission time of fingerprint data between the cards and the card readers—approximately 7 to 10 seconds, according to industry experts at Smart Card Alliance. Short transmission times are especially important for high traffic areas such as entrances to federal buildings. 
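The storage argument above can be made concrete with rough numbers. In the sketch below, the payload sizes and the card data rate are assumptions chosen to roughly reproduce the read times cited by industry experts (about 30 seconds for images versus 7 to 10 seconds for templates); they are not measured values from any particular card.

```python
def transfer_seconds(payload_bytes: int, rate_bytes_per_sec: int) -> float:
    """Time to move a biometric payload from card to reader at a given rate."""
    return payload_bytes / rate_bytes_per_sec

# Assumed figures: ~12 KB per compressed fingerprint image versus
# ~3 KB per minutiae template (including protocol overhead), over an
# assumed effective card throughput of 800 bytes per second.
RATE = 800
image_time = transfer_seconds(2 * 12_000, RATE)    # two full images
template_time = transfer_seconds(2 * 3_000, RATE)  # two templates

print(f"two images: ~{image_time:.0f} s; two templates: ~{template_time:.1f} s")
```

Under these assumptions the two-image read takes about 30 seconds while the two-template read takes about 7.5 seconds, illustrating why a payload roughly one quarter the size matters so much at a building entrance.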
Despite these advantages, existing minutiae template technology suffers from two significant drawbacks. One disadvantage is that vendors' techniques for converting fingerprint images to minutiae are generally proprietary and incompatible; a minutiae template that one vendor uses cannot be used by another. Another disadvantage of template technology is its questionable reliability. Different algorithms for extracting minutiae produce templates with varying reliability in producing accurate matches with the original fingerprints. To resolve these issues, NIST began systematically testing minutiae template algorithms submitted by 14 vendors to determine whether it is possible to adopt a standard minutiae template that can accurately match templates to individuals. NIST officials anticipate that testing will be completed by February 2006, when they expect to be able to determine the accuracy and level of interoperability that can be achieved for the 14 vendors being tested, using standard minutiae templates. In December 2005, NIST officials stated that they had conducted enough tests to determine that the reliability, accuracy, and interoperability of minutiae data among these 14 vendors were generally within the bounds of what was likely to be required for many applications of the technology. However, they noted that the tests showed that the products of the 14 vendors varied significantly in their reliability and accuracy—by as much as a factor of 10. NIST officials expect that once they complete testing in February 2006, they will have sufficient data to establish the reliability and accuracy of each of the 14 vendors. Although testing of minutiae template technology was still under way, the Executive Office of the President requested that NIST issue revised draft guidance that replaced the previously proposed image standard with a minutiae standard. 
While the minutiae standard resolves the problems of storage and access speed associated with the image standard, it opens new questions about how agencies should choose vendor implementations of the minutiae standard, due to their varying reliability and accuracy. Agencies will need to ensure that the vendors they select for minutiae template matching offer systems with the level of reliability and accuracy needed for their applications. Agencies will also have to determine the level of risk they are willing to accept that fingerprints may be incorrectly matched or incorrectly fail to match. According to NIST officials, agencies may find that in order to preserve interoperability across agencies' systems, they may need to allow for less reliability and accuracy in determining whether fingerprints match. This reduction in reliability and accuracy—and the associated higher security risk—could pose problems for secure facilities that require very high levels of assurance. Further, according to NIST officials, any vendors beyond the 14 currently being tested would need to undergo similar testing in order to determine their levels of reliability and accuracy. If agencies do not fully understand the implications of the variation in accuracy among the biometric vendors, the security of government facilities could be compromised and interoperability between agencies could be hindered. FIPS 201 and OMB's related guidance provide broad and general criteria regarding the facilities, people, and information systems that are subject to the provisions of FIPS 201. For instance, according to FIPS 201, compliant identification credentials must be issued to all federal employees and contractors who require physical access to federally controlled facilities— including both federally owned buildings and leased space—and logical access to federally controlled information systems. 
OMB guidance adds that agencies should make risk-based decisions on how to apply FIPS 201 requirements to individuals and information systems that do not fit clearly into the specified categories. For example, OMB guidance states that applicability of FIPS 201 for access to federal systems from a nonfederally controlled facility (such as a researcher uploading data through a secure Web site or a contractor accessing a government system from its own facility) should be based on a risk determination made by following NIST guidance on security categorizations for federal information and information systems (FIPS 199). Although this guidance provides general direction, it does not provide sufficient specificity regarding when and how to apply the standard. For example, OMB's guidance does not explain how NIST's security categories can be used to assess types of individuals accessing government systems. FIPS 199 provides guidance only on how to determine the security risk category of government information and information systems, not how such a category relates to providing access from nonfederally controlled facilities. As a result, agencies are unlikely to make consistent determinations about when and how to apply the standard. HUD is one example of an agency that has not been able to finalize how it would implement FIPS 201 with regard to allowing access to federal information systems from remote locations; according to a HUD official, the agency is considering multiple options. Further, the guidance does not address all categories of people who may need physical and logical access to federal facilities and information systems. Specifically, for individuals such as foreign nationals, volunteers, and unpaid researchers, meeting some of the FIPS 201 requirements—such as conducting a standard background investigation—may be difficult. For example, Defense and NASA employ a significant number of foreign nationals—individuals who are not U.S. 
citizens and work outside the U.S. Foreign nationals generally cannot have their identity verified through the standard NACI process. In order to conduct a NACI, an individual must have lived in the United States long enough to have a traceable history, which may not be the case for foreign nationals. According to NASA officials, approximately 85 percent of NASA's staff at its Jet Propulsion Laboratory are foreign nationals. However, OMB's guidance for such individuals states only that agencies should conduct an "equivalent investigation," without providing any specifics that would ensure the consistent treatment of such individuals. Specifically regarding foreign nationals, the Smart Card Interagency Advisory Board (IAB) and OMB have recognized that FIPS 201 may not adequately address this issue. The IAB obtained data from agencies that hire foreign nationals to more specifically identify the issues with identity proofing of foreign nationals. According to IAB representatives, these data were provided to OMB. In addition, OMB indicated that it planned to establish an interagency working group to assess whether additional guidance is necessary concerning background investigations for foreign nationals. However, no time frames have been set for issuing revised or supplemental guidance regarding foreign nationals. In addition to foreign nationals, other types of workers also have not been addressed. For example, Interior has approximately 200,000 individuals who serve as volunteers, some of whom require access to facilities and information systems. OMB's guidance provides no specifics on what criteria to use to make a risk-based decision pertaining to access to facilities and systems by volunteers. Moreover, the guidance is not clear on the extent to which FIPS 201 should be implemented at all federal facilities. 
While the standard provides for a range of identity authentication assurance levels based on the degree of confidence in the identity of cardholders, it does not provide guidance on establishing risk levels for specific facilities or on how to implement FIPS 201 based on an assessment of the risks associated with facilities. Therefore, agencies such as HUD, which has 21 field offices with five or fewer employees, and Interior, which has 2,400 field offices, many of which are also quite small, do not have the guidance necessary to make consistent decisions about how to implement FIPS 201 at each of their facilities. Depending on how risks are assessed, implementing a FIPS 201 compliant access control system at each facility could represent a significant expense, possibly including acquiring and installing card readers, network infrastructure, and biometric hardware and software. As of November 2005, OMB officials reported no specific plans to supplement or revise OMB's FIPS 201 implementation guidance to address these issues. Without more specific and complete guidance on the scope of implementing FIPS 201 with regard to individuals, facilities, and information systems, the objectives of HSPD-12 could be compromised. For instance, agencies could adopt varying and inconsistent approaches for identity proofing and issuing PIV cards to foreign nationals and volunteers needing physical and logical access to their facilities and information systems, thus undermining the objective of FIPS 201 to establish consistent processes across the government. Variations from the standard could also pose problems within each agency. Specifically, if agencies choose to make exceptions to implementing FIPS 201 requirements for specific categories of individuals, information systems, or facilities, such exceptions could undermine the security objectives of the agency's overall FIPS 201 implementation. 
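To illustrate the FIPS 199 security categorization that OMB's guidance points agencies toward, the sketch below applies the standard's "high watermark" rule: a system's overall category is the highest impact level assigned to the confidentiality, integrity, and availability objectives. This is our own illustrative Python sketch; FIPS 199 defines the concept, not any implementation, and the function and names here are assumptions.

```python
# Illustrative sketch of the FIPS 199 "high watermark" rule for
# categorizing an information system. The names and structure are
# our own; FIPS 199 defines only the concept, not an API.

IMPACT_LEVELS = {"low": 1, "moderate": 2, "high": 3}

def security_category(confidentiality, integrity, availability):
    """Return the overall FIPS 199 category of a system: the highest
    (worst-case) impact level across the three security objectives."""
    levels = (confidentiality, integrity, availability)
    for level in levels:
        if level not in IMPACT_LEVELS:
            raise ValueError(f"unknown impact level: {level}")
    return max(levels, key=lambda l: IMPACT_LEVELS[l])

# A system whose integrity loss would be serious is categorized
# "moderate" overall even if the other objectives are "low".
print(security_category("low", "moderate", "low"))  # -> moderate
```

Note that, as the report observes, FIPS 199 stops here: it yields a category for information and systems, but says nothing about how that category maps to decisions such as remote access from nonfederally controlled facilities.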
Conversely, due to the lack of clarity in FIPS 201 guidance, some agencies could expend resources implementing FIPS 201 infrastructure at locations where it is not needed or could impose unnecessary constraints on access. Agencies have been faced with having to potentially make substantial new investments in smart card technology systems with little time to adequately plan and budget for such investments and little cost information about the products they will need to acquire. To comply with budget submission deadlines, agencies would have had to submit budget requests for new systems to meet the October 2006 PIV-II deadline in the fall of 2004, several months prior to the issuance of FIPS 201. If a major information technology (IT) investment were expected, agencies also would have had to submit business cases at the same time. Agencies were not in a position to prepare such documentation in the fall of 2004, nor were they able to determine whether a major new investment would be required. As part of the annual federal budget formulation process, agencies are required to submit their budget requests 1 year in advance of the time they expect to spend the funds. In addition, in the case of major IT investments, which could include new smart card-based credentialing systems, OMB requires agencies to prepare and submit formal business cases, which are used to demonstrate that agencies have adequately defined the cost, schedule, and performance goals for the proposed investments. In order for agencies to prepare business cases for future funding requests, they need to conduct detailed analyses such as a cost-benefit analysis, a risk analysis, and an assessment of the security and privacy implications of the investment. However, agencies have lacked the information necessary to conduct such reviews. For example, agencies have not had reliable information about product costs and cost elements, which are necessary for cost-benefit analyses. 
In addition, without FIPS 201 compliant products available for review, agencies have been unable to adequately conduct risk analyses of the technology. Most importantly, the lack of FIPS 201 compliant products has inhibited planning for addressing the investment's security and privacy issues. Several officials from the agencies we reviewed reported that they based their cost estimates on experience with existing smart card systems because they could not predict the costs of FIPS 201 compliant products. For example, HUD officials reported that in order to formulate their preliminary budget, they developed implementation estimates based on discussions with various vendors about similar technology as well as discussions with other agencies regarding their past experiences with smart card implementation. Furthermore, Defense and Labor officials reported that the only information they had on which to base costs was Defense's CAC—a smart card system that has significant differences from FIPS 201. While it is not known how much FIPS 201 compliant systems will cost, OMB maintains that agencies should be able to fund their new FIPS 201 compliant systems with funds they are spending on their existing ID and credentialing systems. However, officials from agencies such as HUD, which estimates that implementing a FIPS 201 system will cost approximately 400 percent more than its existing identification system, have indicated that existing funds will be insufficient to finance implementation of the FIPS 201 system. As of November 2005, OMB officials did not report any specific plans to monitor agencies' funding of FIPS 201 compliant card systems to ensure that the systems can be implemented in a timely fashion. As a result of the lack of cost and product information necessary for the development of accurate budget estimates, agency officials believe they may not have sufficient funds to implement FIPS 201 within the time frames specified by OMB. 
Further, the overall implementation schedules and planned performance of FIPS 201 investments across the government could be affected. Agencies have been focusing their efforts on a range of actions to establish appropriate identity proofing and card issuance policies and procedures to meet the first part of the FIPS 201 standard. They have also begun to take actions to implement new smart card-based ID systems that will be compliant with the second part of the standard. With the deadline for implementing the second part of the standard approaching in October 2006, the government faces significant challenges in implementing the requirements of the standard. Several of these challenges do not have easy solutions: testing and acquiring compliant smart cards, card readers, and other related commercial products within OMB-mandated deadlines; implementing fully functional systems; and planning and budgeting for FIPS 201 compliance with uncertain knowledge. OMB officials have not indicated any plans to monitor the impact on agencies of the constrained testing time frames and funding uncertainties, which could put agencies at risk of not meeting the compliance goals of HSPD-12 and FIPS 201. Without close monitoring of agency implementation progress through, for example, establishing an agency reporting process, it could be difficult for OMB to fulfill its role of ensuring that agencies are in compliance with the goals of HSPD-12. Other challenges have arisen because guidance to agencies has been incomplete. For example, time frames have not been set for agencies implementing transitional smart card systems to migrate to the fully compliant end-point specification. Additionally, existing guidance related to the draft biometric standard does not offer the information necessary to help agencies understand the implications of variation in the reliability and accuracy of fingerprint matching among the biometric systems being offered by vendors. 
Further, complete guidance for implementing FIPS 201 with regard to specific types of individuals, facilities, and information systems has not been established. Without more complete time frames and guidance, agencies may not be able to meet implementation deadlines; and more importantly, true interoperability among federal government agencies' smart card programs—one of the major goals of FIPS 201—could be jeopardized. We recommend that the Director, OMB, take steps to closely monitor agency implementation progress and completion of key activities by, for example, establishing an agency reporting process, to fulfill OMB's role of ensuring that agencies are in compliance with the goals of HSPD-12. We also recommend that the Director, OMB, amend or supplement governmentwide policy guidance regarding compliance with the FIPS 201 standard to take the following three actions: provide specific deadlines by which agencies implementing transitional smart card systems are to meet the "end-point" specification, thus allowing for interoperability of smart card systems across the federal government; provide guidance to agencies on assessing risks associated with the variation in reliability and accuracy among biometric products, so that they can select vendors that best meet the needs of their agencies while maintaining interoperability with other agencies; and clarify the extent to which agencies should make risk-based assessments regarding the applicability of FIPS 201 to specific types of facilities, individuals, and information systems, such as small offices, foreign nationals, and volunteers. The updated guidance should (1) include criteria that agencies can use to determine precisely what circumstances call for risk-based assessments and (2) specify how agencies are to carry out such risk assessments. 
We received written comments on a draft of this report from the Administrator of E-Government and Information Technology of OMB, the Acting Associate Administrator of GSA, and the Deputy Secretary of Commerce. Letters from these agencies are reprinted in appendixes III through V. We received technical comments via e-mail from the Director of the Card Access Office for Defense and a Special Agent at OPM, which we incorporated as appropriate. We also received written technical comments from the Assistant Secretary for Administration for HUD and the Assistant Secretary of Policy, Management, and Budget for Interior. Additionally, representatives from NASA and Labor indicated via e-mail that they reviewed the draft report and did not have any comments. Officials from DHS did not respond to our request for comments. Officials from GSA, Commerce, HUD, Defense, Interior, and OPM generally agreed with the content of our draft report and our recommendations and provided updated information and technical comments, which have been incorporated where appropriate. In response to our recommendation that OMB monitor agency implementation progress and completion of key activities, OMB stated that it would continue to oversee agency implementation using its existing management and budget tools to ensure compliance. However, as agencies continue to move forward with implementing FIPS 201, we believe that in order for OMB to successfully monitor agencies' progress, it will be essential for OMB to develop a process specifically for agencies to report on their progress toward implementing the standard. Regarding our recommendation that OMB amend or supplement governmentwide policy guidance regarding compliance with the HSPD-12 standard, OMB stated that it did not think that its guidance was incomplete. 
Officials stated that their guidance provides the appropriate balance between the need to aggressively implement the President's deadlines and the need to ensure that agencies have the flexibility to implement HSPD-12 based on the level of risk their facilities and information systems present. While we agree that it is important for agencies to have flexibility in implementing the standard based on their specific circumstances, we believe that OMB has not provided agencies with adequate guidance to make well-informed, risk-based decisions about when and how to apply the standard for important categories of individuals and facilities that affect multiple agencies. For example, while multiple agencies employ foreign nationals to work at their facilities, OMB does not provide guidance on how agencies should investigate these foreign nationals prior to allowing them to access U.S. government facilities and information systems. Similarly, several agencies maintain very small facilities, yet OMB does not provide guidance on the extent to which FIPS 201 should be applied at these facilities. In addition, guidance has not been provided on assessing risks associated with the variation in reliability and accuracy among biometric products, so that agencies can select vendors that best meet their needs while maintaining interoperability across the government. Additionally, OMB indicated that it does not yet have a full understanding of whether interoperability between the transitional and end-point specifications is a concern and stated that it cannot comment on our recommendation to specify the time frame by which agencies implementing transitional smart card systems are to implement the end-point specification. However, our review showed that these two specifications are not interoperable, and until all agencies implement the end-point specification, the interoperability objective of HSPD-12 may not be achieved. 
In commenting on our report, GSA stated that it agreed with our findings, conclusions, and recommendations. In addition, it provided us with technical comments that we incorporated as appropriate. It also suggested that, in order to fully demonstrate the scope and scale of implementing HSPD-12 and FIPS 201, we provide, as background, the current state of identity management systems across the government and industry and the impact of compliance with HSPD-12. We believe that we have adequately explained the benefits of using smart card-based ID systems and have outlined several of the significant requirements that agencies must implement as part of their new PIV systems. In its written comments, Commerce stated that our report was fair and balanced. It also provided technical comments that we incorporated, where appropriate. Additionally, OMB and Commerce noted that NIST's biometric specification had recently been revised. We have made changes to our report to reflect the revised specification. Unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Homeland Security, Labor, Interior, Defense, and HUD; the Directors of OMB, OPM, and NIST; the Administrators of NASA and GSA; and interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-6240 or by e-mail at koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and key contributors to this report are listed in appendix VI. 
Our objectives were to determine (1) actions that selected federal agencies have taken to implement the new standard and (2) challenges that federal agencies are facing in implementing the standard. We reviewed Homeland Security Presidential Directive 12 (HSPD-12), Federal Information Processing Standards 201 (FIPS 201), related National Institute of Standards and Technology (NIST) special publications, Office of Management and Budget (OMB) guidance, and General Services Administration (GSA) guidance. On a nonprobability basis and using the results of the 2005 Federal Computer Security Report Card—which includes an assessment of agencies’ physical security—and the results of our previous reports on federal agencies’ progress in adopting smart card technology, we selected six agencies that represented a range of experience in implementing smart card-based identification systems. For example, we included agencies with no prior experience implementing smart card systems as well as agencies with years of experience in implementing smart card systems. The agencies we selected were the Departments of Defense, Interior, Homeland Security (DHS), Housing and Urban Development (HUD), Labor, and the National Aeronautics and Space Administration (NASA). To obtain information on the actions these agencies have taken and plan to take to implement the standard, we analyzed documentation such as agencies’ implementation plans. We also interviewed officials from selected agencies to obtain additional information on the actions their agencies took. We reviewed the completeness and appropriateness of actions reported to us. However, we did not determine whether agencies were fully compliant with HSPD-12 and FIPS 201. To identify challenges and barriers associated with implementing the new federal identification (ID) standard, we analyzed documentation and interviewed program officials as well as officials from GSA, NIST, the Office of Personnel Management (OPM), and OMB. 
In addition, we presented the preliminary challenges that we identified to agency officials to obtain their feedback and concurrence on the challenges. We performed our work at the offices of Defense, Interior, DHS, HUD, Labor, NASA, NIST, OMB, OPM, and GSA in the Washington, D.C., metropolitan area from April 2005 to December 2005, in accordance with generally accepted government auditing standards. NIST has issued several special publications providing supplemental guidance on various aspects of the FIPS 201 standard. These special publications are summarized below. SP 800-73 is a companion document to FIPS 201 that specifies the technical aspects of retrieving and using the identity credentials stored in a personal identity verification (PIV) card’s memory. This special publication aims to promote interoperability among PIV systems across the federal government by specifying detailed requirements intended to constrain vendors’ interpretation of FIPS 201. SP 800-73 also outlines two distinct approaches that agencies might take to become FIPS 201 compliant and specifies a set of requirements for each: one set for “transitional” card interfaces that are based on the Government Smart Card Interoperability Specification (GSC-IS), Version 2.1 and another set for “end-point” card interfaces that are more fully compliant with the FIPS 201 PIV-II card specification. Federal agencies that have implemented smart card systems based on the GSC-IS can elect to adopt the transitional specification as an intermediate step before moving to the end-point specification. However, agencies with no existing implementation are required to implement PIV systems that meet the end-point specification. SP 800-73 includes requirements for both the transitional and end-point specifications and is divided into the following three parts: Part 1 specifies the requirements for a PIV data model that is designed to support dual interface (contact and contactless) cards. 
The mandatory data elements outlined in the data model are common to both the transitional and end-point interfaces and include strategic guidance for agencies that are planning to take the path of moving from the transitional interfaces to the end-point interfaces. Part 2 describes the transitional interface specifications and is for use by agencies with existing GSC-IS based smart card systems. Part 3 specifies the requirements for the end-point PIV card and associated software applications. SP 800-79 is a companion document to FIPS 201 that describes the attributes that a PIV card issuer—an organization that issues PIV cards that comply with FIPS 201—should exhibit in order to be accredited. Agency officials need complete, accurate, and trustworthy information about their PIV credential issuers to make decisions about whether to authorize their operation. Agencies can use the guidelines in this document to certify and accredit the reliability of such organizations. There are four phases (initiation, certification, accreditation, and monitoring) in the certification and accreditation processes that cover a PIV credential issuer’s ability to carry out its primary responsibilities in identity proofing and registration, PIV card creation and issuance, and PIV card life-cycle management. 
By following the guidelines, federal agencies should be able to accomplish the following: Satisfy the HSPD-12 requirement that all identity cards be issued by PIV credential issuers whose reliability has been established by an official accreditation process; Ensure that a PIV credential provider (1) understands the requirements in FIPS 201, (2) is reliable in providing the required services, and (3) provides credible evidence that its processes were implemented as designed and has adequately documented those processes in its operations plan; Ensure more consistent, comparable, and repeatable assessments of the required attributes of PIV credential issuers; Ensure more complete, reliable, and trusted identification of federal employees and contractors in controlling access to federal facilities and information systems; and Make informed decisions in the accreditation process in a timely manner and by using available resources in an efficient manner. FIPS 201 specifies mechanisms for implementing cryptographic techniques to authenticate cardholders, secure the information stored on a PIV card, and secure the supporting infrastructure. SP 800-78 contains the technical specifications needed to implement the encryption technology specified in the standard, including cryptographic requirements for PIV keys (e.g., algorithm and key size) and information stored on the PIV card (i.e., requiring the use of digital signatures to protect the integrity and authenticity of information stored on the card). In addition, this document specifies acceptable algorithms and key sizes for digital signatures on PIV status information (i.e., digital signatures on the certificate revocation lists or online certificate status protocol status response messages) and card management keys, which are used to secure information stored in the PIV card. For additional information on public key infrastructure technology, see our 2001 report. 
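To make the integrity-protection idea in SP 800-78 concrete, the sketch below "signs" a block of card data and verifies it before use. It is a toy illustration only: Python's standard library lacks the asymmetric (public key) signatures FIPS 201 actually requires, so a keyed hash (HMAC) stands in for a digital signature, and the key, algorithm choice, and data layout are arbitrary assumptions rather than anything from the PIV specification.

```python
import hashlib
import hmac

# Toy stand-in for the digital signatures SP 800-78 specifies:
# a keyed hash (HMAC-SHA256) over the data stored on a card.
# Real PIV cards use asymmetric signatures, not HMACs.

SIGNING_KEY = b"demo-key-not-a-real-piv-key"  # assumption for the sketch

def sign_card_data(data: bytes) -> bytes:
    """Produce an integrity tag over card data."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).digest()

def verify_card_data(data: bytes, tag: bytes) -> bool:
    """Check that card data has not been altered since signing."""
    expected = hmac.new(SIGNING_KEY, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

record = b"cardholder=J.Doe;agency=XYZ;expires=2008-10"  # hypothetical data
tag = sign_card_data(record)
assert verify_card_data(record, tag)            # unaltered data verifies
assert not verify_card_data(record + b"!", tag) # tampered data fails
```

The design point this illustrates is the one SP 800-78 addresses: a relying system should refuse to trust card contents whose integrity tag does not verify, which is why the standard constrains the algorithms and key sizes used to produce such tags.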
SP 800-85 outlines a suite of tests to validate a software developer's PIV middleware and card applications to determine whether they conform to the requirements specified in SP 800-73. This special publication also includes detailed test assertions that provide the procedures to guide the tester in executing and managing the tests. This document is intended to allow (1) software developers to develop PIV middleware and card applications that can be tested against the interface requirements specified in SP 800-73; (2) software developers to develop tests that they can perform internally for their PIV middleware and card applications during the development phase; and (3) certified and accredited test laboratories to develop tests that include the test suites specified in this document and that can be used to test the PIV middleware and card applications for conformance to SP 800-73. SP 800-87 outlines the organizational codes necessary to establish the unique cardholder identifier numbers. In addition to the person named above, Devin Cassidy, Derrick Dicoi, Neil Doherty, Sandra Kerr, Steven Law, Shannin O'Neill, and Amos Tevelow made key contributions to this report. The interface between the application software and the application platform (i.e., operating system), across which all services are provided. The process of confirming an asserted identity with a specified or understood level of confidence. The granting of appropriate access privileges to authenticated users. Measures of an individual's unique physical characteristics or the unique ways that an individual performs an activity. Physical biometrics include fingerprints, hand geometry, facial patterns, and iris and retinal scans. Behavioral biometrics include voice patterns, written signatures, and keyboard typing techniques. A digital record of an individual's biometric features. 
Typically, a “livescan” of an individual’s biometric attributes is translated through a specific algorithm into a digital record that can be stored in a database or on an integrated circuit chip. The set of command and response messages that allow card readers to communicate effectively with the chips embedded on smart cards. A digital representation of information that (1) identifies the authority issuing the certificate; (2) names or identifies the person, process, or equipment using the certificate; (3) contains the user’s public key; (4) identifies the certificate’s operational period; and (5) is digitally signed by the certificate authority issuing it. A certificate is the means by which a user is linked—”bound”—to a public key. The assurance that information is not disclosed to unauthorized entities or computer processes. A smart card that can exchange information with a card reader without coming in physical contact with the reader. Contactless smart cards use 13.56 megahertz radio frequency transmissions to exchange information with card readers. An object such as a smart card that identifies an individual as an official representative of a government agency. The result of a transformation of a message by means of a cryptographic system using digital keys such that a relying party can determine (1) whether the transformation was created using the private key that corresponds to the public key in the signer's digital certificate and (2) whether the message has been altered since the transformation was made. Digital signatures may also be attached to other electronic information and programs so that the integrity of the information and programs may be verified at a later time. The electronic equivalent of a traditional paper-based credential—a document that vouches for an individual’s identity. The process of determining to what identity a particular individual corresponds. 
The set of physical and behavioral characteristics by which an individual is uniquely recognizable. The process of providing sufficient information, such as identity history, credentials, and documents, to facilitate the establishment of an identity. The ability of two or more systems or components to exchange information and to use the information that has been exchanged. Software that allows applications running on separate computer systems to communicate and exchange data. Key data points—especially ridge bifurcations and end lines—within an individual's fingerprint that can be extracted and used to match against the same individual's live fingerprint. A communications protocol that is used to determine whether a public key certificate is still valid or has been revoked or suspended. A smart card that contains stored identity credentials—such as a photograph, digital certificate and cryptographic keys, or digitized fingerprint representations—that is issued to an individual so that the claimed identity of the cardholder can be verified against the stored credentials by another person or through an automated process. An accredited and certified organization that procures FIPS 201 compliant blank smart cards, initializes them with appropriate software and data elements for the requested identity verification and access control application, personalizes the cards with the identity credentials of the authorized cardholders, and delivers the personalized cards to the authorized cardholders along with appropriate instructions for protection and use. An entity that authenticates the identity of an individual applying for a PIV card by checking the applicant's identity source documents through an identity proofing process and ensures that a proper background check was completed before the credential and the PIV card are issued to the individual. The ability of an individual to control when and on what terms his or her personal information is collected, used, or disclosed. 
A system of hardware, software, policies, and people that, when fully and properly implemented, can provide a suite of information security assurances—including confidentiality, data integrity, authentication, and nonrepudiation—that are important in protecting sensitive communications and transactions. The expectation of loss expressed as the probability that a particular threat will exploit a particular vulnerability with a particular harmful result. A tamper-resistant security device—about the size of a credit card—that relies on an integrated circuit chip for information storage and processing. A statement on a given topic published by organizations such as NIST, the Institute of Electrical and Electronics Engineers, the International Organization for Standardization, and others, specifying characteristics—usually measurable—that must be satisfied in order to comply with the standard.
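The minutiae-matching concept defined in the glossary above can be illustrated with a deliberately simplified sketch: treat each minutia as an (x, y) point and score a live scan against a stored template by counting template points that have a live point within a small distance tolerance. Real fingerprint matchers also use ridge angles, rotation and translation alignment, and far more robust scoring; every name, coordinate, and threshold here is an assumption for illustration only.

```python
# Deliberately simplified minutiae matching: count stored template
# points that have a live-scan point within a small distance
# tolerance. Real fingerprint matchers are far more sophisticated.

from math import hypot

def match_score(template, live, tolerance=2.0):
    """Fraction of template minutiae matched by some live minutia."""
    if not template:
        return 0.0
    matched = 0
    for tx, ty in template:
        if any(hypot(tx - lx, ty - ly) <= tolerance for lx, ly in live):
            matched += 1
    return matched / len(template)

stored = [(10, 12), (40, 7), (22, 30), (5, 44)]   # enrolled template
scan   = [(11, 12), (39, 8), (60, 60), (5, 45)]   # live scan; 3 of 4 close

print(match_score(stored, scan))  # -> 0.75
```

This also hints at why the report flags variation in reliability and accuracy among biometric products: each vendor chooses its own tolerance and scoring method, so the same pair of prints can match under one product's threshold and fail under another's.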
Many forms of identification (ID) that federal employees and contractors use to access government-controlled buildings and information systems can be easily forged, stolen, or altered to allow unauthorized access. In an effort to increase the quality and security of federal ID and credentialing practices, the President directed the establishment of a governmentwide standard--Federal Information Processing Standard (FIPS) 201--for secure and reliable forms of ID based on "smart cards" that use integrated circuit chips to store and process data with a variety of external systems across government. GAO was asked to determine (1) actions that selected federal agencies have taken to implement the new standard and (2) challenges that federal agencies are facing in implementing the standard. The six agencies we reviewed--Defense, Interior, Homeland Security, Housing and Urban Development (HUD), Labor, and the National Aeronautics and Space Administration (NASA)--had each taken actions to begin implementing the FIPS 201 standard. Their primary focus has been on actions to address the first part of the standard, which calls for establishing appropriate identity proofing and card issuance policies and procedures and which the Office of Management and Budget (OMB) required agencies to implement by October 27, 2005. Agencies had completed a variety of actions, such as instituting policies to require that at least a successful fingerprint check be completed prior to issuing a credential. Regarding other requirements, however, efforts were still under way. For example, Defense and NASA reported that they were still modifying their background check policies. Based on OMB guidance, agencies have until October 27, 2006, to implement the second part of the standard, which requires them to implement interoperable smart-card based ID systems. Agencies have begun to take actions to address this part of the standard. 
For example, Defense and Interior conducted assessments of technological gaps between their existing systems and the infrastructure required by FIPS 201 but had not yet developed specific designs for card systems that meet FIPS 201 interoperability requirements. The federal government faces significant challenges in implementing FIPS 201, including (1) testing and acquiring compliant commercial products--such as smart cards and card readers--within required time frames; (2) reconciling divergent implementation specifications; (3) assessing the risks associated with specific vendor implementations of the recently chosen biometric standard; (4) incomplete guidance regarding the applicability of FIPS 201 to facilities, people, and information systems; and (5) planning and budgeting with uncertain knowledge and the potential for substantial cost increases. Until these implementation challenges are addressed, the benefits of FIPS 201 may not be fully realized. Specifically, agencies may not be able to meet implementation deadlines established by OMB, and more importantly, true interoperability among federal government agencies' smart card programs--one of the major goals of FIPS 201--may not be achieved.
Overall, our High-Risk Series has served to identify and help resolve serious government weaknesses in areas that involve substantial resources and provide critical services to the public. Since we began reporting on high-risk areas, the government has taken high-risk problems seriously and has made long-needed progress toward correcting them. With that in mind, we designated the federal oversight of food safety as a high-risk area to raise the priority and visibility of the need to transform the federal government’s oversight system. Since 1990, GAO has reported on government operations that we identified as high risk and has periodically reported on the status of progress to address high-risk areas and updated our high-risk list. Historically, high-risk areas have been so designated because of traditional vulnerabilities related to their greater susceptibility to fraud, waste, abuse, and mismanagement. As our high-risk program has evolved, we have increasingly used the high-risk designation to draw attention to areas needing broad-based transformations to achieve greater economy, efficiency, effectiveness, accountability, and sustainability of selected key government programs and operations. In determining whether a government program or operation is high risk, we consider whether it has national significance or a management function that is key to performance and accountability. Further, we consider qualitative factors, such as whether the risk involves public health or safety, service delivery, national security, national defense, economic growth, or privacy or citizens’ rights; or could result in significantly impaired service, program failure, injury or loss of life, or significantly reduced economy, efficiency, or effectiveness. Clearly, these factors weighed heavily into our deliberations to place the federal oversight of food safety on our high-risk list. 
We remove a high-risk designation when legislative and agency actions, including those in response to our recommendations, result in significant and sustainable progress toward resolving a high-risk problem. Key determinants include a demonstrated strong commitment to and top leadership support for addressing problems, the capacity to do so, a corrective action plan, and demonstrated progress in implementing corrective measures. The sustained attention and commitment by Congress and agencies to resolve serious, long-standing high-risk problems have paid off; because of sufficient progress, we were able to remove the high-risk designation from 18 areas—more than half of our original list. As we have with areas previously removed from the high-risk list, we will continue to monitor these programs, as appropriate, to ensure that the improvements we have noted are sustained. For areas that remain on our high-risk list for 2007, there has been important—but varying levels of—progress. Top administration officials have expressed their commitment to ensuring that high-risk areas receive adequate attention and oversight. The Office of Management and Budget (OMB) has led an initiative to prompt agencies to develop detailed action plans for each area on our high-risk list. These plans are intended to identify specific goals and milestones that address and reduce the risks we identified within each high-risk area. Further, OMB has encouraged agencies to consult with us regarding the problems our past work has identified and the many recommendations for corrective actions we have made. While progress on developing and implementing plans has been mixed, concerted efforts by agencies and ongoing attention by OMB are critical. In addition to the programs that remain on the list, we recently designated three new areas as high risk, including the need to transform federal oversight of food safety. 
For these recently added areas, along with those remaining on the list, we expect that continued perseverance will ultimately yield significant benefits. To begin to address the weaknesses in federal oversight of food safety, executive agencies can start by implementing our recommendations intended to address the problems we previously identified. Further, continued congressional oversight, including today’s hearing, and additional legislative action will also be key to achieving progress, particularly in addressing challenges in the broad-based transformation needed to promote the safety and integrity of the nation’s food supply. For several years, we have reported on issues that suggest that food safety could be designated as a high-risk area because of the need to transform the federal oversight framework to reduce risks to public health as well as the economy. Specifically, the patchwork nature of the federal food oversight system calls into question whether the government can plan more strategically to inspect food production processes, identify and react more quickly to outbreaks of contaminated food, and focus on promoting the safety and the integrity of the nation’s food supply. This challenge is even more urgent since the terrorist attacks of September 11, 2001, heightened awareness of agriculture’s vulnerabilities to terrorism, such as the deliberate contamination of food or the introduction of disease to livestock, poultry, and crops. An accidental or deliberate contamination of food or the introduction of disease to livestock, poultry, and crops could undermine consumer confidence in the government’s ability to ensure the safety of the U.S. food supply and have severe economic consequences. Agriculture, as the largest industry and employer in the United States, generates more than $1 trillion in economic activity annually, or about 13 percent of the gross domestic product. The value of U.S. agricultural exports exceeded $68 billion in fiscal year 2006. 
An introduction of a highly infectious foreign animal disease, such as avian influenza or foot-and-mouth disease, would cause severe economic disruption, including substantial losses from halted exports. Similarly, food contamination, such as the recent E. coli outbreaks, can harm local economies. For example, industry representatives estimate losses from the recent California spinach E. coli outbreak to range from $37 million to $74 million. While 15 agencies collectively administer at least 30 laws related to food safety, the two primary agencies are the U.S. Department of Agriculture (USDA), which is responsible for the safety of meat, poultry, and processed egg products, and the Food and Drug Administration (FDA), which is responsible for virtually all other foods. Among other agencies with responsibilities related to food safety, the National Marine Fisheries Service (NMFS) in the Department of Commerce conducts voluntary, fee-for-service inspections of seafood safety and quality; the Environmental Protection Agency (EPA) regulates the use of pesticides and maximum allowable residue levels on food commodities and animal feed; and the Department of Homeland Security (DHS) is responsible for coordinating agencies’ food security activities. The food safety system is further complicated by the subtle differences in food products that dictate which agency regulates a product as well as the frequency with which inspections occur. For example, how a packaged ham and cheese sandwich is regulated depends on how the sandwich is presented. USDA inspects manufacturers of packaged open-face meat or poultry sandwiches (e.g., those with one slice of bread), but FDA inspects manufacturers of packaged closed-face meat or poultry sandwiches (e.g., those with two slices of bread). 
Although there are no differences in the risks posed by these products, USDA inspects wholesale manufacturers of open-face sandwiches sold in interstate commerce daily, while FDA inspects manufacturers of closed-face sandwiches an average of once every 5 years. This federal regulatory system for food safety, like many other federal programs and policies, evolved piecemeal, typically in response to particular health threats or economic crises. During the past 30 years, we have detailed problems with the current fragmented federal food safety system and reported that the system has caused inconsistent oversight, ineffective coordination, and inefficient use of resources. Our most recent work demonstrates that these challenges persist. Specifically: Existing statutes give agencies different regulatory and enforcement authorities. For example, food products under FDA’s jurisdiction may be marketed without the agency’s prior approval. On the other hand, food products under USDA’s jurisdiction must generally be inspected and approved as meeting federal standards before being sold to the public. Under current law, thousands of USDA inspectors maintain continuous inspection at slaughter facilities and examine all slaughtered meat and poultry carcasses. They also visit each processing facility at least once during each operating day. For foods under FDA’s jurisdiction, however, federal law does not mandate the frequency of inspections. Federal agencies are spending resources on overlapping food safety activities. USDA and FDA both inspect shipments of imported food at 18 U.S. ports of entry. However, these two agencies do not share inspection resources at these ports. For example, USDA officials told us that all USDA-import inspectors are assigned to, and located at, USDA-approved import inspection facilities and some of these facilities handle and store FDA-regulated products. USDA has no jurisdiction over these FDA-regulated products. 
Although USDA maintains a daily presence at these facilities, the FDA-regulated products may remain at the facilities for some time awaiting FDA inspection. In fiscal year 2003, USDA spent almost $16 million on imported food inspections, and FDA spent more than $115 million. Food recalls are voluntary, and federal agencies responsible for food safety have no authority to compel companies to carry out recalls— with the exception of FDA’s authority to require a recall for infant formula. USDA and FDA provide guidance to companies for carrying out voluntary recalls. We reported that USDA and FDA can do a better job in carrying out their food recall programs so they can quickly remove potentially unsafe food from the marketplace. These agencies do not know how promptly and completely companies are carrying out recalls, do not promptly verify that recalls have reached all segments of the distribution chain, and use procedures that may not be effective to alert consumers to a recall. The terrorist attacks of September 11, 2001, have heightened concerns about agriculture’s vulnerability to terrorism. The Homeland Security Act of 2002 assigned DHS the lead coordination responsibility for protecting the nation against terrorist attacks, including agroterrorism. Subsequent presidential directives further define agencies’ specific roles in protecting agriculture and the food system against terrorist attacks. We reported that in carrying out these new responsibilities, agencies have taken steps to better manage the risks of agroterrorism, including developing national plans and adopting standard protocols. However, we also found several management problems that can reduce the effectiveness of the agencies’ routine efforts to protect against agroterrorism. For example, there are weaknesses in the flow of critical information among key stakeholders and shortcomings in DHS’s coordination of federal working groups and research efforts. 
More than 80 percent of the seafood that Americans consume is imported. We reported in 2001 that FDA’s seafood inspection program did not sufficiently protect consumers. For example, FDA tested about 1 percent of imported seafood products. We subsequently found that FDA’s program has improved: More foreign firms are inspected, and inspections show that more U.S. seafood importers are complying with its requirements. Given FDA officials’ concerns about limited inspection resources, we also identified options, such as using personnel in the National Oceanic and Atmospheric Administration’s (NOAA) Seafood Inspection Program to augment FDA’s inspection capacity or state regulatory laboratories for analyzing imported seafood. FDA agreed with these options. In fiscal year 2003, four agencies—USDA, FDA, EPA, and NMFS—spent a total of $1.7 billion on food safety-related activities. USDA and FDA together were responsible for nearly 90 percent of federal expenditures for food safety. However, these expenditures were not based on the volume of foods regulated by the agencies or consumed by the public. The majority of federal expenditures for food safety inspection were directed toward USDA’s programs for ensuring the safety of meat, poultry, and egg products; however, USDA is responsible for regulating only about 20 percent of the food supply. In contrast, FDA, which is responsible for regulating about 80 percent of the food supply, accounted for only about 24 percent of expenditures. We have cited the need to integrate the fragmented federal food safety system as a significant challenge for the 21st century, to be addressed in light of the nation’s current deficit and growing structural fiscal imbalance. The traditional incremental approaches to budgeting will need to give way to more fundamental reexamination of the base of government. 
While prompted by fiscal necessity, such a reexamination can serve the vital function of updating programs to meet present and future challenges within current and expected resource levels. To help Congress review and reconsider the base of federal spending, we framed illustrative questions for decision makers to consider. While these questions can apply to other areas needing broad-based transformation, we specifically cited the myriad of food safety programs managed across several federal agencies. Among these questions are the following: How can agencies partner or integrate their activities in new ways, especially with each other, on crosscutting issues, share accountability for crosscutting outcomes, and evaluate their individual and organizational contributions to these outcomes? How can agencies more strategically manage their portfolio of tools and adopt more innovative methods to contribute to the achievement of national outcomes? Integration can create synergy and economies of scale and can provide more focused and efficient efforts to protect the nation’s food supply. Further, to respond to the nation’s pressing fiscal challenges, agencies may have to explore new ways to achieve their missions. We have identified such opportunities. For example, as I already mentioned, USDA and FDA spend resources on overlapping food safety activities, and we have made recommendations designed to reduce this overlap. Similarly, regarding FDA’s seafood inspection program, we have discussed options for FDA to use personnel at NOAA to augment FDA’s inspection capacity. Many of our recommendations to agencies to promote the safety and integrity of the nation’s food supply have been acted upon. Nevertheless, as we discuss in the 2007 High-Risk Series, a fundamental reexamination of the federal food safety system is warranted. 
Such a reexamination would need to address criticisms that have been raised about USDA’s dual mission as both a promoter of agricultural and food products and an overseer of their safety. Taken as a whole, our work indicates that Congress and the executive branch can and should create the environment needed to look across the activities of individual programs within specific agencies and toward the goals that the federal government is trying to achieve. To that end, we have recommended, among other things, that Congress enact comprehensive, uniform, and risk-based food safety legislation and commission the National Academy of Sciences or a blue ribbon panel to conduct a detailed analysis of alternative organizational food safety structures. We also recommended that the executive branch reconvene the President’s Council on Food Safety to facilitate interagency coordination on food safety regulation and programs. These actions can begin to address the fragmentation in the federal oversight of food safety. Going forward, to build a sustained focus on the safety and the integrity of the nation’s food supply, Congress and the executive branch can integrate various expectations for food safety with congressional oversight and through agencies’ strategic planning processes. The development of a governmentwide performance plan that is mission-based, is results-oriented, and provides a cross-agency perspective offers a framework to help ensure agencies’ goals are complementary and mutually reinforcing. Further, this plan can help decision makers balance trade-offs and compare performance when resource allocation and restructuring decisions are made. As I have discussed, GAO designated the federal oversight of food safety as a high-risk area that is in need of a broad-based transformation to achieve greater economy, efficiency, effectiveness, accountability, and sustainability. 
The high-risk designation raises the priority and visibility of this necessary transformation and thus can bring needed attention to address the weaknesses caused by a fragmented system. GAO stands ready to provide professional, objective, fact-based, and nonpartisan information and thereby assist Congress as it faces tough choices on how to fundamentally reexamine and transform the government. Lasting solutions to high-risk problems offer the potential to save billions of dollars, dramatically improve service to the American public, strengthen public confidence and trust in the performance and accountability of our national government, and ensure the ability of government to deliver on its promises. Madam Chairwoman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames, Acting Director, Natural Resources and Environment at (202) 512-3841 or ShamesL@gao.gov. Key contributors to this statement were Erin Lansburgh, Bart Fischer, Alison O’Neill, and Beverly Peterson. Homeland Security: Management and Coordination Problems Increase the Vulnerability of U.S. Agriculture to Foreign Pests and Disease. GAO-06-644. Washington, D.C.: May 19, 2006. Oversight of Food Safety Activities: Federal Agencies Should Pursue Opportunities to Reduce Overlap and Better Leverage Resources. GAO-05-213. Washington, D.C.: March 30, 2005. Food Safety: Experiences of Seven Countries in Consolidating Their Food Safety Systems. GAO-05-212. Washington, D.C.: February 22, 2005. Food Safety: USDA and FDA Need to Better Ensure Prompt and Complete Recalls of Potentially Unsafe Food. GAO-05-51. Washington, D.C.: October 6, 2004. 
Antibiotic Resistance: Federal Agencies Need to Better Focus Efforts to Address Risk to Humans from Antibiotic Use in Animals. GAO-04-490. Washington, D.C.: April 22, 2004. School Meal Program: Few Instances of Foodborne Outbreaks Reported, but Opportunities Exist to Enhance Outbreak Data and Food Safety Practices. GAO-03-530. Washington, D.C.: May 9, 2003. Food-Processing Security: Voluntary Efforts Are Under Way, but Federal Agencies Cannot Fully Assess Their Implementation. GAO-03-342. Washington, D.C.: February 14, 2003. Meat and Poultry: Better USDA Oversight and Enforcement of Safety Rules Needed to Reduce Risk of Foodborne Illnesses. GAO-02-902. Washington, D.C.: August 30, 2002. Genetically Modified Foods: Experts View Regimen of Safety Tests as Adequate, but FDA’s Evaluation Process Could Be Enhanced. GAO-02-566. Washington, D.C.: May 23, 2002. Food Safety: Improvements Needed in Overseeing the Safety of Dietary Supplements and “Functional Foods.” GAO/RCED-00-156. Washington, D.C.: July 11, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Each year, about 76 million people contract a foodborne illness in the United States; about 325,000 require hospitalization; and about 5,000 die. While the recent E. coli outbreaks highlighted the risks posed by accidental contamination, the attacks of September 11, 2001, heightened awareness that the food supply could also be vulnerable to deliberate contamination. This testimony focuses on the (1) role that GAO's high-risk series can play in raising the priority and visibility of the need to transform federal oversight of food safety, (2) fragmented nature of federal oversight of food safety, and (3) need to address federal oversight of food safety as a 21st century challenge. This work is based on previously issued reports. GAO's high-risk series is intended to raise the priority and visibility of government programs that are in need of broad-based transformation to achieve greater economy, efficiency, effectiveness, accountability, and sustainability. In January 2007, as part of our regular update of this series for each new Congress, GAO designated the federal oversight of food safety as a high-risk area for the first time. While this nation enjoys a plentiful and varied food supply that is generally considered to be safe, the federal oversight of food safety is fragmented, with 15 agencies collectively administering at least 30 laws related to food safety. The two primary agencies are the U.S. Department of Agriculture (USDA), which is responsible for the safety of meat, poultry, and processed egg products, and the Food and Drug Administration (FDA), which is responsible for other food. In many previous reports, GAO found that this fragmented system has caused inconsistent oversight, ineffective coordination, and inefficient use of resources. For example, existing statutes give agencies different regulatory and enforcement authorities. 
Under current law, thousands of USDA inspectors must examine all slaughtered carcasses and visit all processing facilities at least once during each operating day. However, federal law does not mandate the frequency of inspection for foods that are under FDA's jurisdiction. Food recalls are generally voluntary. While USDA and FDA provide guidance to companies for carrying out voluntary recalls, they do not know how promptly and completely companies carry out recalls and do not promptly verify that recalls have reached the entire distribution chain. In addition, they use procedures that may not be effective to alert consumers to a recall. Federal agencies are spending resources on overlapping food safety activities. USDA and FDA both inspect shipments of imported food at 18 U.S. ports of entry but do not share inspection resources at these ports. Integrating the fragmented federal food safety system is a significant challenge for the 21st century, particularly in light of the nation's current deficit and growing structural fiscal imbalance. To help Congress review and reconsider the base of federal spending, GAO framed illustrative questions for decision makers to consider in 21st Century Challenges: Reexamining the Base of the Federal Government. Among these questions are how agencies can integrate and share accountability for their activities on crosscutting issues and how they can adopt more innovative methods to contribute to the achievement of national outcomes. While framing these questions, GAO specifically cited the myriad of food safety programs managed across several federal agencies.
Approaches to electronic health information exchange have expanded in recent years with the increased adoption of EHRs and growth of HIE organizations. For example, some providers can electronically exchange clinical information via interoperable EHR systems. In cases in which providers wish to exchange electronic health information but do not have interoperable systems, HIE organizations can serve as key facilitators of exchange by providing for data connections among stakeholders, including laboratories, public health departments, hospitals, and physicians. Specifically, the use of EHR technology and the use of HIE organizations can allow providers to request and receive information about patients from other providers’ records, such as medication lists, laboratory results, or previous diagnoses and hospitalizations. For example, when a provider requests information, the HIE organization may be able to identify the sources of the requested data and initiate the electronic transmission that delivers the data from another provider’s EHR in a format that can be accepted and processed by the receiving provider’s EHR. Examples of exchange activities that can occur using EHR technology directly between providers or through an HIE organization are shown in figure 1. According to an April 2012 article, exchanging EHR information with other entities can be significantly more difficult for a provider than using EHRs to manage health information within the provider’s organization only— without exchanging the information with others. Appendix I provides information about the extent to which providers are able to electronically exchange health information, as reported by providers and stakeholders we interviewed. HITECH provided funding for various activities, including the Medicare and Medicaid EHR programs. 
These programs are intended to help increase the meaningful use of EHR technology by providing incentive payments for, and later imposing penalties on, providers—that is, certain hospitals and health care professionals such as physicians—who participate in Medicare or Medicaid. In terms of potential federal expenditures, these programs are the largest of the activities funded by HITECH. Within HHS, CMS and ONC have developed the programs’ requirements, which are organized into three stages. Stage 1, which began in 2011, set the basic functionalities EHRs must include, such as capturing data electronically and providing patients with electronic copies of health information. CMS and ONC indicated that Stage 1 allowed providers to test the capability of their EHRs to electronically exchange health information. Stage 2, which began in 2014, added requirements such as increased health information exchange between providers to improve care coordination for patients. For example, Stage 2 will require hospitals and professionals to provide an electronic summary of care document for each transition of care or referral to another provider, whereas in Stage 1 this measure was optional. Stage 3, which is scheduled to go into effect in 2017, will continue to expand on meaningful use to improve health care outcomes and the exchange of health information, according to CMS and ONC. The requirements for this stage have not yet been developed. ONC is responsible for identifying health data standards and technical specifications for EHR technology and establishing and overseeing the certification of EHR technology. As part of the EHR programs, providers must report annually on certain mandatory meaningful use measures and on additional measures that they may choose from a menu of measures. Appendix II describes those Stage 1 and Stage 2 meaningful use measures that CMS and ONC reported as specifically relating to health information exchange. 
Providers and stakeholders we interviewed cited key challenges to electronic health information exchange; in particular, they cited issues related to insufficient standards, concerns about how privacy rules can vary among states, difficulties in matching patients to their records, and costs associated with electronic health information exchange. CMS and ONC officials noted that they have several ongoing programs and initiatives to help address some aspects of these key challenges, but concerns in these areas continue to exist. Reported insufficiencies in standards for electronic health information exchange. While standards for electronically exchanging information within the EHR programs exist, providers reported that standards may not be sufficient in some areas. Information that is electronically exchanged from one provider to another must adhere to the same standards in order to be interpreted and used in EHRs, thereby permitting interoperability. Several providers stated that they often have difficulty exchanging certain types of health information with other providers that have a different EHR system due to a lack of sufficient standards to support exchange. One area for which providers told us standards were insufficient relates to standards for allergies. Specifically, one provider noted that there are not sufficient standards to define allergic reactions, and another provider explained that some EHR systems classify an allergic reaction as a side effect, while other EHR systems classify the same reaction as an allergy. Such differences can cause confusion when health information is exchanged among providers because providers who receive information may have difficulty locating or using information on allergies if their EHR systems classify the information differently than the EHR systems of the providers who sent the information. 
Similarly, an article from the Journal of the American Medical Informatics Association stated that the proper terminology for encoding patients’ allergies is complex, that some gaps still exist across existing standards, and that RxNorm and the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) should be expanded for nonmedication allergies and allergic reactions. Providers who participated in the EHR programs from fiscal year 2011 through fiscal year 2013 could use certified EHR technology that conformed to the 2011 edition of the standards and certification criteria. All providers that participate in the EHR programs in fiscal year 2014 must conform to the 2014 edition of the standards and certification criteria, and ONC is expected to develop another set of standards and certification criteria that certified EHR technology would be required to conform to beginning in 2016. HHS expects that providers using the 2014 edition will have greater ability to exchange information. Another limitation is the prevalence of unstructured clinical data: patient encounters are commonly recorded in free-form text narratives, which give providers flexibility to note observations that are not supported by structured data but are not easily searched or aggregated and can be more difficult to analyze. In addition, several providers and stakeholders commented that the Direct Protocol allows for limited exchange, such as exchanging a secure email message, rather than enabling certain other functionalities, such as the ability to query another EHR system. Reported variation in state privacy rules and lack of clarity about requirements. Some providers noted that exchanging health information with providers in other states can be difficult due to their limited understanding of variations in privacy rules from state to state. 
Some providers also noted that exchange can be especially difficult in cases when providers are located close to state borders and therefore serve patients from another state. Providers that are covered by the Health Insurance Portability and Accountability Act (HIPAA) of 1996 must adhere to federal privacy rules and can also be subject to state privacy rules. These state rules can be more stringent than HIPAA requirements or standards. HIPAA’s Administrative Simplification Provisions required the establishment of, among other things, national privacy standards. Pub. L. No. 104-191, Title II, Subtitle F, 110 Stat. 1936, 2021 (codified at 42 U.S.C. §§ 1320d–1320d-8). These provisions also expressly provided that such national standards would not preempt state laws that impose requirements, standards, or implementation specifications that are more stringent than those imposed under federal regulation. Pub. L. No. 104-191, Title II, Subtitle F, 110 Stat. 1936, 2021, see 42 U.S.C. §§ 1320d–2 notes. HIPAA regulates covered entities’ (including most health care providers’) use and disclosure of personal health information. The Privacy Rule generally permits the use or disclosure of an individual’s protected health information without the individual’s written authorization for purposes of treatment, payment and health care operations. Under the Privacy Rule, more stringent state laws that are not preempted by federal law include those that prohibit or restrict a use or disclosure in circumstances under which such use or disclosure would be permitted under HIPAA. See 45 C.F.R. 160.202. To address privacy issues related to electronic health information exchange, ONC officials have several ongoing efforts. For example, ONC has issued high-level guidance for providers on how to ensure the privacy and security of health information covering a wide range of topics related to meaningful use and the HIPAA Privacy and Security Rules, among other things. 
Regarding state privacy laws, this guidance suggests that providers seek information from state agencies, RECs, and professional associations to understand how state laws affect the sharing of patient health information. In addition, ONC began the Data Segmentation for Privacy Initiative to develop and pilot test standards for managing patient consents and data segmentation. As part of this initiative, ONC released an implementation guide for consent management and data segmentation in the summer of 2012, and the agency is currently pilot testing this guide. In addition, ONC’s state HIE organization program is currently receiving reports from states on how they are implementing their state’s privacy rules. Officials expect to receive the information from states by March 2014. ONC officials are hopeful that these efforts will help address privacy concerns and, as a result, facilitate exchange efforts for providers. Although ONC is working on privacy issues, some providers we spoke with reported that lack of clarity in state privacy laws is one reason that they have experienced difficulty exchanging health information with providers in other states. They found it difficult to ensure they were compliant with state laws when exchanging certain personal health information with providers in another state. For example, some providers in Minnesota and Massachusetts noted that some state laws have stringent requirements for sharing health information related to mental health, or human immunodeficiency virus or other sexually transmitted infections. In addition, some providers told us that different providers in their state have different interpretations regarding how frequently they must obtain consent from the patient, as required under the state privacy rule, for the exchange of patients’ health information.
For example, some providers may interpret the state privacy rule to mean that every time a patient’s health information is exchanged with another provider they have to obtain consent. Other providers in the same state may interpret the state privacy rule to mean that they have to obtain consent only once. In addition to the privacy challenges identified by providers, stakeholders responding to HHS’s March 2013 RFI also identified privacy as a challenge related to health information exchange, and noted that additional training for providers on varying state privacy laws is needed to address this challenge. Stakeholders also suggested that HHS could focus more resources on consent policies and recommended that HHS undertake additional work to facilitate (1) electronically obtaining patient consent for disclosing health information, and (2) communicating that consent along with the related health information. Reported difficulty of accurately matching patients to their health records. Some providers we interviewed reported that they do not have an accurate and efficient way to match patients to their records when exchanging health information. Multiple providers and stakeholders cited situations in which several of their patients are listed with the same name and birth year, and live in the same area. Two of these providers reported that patients can be matched to the wrong set of records, and that providers often need to manually match records, which is time-consuming. Some stakeholders also noted similar problems, including safety concerns from incorrect patient matching. HHS programs or initiatives to address patient matching issues related to health information exchange include both a patient matching project and efforts by two federal advisory committees.
According to ONC officials, planning for the Patient Matching Initiative was begun by the State Health Information Exchange Cooperative Agreement Program in July 2013, and the project launched publicly in September 2013. The goals of the initiative are to (1) improve patient matching based on an assessment of current approaches used by selected stakeholders, (2) identify key attributes and algorithms for matching patients to their records, and (3) define processes or best practices to support the identified key attributes. The first phase of the initiative was completed in February 2014 with the release of a report containing patient matching recommendations for possible inclusion in Stage 3 of the EHR programs and the 2015 edition of the standards and certification criteria. The two federal advisory committees established under HITECH, the HIT Policy Committee and the HIT Standards Committee, made recommendations to HHS in 2011 that relate to patient matching. The HIT Policy Committee recommended standardized formats for demographic data fields, internally evaluating matching accuracy, accountability, developing, promoting, and disseminating best practices, and supporting the role of the patient. The HIT Standards Committee made four recommendations on patient matching covering patient attributes that could be used, data quality issues, formats for data elements, and the data that could be returned from a match request. According to ONC officials, as of July 2013 ONC had efforts under way to respond to these recommendations, under the Patient Matching Initiative, in coordination with the committees. For example, to address one recommendation related to developing, promoting, and disseminating best practices, ONC officials said that they plan to determine which approaches to patient matching work best and develop guidance to help organizations implement such steps.
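The multi-attribute matching approaches described above can be illustrated with a brief sketch. The field names, weights, and threshold below are hypothetical, chosen only to show how agreement across several demographic attributes can be scored; they are not drawn from ONC's initiative or the committees' recommendations.

```python
def normalize(value):
    """Lowercase and trim so formatting differences alone don't block a match."""
    return value.strip().lower() if value else ""

def match_score(record_a, record_b, weights=None):
    """Weighted count of demographic attributes on which two records agree."""
    if weights is None:
        # Hypothetical weights: more distinguishing attributes count for more.
        weights = {"last_name": 2, "first_name": 1,
                   "birth_date": 3, "zip": 1, "phone": 2}
    score = 0
    for field, weight in weights.items():
        a, b = normalize(record_a.get(field)), normalize(record_b.get(field))
        if a and a == b:
            score += weight
    return score

def is_candidate_match(record_a, record_b, threshold=6):
    """Flag a pair of records as a likely match only when enough attributes agree."""
    return match_score(record_a, record_b) >= threshold

# Two patients with the same name living in the same area, as in the
# situations providers described; differing birth dates and phone numbers
# keep the pair below the threshold.
a = {"last_name": "Smith", "first_name": "John", "birth_date": "1960-04-02",
     "zip": "55401", "phone": "612-555-0100"}
b = {"last_name": "Smith", "first_name": "John", "birth_date": "1960-07-19",
     "zip": "55401", "phone": "612-555-0199"}
print(is_candidate_match(a, b))   # False: shared name and ZIP alone are not enough
print(is_candidate_match(a, a))   # True: all attributes agree
```

Production systems typically go further, adding phonetic name comparison, address standardization, and probabilistic weighting, which is part of why matching remains difficult even where such algorithms are in use.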
Although HHS has ongoing efforts to address the patient matching challenge, several providers and stakeholders commented that more work needs to be done on this issue. Some providers we interviewed use different methodologies, such as algorithms that make use of multiple patient attributes, for identifying patients. However, providers told us that they still have challenges matching patients to their records. Several providers and stakeholders have stated that there should be a national patient identifier for matching patients to their records. Some stakeholders who responded to HHS’s March 2013 RFI stated that HHS has an opportunity to reduce the potential risks of engaging in exchange by focusing more resources on patient matching. Reported challenges with cost of exchanging health information. Providers we interviewed reported challenges covering costs associated with health information exchange, including upfront costs associated with purchasing and implementing EHR systems, fees for participation in state or local HIE organizations, and per-transaction fees for exchanging health information charged by some vendors or HIE organizations. Several providers said that they must invest in additional capabilities, such as establishing interfaces for exchange with laboratories or other entities such as HIE organizations. For example, many providers told us that the cost of developing, implementing, and maintaining interfaces with others to exchange health information is a significant barrier. One provider and several officials estimated that providers spend between $50,000 and $80,000 to establish data exchange interfaces. Other stakeholders we interviewed or who responded to HHS’s March 2013 RFI also identified costs associated with participation in HIE organizations and maintaining EHR systems as a challenge for providers.
To address costs of exchanging health information, ONC’s State Health Information Exchange Cooperative Agreement Program has provided funding to HIE organizations. Agency officials stated that by funding HIE organizations, a relatively low-cost option can be made available for providers to use to exchange health information. However, ONC officials said that this program is scheduled to end in March 2014. In addition, several providers we interviewed told us that the benefits to them of joining an HIE organization often do not exceed the costs, in some cases because few providers have joined their state or regional HIE organizations, resulting in limited opportunities to exchange health information. Some providers told us they do not participate in HIE organizations because they exchange information in other ways that they believe are more efficient, such as exchanging directly with other providers that use the same EHR system from the same vendor. One study noted that most health care providers, including over 65 percent of hospitals and 90 percent of physician practices, were not participating in HIE organizations. HHS payments to providers under the EHR programs can help support the cost of exchange, but providers can participate in the programs without routinely exchanging information electronically that could lead to improved care. While some of the meaningful use requirements for Stage 1 and Stage 2 help to facilitate the exchange of health information, they require exchange only under certain circumstances. (See app. II for more information.) For example, one part of the requirement to provide a summary of care document for each transition of care or referral in Stage 2 compels providers to complete either (1) one successful electronic exchange of a summary of care record with a recipient using technology designed by an EHR developer other than the sender’s, or (2) one successful test with CMS’s test EHR during the reporting period.
One stakeholder we spoke with explained that for this part of the requirement some providers just complete one successful test with CMS’s test EHR and do not routinely demonstrate exchanging health information electronically with other EHR systems. HHS officials stated that Stage 2 is an incremental step toward advancing exchange, and that providers generally do not yet have the technology to enable greater exchange. The requirement to provide a summary of care document for each transition of care or referral in Stage 2 also requires eligible professionals and hospitals to provide summary of care documents for more than 10 percent of transitions of care and referrals either (1) electronically transmitted using certified EHR technology or (2) through an exchange with an organization that is a Nationwide Health Information Network Exchange participant or in a way that is consistent with the Nationwide Health Information Network. The Nationwide Health Information Network was a program funded by ONC that transitioned to the eHealth Exchange, a group of federal agencies and nonfederal organizations whose mission, among other things, is to improve public health reporting through secure, trusted, and interoperable health information exchange. CMS and ONC have identified a minimum set of technical capabilities that are required for an EHR to be considered a test EHR. Eligible professionals and hospitals that elect to attest to this requirement will be randomly matched with a designated test EHR that is designed by an EHR developer other than the sender’s. HHS, including CMS and ONC, developed and issued a strategy document in August 2013 that describes how it expects to advance electronic health information exchange, with principles to guide future actions in three broad areas—accelerating health information exchange, advancing standards and interoperability, and patient engagement.
Examples of principles in the strategy include (1) working with multiple stakeholders to develop standards and facilitating the adoption and use of standards among federal agencies; (2) supporting the privacy, security, and integrity of patient health information across health information exchange activities; (3) seeking to enable a patient’s health information to be available wherever the patient accesses care, to support patient-centered care delivery; and (4) supporting exchange through state-led efforts to reduce costs to providers. (See app. III for a complete list of principles.) According to the strategy, these principles have the potential to address the key health information challenges identified by providers and stakeholders we interviewed, which relate to standards, patients’ privacy, matching patients with data, and costs. GAO-04-408T. See Pub. L. No. 103-62, 107 Stat. 285 (1993) (GPRA), as amended by Pub. L. No. 111-352, 124 Stat. 3866 (2011) (GPRAMA). GPRA requires, among other things, that federal agencies develop strategic plans that include agencywide goals and strategies for achieving those goals. We have reported that these requirements also can serve as leading practices for planning at lower levels within federal agencies, such as individual programs or initiatives. These leading practices include identifying specific actions, priorities, and milestones, which can be used to evaluate the progress of programs and to determine whether adjustments need to be made in order to maintain progress within given time frames. Below are examples of how the lack of these elements affects the HHS strategy. Specific Actions. While the strategy mentions that HHS seeks to enable a patient’s health information to be available wherever the patient accesses care, it does not indicate specific actions that HHS will take to implement that principle or how those actions would overcome exchange-related challenges. Including specific actions could enhance the strategy’s usefulness for helping to make program management decisions. Prioritized Actions.
While the HHS strategy states that HHS will continue to evaluate short- and long-term steps to advance exchange, it does not clearly delineate how future actions related to the principles should be prioritized. Prioritizing actions can help HHS ensure that the most appropriate activities are completed first, to more efficiently achieve the goal of advancing exchange. Milestones. The HHS strategy does not provide milestones with specific time frames to help the agencies gauge their progress in advancing exchange. Exchange-related milestones with specified time frames could be particularly useful because they could provide a framework for determining whether any actions HHS intends to take could help lead to progress in addressing the challenges providers face related to exchange. Milestones with time frames could also set realistic expectations so stakeholders can anticipate when they can expect to see actions to advance exchange. CMS and ONC officials acknowledged the importance of providers being able to exchange health information effectively by Stage 3 of the EHR programs to allow for improved outcomes such as quality, efficiency, and patient safety. Determining specific, prioritized actions and exchange-related milestones with specified time frames can help to ensure that the agencies’ principles and future actions result in timely improvements in addressing the key exchange-related challenges reported by providers and stakeholders, which are particularly important because planning for Stage 3 is expected to begin as soon as 2014. This information could also help HHS prioritize its future actions based on whether health information is being exchanged effectively among providers, in order to better achieve the EHR programs’ ultimate goals of improving quality, efficiency, and patient safety.
HHS and providers have made some progress toward addressing challenges reported by providers and others related to the electronic exchange of health information, but these challenges are complex and difficult to address and are likely to persist. Some of HHS’s most important efforts, such as designing the 2014 edition of the standards and certification criteria to include an increased exchange capability in EHR systems, may lead to greater exchange over the next year. In addition, exchange may increase as providers modify their systems to meet more stringent exchange-related requirements in Stage 2 of the EHR programs. However, a number of remaining challenges make these outcomes uncertain. HHS has both ongoing programs and future plans to address concerns about exchange, but it is not always clear how HHS will effectively prioritize and implement its potential responses to the challenges of exchange. Specifically, the HHS strategy to advance electronic health information exchange does not identify specific actions that CMS and ONC expect will lead to increased exchange, prioritize these actions, or include milestones for gauging progress over time. Guidance on planning and implementing effective strategies highlights the importance of key elements, such as specific, prioritized actions and milestones for gauging progress. These elements could help the agencies make future adjustments based on the effectiveness of their efforts. Exchange is especially important because of its potential to help improve coordination of care within the fragmented health care system. According to CMS and ONC officials, ensuring progress in providers’ ability to electronically exchange information is critical for the effective implementation of the EHR programs.
Without a sufficient focus on exchange—including specific, prioritized actions with milestones and time frames—CMS and ONC run the risk that the desired outcomes of the EHR programs of improved quality, efficiency, and patient safety will be compromised. To address challenges that affect the ability of providers to electronically exchange health information, we recommend that the Secretary of Health and Human Services direct CMS and ONC to take the following two actions: develop and prioritize specific actions that HHS will take consistent with the principles in HHS’s strategy to advance health information exchange; and develop milestones with time frames for the actions to better gauge progress toward advancing exchange, with appropriate adjustments over time. We provided a draft of this report to HHS for comment. HHS provided written comments, which are reprinted in appendix IV. HHS concurred with our recommendations. For the first recommendation, HHS (including CMS and ONC) stated that it has begun to develop and prioritize specific action items, consistent with the principles in its strategy to advance health information exchange, and that it has begun to take action on some of the prioritized items. For the second recommendation, HHS (including CMS and ONC) stated that it has begun developing milestones with time frames for the actions to better gauge progress toward advancing exchange. In general, HHS’s comments also reiterated that the electronic exchange of health information is a key element of meaningful use and ultimately will be critical for the success of health care delivery system reforms under the Patient Protection and Affordable Care Act. 
HHS also stated that it has begun to take definitive steps to accelerate exchange through policy guidance, grant funding to states, and development of standards and certification, such as collaborating with private sector organizations that develop health IT standards to fill key gaps in standards to better support information exchange during transitions in care and when coordinating care across providers. Additionally, HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of CMS, the National Coordinator for Health Information Technology, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V. This appendix provides additional information reported from providers we spoke with about health information exchange and its related benefits. We conducted a total of 25 interviews with providers and stakeholders, such as regional extension centers (REC) and health information exchange organizations (HIE organization), in four states—Georgia, Massachusetts, Minnesota, and North Carolina. We interviewed staff from at least two hospitals or health systems and at least one physician office or group practice in each state. We selected the four states because they were mentioned during interviews with officials from HHS and relevant stakeholders as having ongoing efforts related to health information exchange. 
We asked interviewees about what types of patient health information providers are currently able to electronically exchange, the methods used to exchange such information, and the benefits providers have realized or foresee from such exchange. Providers we interviewed reported that the most critical types of health information that they need to be able to electronically exchange include patient allergy information, medication lists, and problem lists. However, providers generally reported being able to electronically exchange only specific types of health information at this time, including lab orders and results, immunization and prescription information, and certain clinical documents. For example, almost all the providers we interviewed reported some exchange of lab information. In most cases, such exchanges involved both the submission of lab orders and the receipt of lab results via interfaces designed for exchange between providers and labs or through their electronic health record (EHR) system. While these exchanges were reported to generally occur between providers and laboratories outside their organizations, two providers noted that such capabilities were still limited to sharing lab information with others in the same health system. Some providers also reported electronically exchanging some information with state public health departments, generally immunization data and notification of certain infectious diseases. They said that these electronic exchanges were generally limited to submissions to the departments and did not include receipt of data from these departments. Several of the providers we interviewed said they engaged in e-prescribing activities, which in some instances included both the submission of electronic prescriptions to pharmacies and the receipt of medication information from pharmacies.
However, some providers noted that such exchanges could take place only if the pharmacy had a compatible e-prescribing system that could electronically receive prescription information from the provider’s EHR system. In the absence of compatible systems, faxes were used. Several providers we interviewed also noted that they could exchange continuity of care documents (CCD) with other providers in their organization, although the exchange of this type of information varied among the providers we interviewed. Several providers said they could exchange CCDs within their health system, whereas other providers said they could exchange this information only with providers using the same EHR vendor. Providers in all four states and stakeholders that we interviewed reported that, at this time, methods used to electronically exchange health information are limited to use within health systems, use between certain EHR systems, or use of the Direct Protocol. For example, in Georgia, REC officials and the four providers we spoke with told us that electronic exchange is generally occurring only within health systems and among those affiliated providers that work in the health systems. Some providers noted that they could electronically exchange lab orders and results outside their organizations, but one provider noted that even this information was still exchanged electronically only within its hospital. Providers in Minnesota, Massachusetts, and North Carolina reported that they used the same EHR system from the same vendor and were able to electronically exchange all patient clinical information with any other entity using that vendor via an interoperability feature. According to these providers, this interoperability feature provides a mechanism for them to electronically exchange all types of clinical information about their patients.
A community-based hospital in Minnesota reported using a different EHR system than was used by the other, larger health systems in the community it shared information with. This provider reported relying on the Direct Protocol to electronically exchange some limited health information with other providers in the region. A provider in Massachusetts noted that it was building web-based “view portals” to allow other providers outside its health system to view health information electronically in order to help coordinate patient care. Providers that participated in an HIE organization reported being able to electronically exchange health information with other providers. Others have opted to electronically exchange information using their EHR technology rather than an HIE organization, even if one was available. In Massachusetts, some providers told us that they are able to directly connect to the state’s HIE organization in order to electronically exchange health information, such as CCDs. However, not all providers in the state are electronically exchanging information at this time. A Massachusetts law calls for the creation and maintenance of a state HIE organization that allows providers in all health care settings to exchange patient health information with other providers by the end of 2016. Some providers we spoke with in Minnesota said they had no plans to join any of the HIE organizations available in the state at this time due to the limited benefits they would realize from participating, and would instead continue to rely on their EHR technology to electronically exchange health information with other providers that use the same vendor. Some providers noted that without a sufficient number of other providers participating in an HIE organization, it would be of limited value. Several Georgia and North Carolina providers reported that the availability of an HIE organization could help facilitate electronic exchange among providers. 
Entities in both states are establishing regional HIE organizations that will ultimately connect to one another via a statewide HIE organization. Providers in both states said they expected that the HIE organizations, once established, would facilitate broader electronic exchange of health information throughout the state. Although providers we interviewed described certain circumstances when they could electronically exchange health information, they indicated that they would like to expand the electronic exchange of health information and cited a variety of benefits related to such electronic exchange. For example, some providers noted that electronic exchange can provide access to critical information needed when administering medical care, thus improving care quality and reducing duplicative testing; improve access to information related to a patient’s health history, including medication histories and previous diagnoses; result in more timely access to information, which is particularly helpful in emergency departments; and reduce labor-intensive efforts to send and receive health information in paper form, such as a printed document, or conduct public health reporting activities. This appendix provides information on the Stage 1 and Stage 2 meaningful use measures related to electronic health information exchange, according to officials from the Centers for Medicare & Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC). According to these officials, Stage 2, which began in 2014, provides additional requirements related to the exchange of health information. For example, some meaningful use measures related to health information exchange that providers could select from a menu of optional measures in Stage 1 are mandatory for Stage 2. In addition, some Stage 2 measures are new. 
For example, the measure “provide structured electronic lab results to ambulatory providers” is a new measure for hospitals in Stage 2. See table 1 for more information. This appendix provides information on the principles that the Centers for Medicare & Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC) plan to use to guide their future actions to facilitate health information exchange. These principles are outlined in a strategy that the agencies released in August 2013 to describe how they expect the principles to lead to future actions that have the potential to address the key challenges providers and stakeholders have identified relative to electronic health information exchange in four areas—standards, patients’ privacy, matching patients with data, and costs. The strategy includes principles under three broad categories—accelerating health information exchange, advancing standards and interoperability, and patient engagement. See table 2 for more information. In addition to the contact named above, Will Simerl, Assistant Director; La Sherri Bush; Thomas Murphy; Monica Perez-Nelson; Roseanne Price; Andrea Richardson; Teresa Tucker; and Rebecca Rust Williamson made key contributions to this report.
The Health Information Technology for Economic and Clinical Health Act (HITECH) promotes the use of health information technology and identifies the importance of health information exchange. It provides incentive payments to promote the widespread adoption and meaningful use of EHR technology. To be a meaningful user, providers are to demonstrate, among other things, that their certified EHR technology can electronically exchange health information. GAO examined (1) the key challenges to the electronic exchange of health information, if any, that have been reported by providers and stakeholders, and HHS's ongoing efforts to address them, and (2) the extent to which HHS has planned future actions to address those key challenges. GAO reviewed HHS documentation; interviewed HHS officials; and interviewed providers—hospital officials and physicians—and relevant stakeholders about their experiences. Providers and stakeholders GAO interviewed in four states with ongoing electronic health information exchange efforts cited key challenges to exchange, in particular, issues related to insufficient standards, concerns about how privacy rules can vary among states, difficulties in matching patients to their records, and costs associated with exchange. Officials from the Centers for Medicare & Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC)—agencies within the Department of Health and Human Services (HHS)—noted that they have several ongoing programs and initiatives to help address some aspects of these key challenges, but concerns in these areas continue to exist. For example, several providers GAO interviewed said that they have difficulty exchanging certain types of health information due to insufficient health data standards. 
Although HHS has begun to address insufficiencies in standards through its Medicare and Medicaid Electronic Health Record (EHR) programs, such as through the introduction of new 2014 standards for certified EHR technology, it is unclear whether its efforts will lead to widespread improvements in electronic health information exchange. In addition, providers GAO interviewed reported challenges covering costs associated with electronic exchange, such as upfront costs associated with purchasing and implementing EHR systems. While HHS is working to address this challenge through various efforts, including a program that helps fund health information exchange organizations—organizations that provide support to facilitate the electronic exchange of health information—some providers told GAO they do not participate in these organizations because they see limited opportunities for exchanging information through them. HHS, including CMS and ONC, developed and issued a strategy document in August 2013 that describes how it expects to advance electronic health information exchange. The strategy identifies principles intended to guide future actions to address the key challenges that providers and stakeholders have identified. However, the HHS strategy does not specify any such actions, how any actions should be prioritized, what milestones the actions need to achieve, or when these milestones need to be accomplished. GAO's prior work, consistent with the Government Performance and Results Act Modernization Act of 2010 (GPRAMA), sets forth several key elements of strategies that can guide agencies in planning and implementing an effective government program. As noted in GAO's prior work, elements such as specific actions, priorities, and milestones are desirable for evaluating progress, achieving results in specified time frames, and ensuring effective oversight and accountability. 
Determining specific actions and exchange-related milestones with specified time frames can help to ensure that the agencies' principles and future actions result in timely improvements in addressing the key challenges reported by providers and stakeholders; this is particularly important because planning for Stage 3 of the EHR programs, which focuses on improving outcomes, is expected to begin as soon as 2014. This information could also help CMS and ONC prioritize their future actions based on whether health information is being exchanged effectively among providers, in order to better achieve the EHR programs' ultimate goals of improving quality, efficiency, and patient safety. GAO recommends that CMS and ONC (1) develop and prioritize specific actions that HHS will take consistent with the principles in HHS's strategy to advance health information exchange, and (2) develop milestones with time frames for the actions to better gauge progress toward advancing exchange, with appropriate adjustments over time. In commenting on the draft report, HHS, including CMS and ONC, concurred with these recommendations.
Roughly half of all workers participate in an employer-sponsored retirement, or pension, plan. Private sector pension plans are classified either as defined benefit or as defined contribution plans. Defined benefit plans promise to provide, generally, a fixed level of monthly retirement income that is based on salary, years of service, and age at retirement regardless of how the plan’s investments perform. In contrast, benefits from defined contribution plans are based on the contributions to and the performance of the investments in individual accounts, which may fluctuate in value. Examples of defined contribution plans include 401(k) plans, employee stock ownership plans, and profit-sharing plans. Over the past two decades, there has been a noticeable shift by employers away from defined benefit plans to defined contribution plans. The most dominant and fastest growing defined contribution plans are 401(k) plans, which allow workers to choose to contribute a portion of their pre-tax compensation to the plan under section 401(k) of the Internal Revenue Code. The use of 401(k) plans accelerated in the 1980s after the Treasury issued a ruling clarifying a new section of the tax code that allowed employers and employees to make pre-tax contributions, up to certain limits, to employees’ individual accounts. According to the most recent data from Labor, most 401(k) plans are participant-directed, meaning that a participant makes investment decisions about his or her own retirement plan contributions. About 87 percent of all 401(k) plans—covering 92 percent of all 401(k) plan participants and 91 percent of all 401(k) plan assets—generally allow participants to choose how much to invest, within federal limits, and to select from a menu of diversified investment options selected by the employer sponsoring the plan, such as an assortment of mutual funds that include a mix of stocks, bonds, or money market investments. 
Equity funds accounted for nearly half of the 401(k) plan assets at the close of 2005. Equity funds are investment options that invest primarily in stocks, such as mutual funds, bank collective funds, life insurance separate accounts, and certain pooled investment products (see fig. 1). Other plan assets were invested in company stock; stable value funds, including guaranteed investment contracts; balanced funds; bond funds; and money funds. As participants accrue earnings on their investments, they also pay a number of fees, including expenses, commissions, or other charges associated with operating a 401(k) plan. Over the course of the employee’s career, fees may significantly decrease retirement savings. For example, a 1-percentage-point difference in fees can significantly reduce the amount of money saved for retirement. Assume a 45-year-old employee with 20 years until retirement changes employers and leaves $20,000 in a 401(k) account until retirement. If the average annual net return is 6.5 percent—a 7 percent investment return minus a 0.5 percent charge for fees—the $20,000 will grow to about $70,500 at retirement. However, if fees are instead 1.5 percent annually, the average net return is reduced to 5.5 percent, and the $20,000 will grow to only about $58,400. The additional 1 percent annual charge for fees would reduce the account balance at retirement by about 17 percent. Fees are charged by the various outside companies that the plan sponsor—often the employer offering the 401(k) plan—hires to provide a number of services necessary to operate the plan.
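The arithmetic behind this example can be checked with a short compound-growth calculation (a minimal sketch; it assumes a constant annual return with fees modeled as a flat reduction in that return, as the example above does):

```python
def balance_at_retirement(principal, gross_return, annual_fee, years):
    """Future value of a lump sum compounding at the net annual return."""
    net_return = gross_return - annual_fee
    return principal * (1 + net_return) ** years

# 7 percent gross return, 0.5 percent vs. 1.5 percent annual fees, 20 years
low_fee = balance_at_retirement(20_000, 0.07, 0.005, 20)   # about $70,500
high_fee = balance_at_retirement(20_000, 0.07, 0.015, 20)  # about $58,400
reduction = 1 - high_fee / low_fee                         # about 17 percent

print(f"0.5% fee: ${low_fee:,.0f}")
print(f"1.5% fee: ${high_fee:,.0f}")
print(f"Balance reduced by: {reduction:.0%}")
```

The 1-percentage-point fee difference compounds every year, which is why the gap grows far beyond 1 percent of the final balance.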
Services can include investment management (i.e., selecting and managing the securities included in a mutual fund); consulting and providing financial advice (i.e., selecting vendors for investment options or other services); record keeping (i.e., tracking individual account contributions); custodial or trustee services for plan assets (i.e., holding the plan assets in a bank); and telephone or Web-based customer services for participants. As shown in figures 2 and 3, generally there are two ways to provide services: “bundled” (the sponsor hires one company that provides the full range of services directly or through subcontracts) and “unbundled” (the sponsor uses a combination of service providers). Labor’s Employee Benefits Security Administration (EBSA) oversees 401(k) plans—including the fees associated with running the plans— because they are considered employee benefit plans under ERISA. Enacted before 401(k) plans came into wide use, ERISA establishes the responsibilities of employee benefit plan decision makers and the requirements for disclosing and reporting plan fees. Typically, the plan sponsor is a fiduciary. A plan fiduciary includes a person who has discretionary control or authority over the management or administration of the plan, including the plan’s assets. ERISA requires that plan sponsors responsible for managing employee benefit plans carry out their responsibilities prudently and do so solely in the interest of the plan’s participants and beneficiaries. The law also provides Labor with oversight authority of pension plans. However, the specific investment products commonly contained in pension plans—such as company stock, mutual funds, collective investment funds, and group annuity contracts—fall under the authority of the applicable securities, banking, or insurance regulators. The SEC, among other responsibilities, regulates registered securities including company stock and mutual funds under securities law. 
The federal agencies charged with oversight of banks—primarily FRB, OCC, and FDIC—regulate bank investment products, such as collective investment funds. State agencies generally regulate insurance products, such as variable annuity contracts. Such investment products may also include one or more insurance elements, which are not present in other investment options. Generally, these elements include an annuity feature, interest and expense guarantees, and any death benefit provided during the term of the contract. An investment company, bank, or insurance company that is a service provider to a 401(k) plan may offer any or all of these types of investment products as plan options. Investment fees—which are charged by companies that manage mutual funds or other investment products for all services related to operating the fund—comprise the majority of fees in 401(k) plans and are typically borne by participants. Plan record-keeping fees generally account for the next largest portion of plan fees. These fees cover the cost of various administrative activities carried out to maintain participant accounts. Participants typically pay for investment fees, which are usually based on assets in their accounts. Although plan sponsors often pay for record-keeping fees, participants bear them in an increasing number of plans. Investment fees and plan record-keeping fees comprise the vast majority of total plan fees. Investment fees are, for example, fees charged by companies that manage a mutual fund for all services related to operating the fund. These fees pay for: selecting a mutual fund’s portfolio of securities and managing the fund; marketing the fund and compensating brokers who sell the fund; and providing other shareholder services, such as distributing the fund prospectus.
These fees are charged regardless of whether the mutual fund or other investment product, such as collective investment funds or group annuity contracts, is part of a 401(k) plan or purchased by individual investors in the retail market. As such, the fees are usually different for each investment option available to participants in a 401(k) plan. Investment fees account for the majority of 401(k) plan fees regardless of plan size. For example, as figure 4 illustrates, a 2005 industry survey estimated that investment fees accounted for 84.5 percent of total fees in plans with 25 members and for 98.6 percent of total fees in plans with 2,000 participants. Since investment fees account for the bulk of plan fees, several investment consultants we interviewed encourage 401(k) plan sponsors to offer options such as institutional funds to lower fees. Institutional mutual funds resemble funds available in the retail market, but are typically only available to 401(k) plans with assets above a certain threshold, such as $1 million. Similarly, indexed funds have lower management fees than actively managed funds. These funds closely track a market performance indicator, such as the Standard & Poor’s 500, which largely eliminates expenditures associated with research, investment selection, and buying and selling. Plan record-keeping fees, which cover individual account maintenance for plan participants, generally constitute the second-largest portion of plan fees. Unlike investment fees, plan record-keeping fees apply to the entire 401(k) plan rather than the individual investment options. Plan record-keeping fees are usually charged by the service provider to set up and maintain the 401(k) plan. These fees cover a variety of activities such as enrolling plan participants, processing participant fund selections, preparing and mailing account statements, and other related administration activities.
A 2005 industry survey of service providers estimated that plan record-keeping fees constituted 12 percent of total plan fees for plans with 25 participants. As shown in figure 5, these fees make up a smaller proportion of total plan fees in larger plans, indicating economies of scale. In addition to investment and record-keeping fees, there are a number of other fees charged to administer the plan as a whole, including trustee fees that are charged by an individual, bank, or trust company to securely maintain plan assets; audit fees that are imposed by a service provider in connection with the annual audit that is required of ERISA-covered plans with more than 100 participants; legal fees that are charged by an attorney or law firm to provide legal support for administrative activities, such as ensuring the plan is in compliance with ERISA or representing the plan in a divorce settlement; investment consulting fees that are charged by an advisor, often a pension consultant, hired to help the plan sponsor select funds for the plan and to monitor investments; and communication fees that cover the cost of educating participants about the plan. Communication services may include a meeting led by a service provider to introduce the plan to participants. Communication services may also include providing participants with access to toll-free phone services, Internet service, and ongoing educational seminars. These fees generally comprise a much smaller percentage of total plan fees than investment and plan record-keeping fees. Participants pay for the majority of investment fees and a greater number bear plan record-keeping fees. As shown in table 1, a 2005 industry survey of 401(k) plan sponsors found that plan participants paid investment fees in almost 62 percent of plans with 5,000 members or fewer. This arrangement was even more common in plans with over 5,000 members where participants bear investment fees in about 71 percent of all plans.
For plans of both sizes, plan sponsors and participants shared investment fees in about 10 percent to 12 percent of plans. Another industry survey of plan sponsors with 1,000 employees or more also found that plan participants paid investment fees in the majority of plans in 2005. Participants generally pay investment fees indirectly. The investment returns that participants receive reflect their share of the fund’s assets after investment fees and other expenses have been subtracted. Investment fees are reported as a percentage of the fund’s overall assets, also known as the expense ratio. The 2005 survey of plan sponsors also found that participants bear a plan’s record-keeping fees in about 50 percent of plans with 5,000 participants or more. Plan sponsors paid record-keeping fees in about 35 percent of these plans and share fees with participants in about 15 percent. The opposite is true for plans with fewer than 5,000 participants, where plan sponsors paid record-keeping fees in 58 percent of cases. However, many of the industry professionals whom we spoke with said plan participants bear a greater portion of these fees than they did in the past. According to these professionals, record-keeping fees have shifted to participants because companies changed the way they charge record-keeping fees and many plan sponsors wanted to reduce their share of plan fees. Originally, record-keeping fees were explicit fees billed to the plan sponsor. When the use of mutual funds in 401(k) plans started to grow, some funds were marketed to sponsors as having no record-keeping fees. In these cases, record keepers were compensated out of the investment funds’ operating expenses for their services, such as maintaining individual account records for a fund’s retail investors and consolidating participant requests to buy or sell shares. Some sponsors may have been unaware that record-keeping fees were taken out of participant assets.
Others were aware that the fees were passed on to participants. Record-keeping fees may be charged as a percentage of assets, or based on the number of transactions or the number of participants in the plan. Some industry professionals told us that fees charged as a percentage of assets may not reflect actual costs to the service provider since the fees grow regardless of the level of service provided. The professionals said service providers may charge record-keeping fees to participants as a flat fee and as an asset-based fee in plans with low assets. An asset-based fee would not generate enough revenue to cover record-keeping fees in these plans, so a flat fee is added. As plan assets grow, the revenue generated by asset-based fees eventually covers plan record-keeping fees, and the flat fee may be dropped. Other industry professionals said record-keeping fees may vary if they are offered by insurance companies or banks that can use fee structures unique to their industry. The fee information that ERISA requires 401(k) plan sponsors to disclose is limited and does not provide participants with an easy comparison of investment options. All 401(k) plans are required to provide disclosures on plan operations, participant accounts, and the plan’s financial status. Although they often contain some information on fees, these documents are not required to disclose the fees borne by individual participants. Additional fee disclosures are required for certain—but not all—plans in which participants direct their investments. These disclosures are provided to participants in a piecemeal fashion and do not provide a simple way for participants to compare plan investment options and their fees. ERISA requires that plan sponsors provide all participants with a summary plan description, account statements, and the summary annual report, but these documents are not required to disclose information on fees borne by individual participants. 
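The combined flat-fee and asset-based structure described above can be sketched with a short calculation. All figures here are hypothetical, chosen only to illustrate why a small plan might see an added flat fee that later disappears as assets grow:

```python
def flat_fee_needed(plan_assets, participants, cost_per_participant, asset_fee_rate):
    """Hypothetical: the flat per-participant fee a provider would add so that
    asset-based revenue plus flat fees cover its record-keeping costs."""
    total_cost = participants * cost_per_participant
    asset_revenue = plan_assets * asset_fee_rate
    shortfall = max(0.0, total_cost - asset_revenue)
    return shortfall / participants

# Small plan: asset-based revenue alone falls short, so a flat fee is added.
small = flat_fee_needed(1_000_000, 100, 60, 0.0025)

# Larger plan: asset-based revenue covers costs, so the flat fee is dropped.
large = flat_fee_needed(10_000_000, 100, 60, 0.0025)

print(f"Small plan flat fee per participant: ${small:.2f}")
print(f"Large plan flat fee per participant: ${large:.2f}")
```

Once plan assets grow past the point where the asset-based revenue equals the provider's cost, the computed flat fee falls to zero, matching the pattern the professionals described.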
Table 2 provides an overview of each of these disclosure documents, and the type of fee information that they may contain. These required documents apply to all 401(k) plans, including plans in which participants have no control over investment decisions. Additional fee disclosures are required for certain—but not all—plans in which participants direct their investments. ERISA requires disclosure of fee information to participants where plan sponsors seek liability protection from investment losses resulting from participants’ investment decisions. Such plans—known as 404(c) plans—are required to provide participants with, among other information, a description of the investment risk and historical performance of each investment option available in the plan and any associated transaction fees for buying or selling shares in these options. Upon request, 404(c) plans must also provide participants with, among other information, the expense ratio for each investment option. To meet certain 404(c) requirements, such plans distribute prospectuses or fund profiles. The prospectuses and fund profiles are not meant to be comprehensive for the entire 401(k) plan, but rather are relevant for individual investment options in the plan. According to the most recent Form 5500 data, 54 percent of 401(k) plans—representing 64 percent of 401(k) participants—classify themselves as 404(c). However, the data also show that 87 percent of 401(k) plans—representing 92 percent of 401(k) participants—allow participants to direct their 401(k) investments. These data suggest that some participant-directed plans are not 404(c) and, thus, not required to disclose to participants certain fee information such as the expense ratio of each investment option. Plan sponsors may voluntarily provide participants with more information on fees than ERISA requires, according to plan practitioners.
For example, plan sponsors that do not elect to be 404(c) often distribute prospectuses or fund profiles when employees become eligible for the plan, just as 404(c) sponsors do. Also, according to plan practitioners, plan sponsors are not required to provide record-keeping or other fee information in their account statements, although many do so. In addition, plan sponsors are not required to provide the investment fees for the investment options in the summary plan description, but they may provide this information as well. Still, absent requirements to do so, some plan sponsors may not identify all the fees participants pay. Participants may not be aware of the different fees that they pay, yet are responsible for directing their investments within the plan. According to industry professionals, participants can be unaware that they pay any fees for their 401(k) investments and are particularly unaware of investment fees that are typically not quantified on account statements. In a nationwide survey, more than 80 percent of 401(k) participants report not knowing how much they pay in fees. Some industry professionals said that making participants who direct their investments more aware of fees would help them make more informed investment decisions. Information on fees is disclosed to participants in a piecemeal way. In order to get a more complete picture of fees, participants must collect various documents over time. As shown in table 3, disclosure documents with fee information are generally provided to participants at different times. Some documents that contain fee information are provided to participants automatically, whereas others, such as prospectuses or fund profiles, may require that participants seek them out. According to industry professionals, participants may not know to seek such documents out.
ERISA does not require that plan sponsors provide participants who are responsible for directing their investments with fee information that could assist them in comparing the plan’s investment options. To identify the fee information for comparing investment options, participants must sift through multiple documents that are not always disclosed to them automatically. For example, to piece together certain fees associated with a plan’s investment options, a participant often must collect multiple prospectuses or fund profiles. Furthermore, because ERISA does not require that these documents be provided automatically to all participants, some participants may need to request them but may not know to do so. According to industry professionals, some participants may be able to make comparisons across investment options by piecing together the fees that they pay, but doing so requires an awareness of fees that most participants do not have. Assessing fees across investment options can be difficult for participants because the data are typically not presented in a single document which facilitates comparison. Participants can use fees along with other information, such as risk and historical performance, to compare different investment options. In some cases, differences in fees across products can be explained by their investment focus or other features. For example, mutual funds with shares in international stock generally charge higher fees than mutual funds with shares in domestic stock because international funds generally incur additional investment management costs. Higher costs can also arise if an investment option has additional features. For example, a provider may charge an additional fee to include certain benefit features, such as providing the participant with an option to convert a 401(k) account balance into a retirement annuity. 
Industry associations have considered different ways to present comparative information about a plan’s investment options to participants in a single document, but the industry does not have a standard way of doing so. These associations have generally suggested annually providing key information—such as the investment objective, fees, and other key features—associated with each plan’s investment options in a table to help participants compare among them. Industry professionals suggested that comparing the expense ratio—a fund’s operating fees as a percentage of its assets—across investment options is the most effective way to compare options’ fees. The expense ratio can be used to compare investment options because it includes investment fees that account for most of the fees borne by participants and is generally the only fee measure that varies by option. Fund options with relatively high fees, such as actively managed funds, tend to have larger expense ratios than funds which are not actively managed. Also, fund options that are only available to institutional investors tend to have lower expense ratios than other types of funds. Most 401(k) investment options have expense ratios that can be compared, but this information is not always provided to participants. According to industry data, at least 69 percent of 401(k) assets are invested in options, such as mutual funds, that are generally required to present the expense ratio in a prospectus. Participants who do not belong to 404(c) plans are not required to receive prospectuses and therefore may not receive the expense ratio information. In addition, investment options besides mutual funds, such as guaranteed annuity contracts, may not be required to produce prospectuses that include expense ratios, but according to industry professionals, such options have expense ratio equivalents that investment industry professionals can identify.
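The single-table comparison the associations have suggested can be sketched in a few lines. The fund names and expense ratios below are hypothetical, and the dollar figure simply translates each ratio into an annual fee on a given balance:

```python
# Hypothetical investment options: (name, expense ratio as a decimal)
options = [
    ("Domestic stock index fund", 0.0018),
    ("Actively managed stock fund", 0.0110),
    ("International stock fund", 0.0135),
    ("Bond fund", 0.0045),
]

balance = 20_000  # hypothetical amount invested in each option

# Rank options from lowest to highest expense ratio and show the
# annual fee each ratio implies on the same balance.
for name, ratio in sorted(options, key=lambda option: option[1]):
    annual_fee = balance * ratio
    print(f"{name}: {ratio:.2%} expense ratio = ${annual_fee:,.2f}/year")
```

As the report notes, such a ranking is only a starting point; risk and historical performance should be weighed alongside the expense ratio.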
However, participants who do not receive this information cannot compare the investment options’ expense ratios. Because differences in fees can have large impacts on returns over time, industry professionals recommend considering expense ratios when making investment decisions. However, they point out that expense ratios should not be considered in isolation; rather, they should be considered in light of other important investment factors, such as risk and historical performance. Labor has authority under ERISA to oversee 401(k) plan fees and certain types of business arrangements involving service providers, but lacks the information it needs to provide effective oversight. Labor collects information on fees from plan sponsors, investigates participants’ complaints or referrals from other agencies on questionable 401(k) plan practices, and conducts outreach to educate plan sponsors about their responsibilities. However, the information reported to Labor does not identify all fees charged to 401(k) plans and therefore has limited use for effectively overseeing fees and identifying undisclosed business arrangements among consultants or other service providers. Certain business arrangements that are undisclosed may lead to participants paying higher fees for products or services that do not offer any additional value or benefit than other lower cost alternatives. Labor has several initiatives underway to improve the information it has on fees and the various business arrangements among service providers. Under ERISA, Labor is responsible for enforcing the requirements that plan sponsors (1) ensure that fees paid with plan assets are reasonable and for necessary services, (2) diversify the plan’s investments or provide a broad range of investment choices for participants, and (3) report information known on certain business arrangements involving service providers. 
Labor does this in a number of ways, including collecting information on fees from plan sponsors, investigating participants’ complaints or referrals from other agencies on questionable 401(k) plan practices, and conducting outreach to educate plan sponsors about their responsibilities. Labor collects information on fees charged to 401(k) plans primarily through the Form 5500. The form includes information on the plan’s sponsor, the features of the plan, and the number of participants. The form also provides more specific information, such as plan assets, liabilities, insurance, and financial transactions. Filing this form satisfies the requirement for the plan administrator to file annual reports concerning, among other things, the financial condition and operation of plans. Labor uses this form as a tool to monitor and enforce plan sponsors’ responsibilities under ERISA. The reporting form is not routinely provided to participants, but ERISA requires that it be made available upon request. Generally, information on 401(k) fees is reported on two sections of the Form 5500, Schedule A and Schedule C. Schedule A is used to report fees and commissions paid to brokers and sales agents for selling insurance products. Schedule C includes information on the fees paid directly or indirectly to service providers for all other investment products. Schedule C also identifies service providers with fees in excess of $5,000 by name. Labor officials told us that complaints from plan participants provide the most effective leads on plan sponsors’ violations of ERISA, but that Labor receives very few complaints related to excessive 401(k) plan fees. In fiscal year 2005, Labor received only 10 inquiries or complaints related to 401(k) fees. A Labor official told us that most plan participants likely do not understand much about plan fees and are thus unlikely to complain about them. 
In addition to responding to complaints, Labor also receives referrals from various entities, such as other federal agencies. For example, federal banking regulators like the Federal Reserve Board will review bank operations as part of their oversight and may uncover instances where a bank that provides services to a 401(k) plan is violating ERISA requirements. Several federal banking regulators have a written agreement to refer such cases to Labor, but Labor receives fewer than 100 referrals per year from these and other entities, such as state insurance and securities agencies. A Labor official told us that only one of the referrals that the agency has closed over the past 5 years was directly related to fees. Labor uses the Form 5500 in its investigations, but according to agency officials, this effort does not find many fee violations because it is difficult to identify unreasonable fees. Officials stated that they conduct few investigations based solely on the 401(k) fee information provided on Form 5500 but may review the fees charged to the plan as a part of investigations into other problems, such as not depositing participants’ contributions into their accounts in a timely manner. In addition, Labor may audit the Form 5500 to ensure that appropriate fees are disclosed. Labor officials told us that it is difficult to discern whether a fee is reasonable or not on its face, and therefore, investigators rarely initiate an investigation into a fee’s reasonableness. Plan fees can vary widely based on the types of services offered, and a “boutique” plan may have high fees but offer many services that a plan sponsor has determined are in the interest of the plan’s participants. In the rare instance that a fee appears egregious, Labor will generally enlist the services of a “fee expert” to make that determination, because according to one official, Labor is unable to do so itself. 
A fee expert will conduct a benchmarking study or request estimates from other service providers to get a sense of the market rate for certain services. Labor’s most recent in-depth review of fees identified some plans with high fees but determined that they were not unreasonable or in violation of ERISA. Labor last undertook a comprehensive review of 401(k) fees in 1997, in response to media, industry, and government concern that participants were potentially being charged excessive fees. According to a Labor official, 50 401(k) plans were investigated to analyze plans’ compliance with certain ERISA requirements related to fees, such as ensuring that fees charged to the plan are reasonable. The plans were selected based on various factors including anecdotal evidence of high fees and a listing in an industry journal of plans that had recently contracted with service providers. Labor found that the plan sponsors had complied with these ERISA requirements. In some cases, Labor did determine that participants were paying high fees. It referred these cases—which included insurance products and international equity funds—to a fee expert from academia for further analysis to determine if the fees were unreasonably high. The expert determined that the fees were high, but not unreasonable. Labor uncovered some violations unrelated to fees and notified the plans of needed corrective actions. In fiscal year 2006, seven of Labor’s regional offices had ongoing enforcement projects related to fees, but none were exclusive to 401(k) plans. We spoke with four offices about their specific projects, the reasons for their initiation, and their findings to date. According to agency officials, most of the investigations under these projects were initiated due to allegations related to defined benefit plans. The projects focused on specific areas, such as bank trust department investigations, settlor fees, or intermediary investment fees and practices. 
Labor has launched a nationwide campaign to improve workers’ health and retirement security by educating employers and service providers about their fiduciary responsibilities under ERISA. Its fiduciary education program includes nationwide educational seminars with fees among the topics covered. Labor’s campaign also includes several educational publications on topics such as understanding fees and selecting service providers. For example, one of the publications, Understanding Retirement Plan Fees and Expenses, is designed to help plan sponsors better understand and evaluate their plans’ fees and expenses. Another, A Look at 401(k) Plan Fees for Employees, highlights the most common fees that may be paid by plans and is geared toward plan participants. The information reported to Labor on the Form 5500 has limited use for effectively overseeing fees paid by 401(k) plans because it does not explicitly list all of the fees paid from plan assets. For example, plan sponsors are not required to report mutual fund investment fees to Labor, even though they receive this information for each of the mutual funds they offer in the 401(k) plan in the form of a prospectus. In addition to disclosing this information to sponsors of 401(k) plans, mutual fund companies are required to file this information with the SEC, which regulates mutual funds. While prospectuses are provided to SEC on a fund-by-fund basis, neither SEC nor Labor has readily available information to be able to link individual fund information to the various 401(k) plans to which the funds may be offered as investment options. Furthermore, prospectuses present fees as an expense ratio, which is deducted from plan assets when investment returns are calculated, so the fees are not explicitly stated. Without information on all of the fees charged directly or indirectly to 401(k) plans, Labor is limited in its ability to identify fees that may be questionable.
Industry experts told us that additional information could be reported on the Form 5500 to give Labor a more precise idea of the cost of administering a defined contribution plan and 401(k) plan fees. The ERISA Advisory Council Working Group reported that the Form 5500, as currently structured, does not reflect the way that the defined contribution plan fee structure works, because only those fees that are billed explicitly and are paid from plan assets are deemed reportable. Many of the fees are associated with the individual investment options in the 401(k) plan, such as a mutual fund; they are deducted from investment returns and not reported to plan sponsors or on the Form 5500. The Advisory Council concluded that Form 5500s filed by defined contribution plans are of little use to policy makers, government enforcement personnel, plan sponsors, and participants in terms of understanding the cost of a plan. The Advisory Council recommended that Labor modify the Form 5500 and the accompanying schedules so that all fees incurred either directly or indirectly by these plans can be reported or estimated. This information then could be used to compare fees for research or regulatory purposes. Many opportunities exist for business arrangements to go undisclosed, given the various parties involved in today’s 401(k) arena. Problems may occur when pension consultants or other companies providing services to a plan also receive compensation from other service providers. Without disclosing these arrangements, service providers may be steering plan sponsors toward investment products or services that may not be in the best interest of participants. In addition, plan sponsors, being unaware, are often unable to report information about these arrangements to Labor on Form 5500 Schedule C. SEC recently identified certain undisclosed arrangements in the business practices of pension consultants that the agency referred to as conflicts of interest. 
Plan sponsors pay pension consultants to give them advice on matters such as selecting investment options for the plan, monitoring their performance, and selecting other service providers, such as custodians, administrators, and broker-dealers. The SEC released a report in May 2005 that raised questions about whether some pension consultants are fully disclosing potential conflicts of interest that may affect the objectivity of their advice. For example, the report revealed that more than half of the pension consultants examined had compensation arrangements with brokers who sell mutual funds. The report highlighted concerns that these arrangements may give pension consultants incentives to recommend certain mutual funds to a 401(k) plan sponsor and create conflicts of interest that are not adequately disclosed to plan sponsors. Plan sponsors may not be aware of these arrangements and thus could select mutual funds recommended by the pension consultant over lower-cost alternatives. As a result, participants may have more limited investment options and may pay higher fees for these options than they otherwise would. In addition, specific fees that are considered to be “hidden” may mask the existence of a conflict of interest. Hidden fees are usually related to business arrangements in which one service provider to a 401(k) plan pays a third-party provider for services, such as record keeping, but does not disclose this compensation to the plan sponsor. For example, a mutual fund normally provides record-keeping services for its retail investors, i.e., those who invest outside of a 401(k) plan. The same mutual fund, when associated with a plan, might compensate the plan’s record keeper for performing the services that it would otherwise perform, such as maintaining individual participants’ account records and consolidating their requests to buy or sell shares.
The problem with hidden fees is not how much is being paid to the service provider, but with knowing what entity is receiving the compensation and whether or not the compensation fairly represents the value of the service being rendered. Labor’s position is that plan sponsors must know about these fees in order to fulfill their fiduciary responsibilities. However, if the plan sponsors do not know that a third party is receiving these fees, they cannot monitor them, evaluate the worthiness of the compensation in view of services rendered, and take action as needed. Labor officials told us about three initiatives currently underway to improve the disclosure of fee information by plan sponsors to participants and to avoid conflicts of interest. For one initiative, Labor is considering the development of a proposed rule regarding the fee information required to be furnished to participants under its section 404(c) regulation. According to Labor officials, they are attempting to define the critical information on fees that plan sponsors should disclose to participants and the best way to do so. They are deliberating on what fee information should be provided to participants and in what format to enable participants to easily compare the fees across the plan’s various investment options. The second initiative proposes changes to the Form 5500 Schedule A and instructions to improve the disclosure of insurance fees and commissions and identify insurers who fail to supply information to plan sponsors. According to a 2004 ERISA Advisory Council Report, many employers have difficulty obtaining timely Schedule A information from insurers. Labor proposes to add a checkbox to the form to permit plan sponsors to identify situations in which the insurance company has failed to provide Schedule A information. The form would also have space to indicate the type of information that was not provided. 
Because plan sponsors must submit a separate Schedule A for each insurance contract, Labor would be able to identify which insurance companies are failing to satisfy their disclosure obligations under ERISA and its regulations. The second initiative also proposes changes requiring plan sponsors to report additional information on fees on Schedule C of Form 5500. Consistent with recommendations made by the ERISA Advisory Council Working Groups and GAO, Labor issued a proposed rule on July 21, 2006, to revise the Schedule C and accompanying instructions to clarify that the plan sponsor must report any direct and indirect compensation (i.e., money or anything else of value) it pays to a service provider during the plan year. Also, a new section would be added requiring that the source and nature of compensation in excess of $1,000 received from parties other than the plan or the plan sponsor, such as record-keepers, be disclosed for certain key service providers, including, among others, investment managers, consultants, brokers, and trustees as well as all other fiduciaries. Labor officials told us that the revision aims to improve the information plan sponsors receive from service providers. The officials acknowledge, however, that this requirement may be difficult for plan sponsors to fulfill without an explicit requirement in ERISA for service providers to give plan sponsors information on the fees they pay to other providers. The third initiative involves amending Labor’s regulations under section 408(b)(2) of ERISA to define the information plan sponsors need in deciding whether to select or retain a service provider. According to Labor, plan sponsors need information to assess the reasonableness of the fees being paid by the plan for services rendered and to assess potential conflicts of interest that might affect the objectivity with which the service provider provides its services to the plan. 
The proposed change to the regulation is intended to make clear what plan sponsors need to know and, accordingly, what service providers need to provide to plan sponsors. In addition to these three initiatives, Labor has a model fee disclosure worksheet available on its Web site. The worksheet was developed to help plan sponsors analyze and compare fees during their negotiations with service providers. Labor worked with several industry groups to develop this worksheet as a result of the problems it identified during its 1997 investigations of 401(k) fees. Labor officials told us that this worksheet or a similar tool could help plan sponsors obtain and analyze fee information and that plan service providers could also use the worksheet to disclose their fee arrangements when soliciting potential clients. As American workers take increasing responsibility for the adequacy of their retirement savings through 401(k) plans, they need to be more aware of the fees that they pay. ERISA does not explicitly require plan sponsors to disclose comprehensive information on fees to participants, yet even small fees can significantly affect retirement savings over the course of a career. Information about investment options’ historical performance is useful, but alone does not provide participants with enough information for making informed investment decisions. Giving participants key information on each of the plan’s investment options in a simple format—including fees, historical performance, and risk—will help participants make informed investment decisions within their 401(k) plan. In choosing between investment options with similar performance and risk profiles but different fee structures, the additional provision of expense ratio data may help participants build their retirement savings over time by avoiding investments with relatively high fees.
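The point that even small fees compound significantly over a career can be illustrated with a rough calculation; the $5,000 annual contribution, 30-year horizon, 7 percent gross return, and the two expense ratios below are assumptions chosen for this sketch, not GAO data.

```python
# Hedged illustration of fee drag over a career; every figure here is an
# assumption for the sketch, not a number taken from the report.

def final_balance(annual_contribution: float, years: int,
                  gross_return: float, expense_ratio: float) -> float:
    """Grow year-end contributions at the return net of the expense ratio."""
    rate = gross_return - expense_ratio
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + rate)
    return balance

# Two otherwise-identical investment options that differ only in fees.
low_fee = final_balance(5_000, 30, 0.07, 0.005)   # 0.5% expense ratio
high_fee = final_balance(5_000, 30, 0.07, 0.015)  # 1.5% expense ratio

shortfall = low_fee - high_fee  # savings lost to the extra 1% in fees
```

Under these assumptions, the 1-percentage-point difference in fees costs the saver tens of thousands of dollars by retirement, which is why comparable expense-ratio disclosure matters when choosing among options.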
Amending ERISA and updating regulations to better reflect the impact of fees and undisclosed business arrangements among service providers will help Labor provide more effective oversight of 401(k) plan fees. Without such changes, Labor will continue to lack comprehensive information on all fees being charged directly or indirectly to 401(k) plans. In addition, some conflicts of interest that affect the fees that participants pay may continue to go unnoticed because service providers are not required to inform plan sponsors of the compensation they receive from other service providers. As a result, Labor may not be able to identify instances in which service providers might be steering plan sponsors to overpriced investment options or services that are not in the best interest of plan participants. Further, requiring plan sponsors to report more complete information to Labor on fees—those paid out of plan assets or by participants—would put the agency in a better position to effectively oversee 401(k) plans and, in doing so, to protect an increasing number of participants. To ensure that participants have a tool to make informed comparisons and decisions among plan investment options, Congress should consider amending ERISA to require all sponsors of participant-directed plans to disclose fee information of 401(k) investment options to participants in a way that facilitates comparison among the options. This information could be provided via expense ratios and be communicated annually in a single document alongside other key information about the investment options such as historical performance and risk. Providing such a disclosure to participants who are responsible for directing their 401(k) investments would ensure that they have another important tool to make informed comparisons and investment decisions among the plan’s options. 
To allow plan sponsors, and ultimately Labor, to provide better oversight of fees and certain business arrangements among service providers, Congress should consider amending ERISA to explicitly require that 401(k) service providers disclose to plan sponsors the compensation that providers receive from other service providers. To better enable the agency to effectively oversee 401(k) plan fees, the Secretary of Labor should require plan sponsors to report a summary of all fees that are paid out of plan assets or by participants. This summary should list fees by type, particularly investment fees that are indirectly incurred by participants. We provided a draft of this report to Labor and SEC. Labor provided written comments, which appear in appendix II. Labor’s comments generally agree with the findings and conclusions of our report. Specifically, Labor stated that it will give careful consideration to GAO’s recommendation that plans be required to provide a summary of all fees that are paid out of plan assets or by participants. Labor and SEC also provided technical comments on the draft, which we incorporated as appropriate. Labor described a number of details related to its ongoing regulatory initiatives and outreach activities, many of which we also cited in the section of our report on Labor’s authority and oversight. In its written comments, Labor also suggested an additional technical change on legal fees for plan design, which we have made to the final report. Unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Secretary of Labor; the Chairman of the SEC; appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made major contributions to this report are listed in appendix III. To identify the major fees associated with 401(k) plans and how they are charged to plan sponsors and plan participants, we interviewed officials from Labor, the Federal Deposit Insurance Corporation (FDIC), the Federal Reserve Board (FRB), the Securities and Exchange Commission (SEC), and the Treasury Department’s Office of the Comptroller of the Currency (OCC); met with service providers and other industry professionals; and collected information about the range of fees and how they are charged to plan sponsors and participants. We also reviewed several major 2005 industry surveys of 401(k) sponsors including surveys by HR Investment Consultants, the Profit Sharing and 401(k) Council of America (PSCA), and Hewitt Associates. Since the survey response rates are low, the data may not be generalizable. To assess reliability of the survey data, we contacted the authors of each survey and collected information on the methodology that was used to complete it. The results of HR Investment Consultants’ survey are based on responses from 125 vendors that service 401(k) plans. This response represents about 85 percent of the assets invested in 401(k) plans. Company officials said they survey vendors because they are generally more knowledgeable than employers about plan fees. The survey was e-mailed to respondents as an attachment in 2004 and the results were updated in 2005. The authors were not able to provide a response rate, but said the survey was completed by most respondents. HR Investment Consultants provides a range of services to employers offering participant-directed retirement plans. 
PSCA’s survey results are based on responses from 1,106 plan sponsors that have profit-sharing plans, 401(k) plans, or a combination of both and represent companies with 1 to 5,000-plus employees. The survey was mailed or faxed to respondents and conducted from March 2006 to May 2006. The survey provides a snapshot as of the end of 2005. The survey response rate was 21 percent. PSCA officials were able to provide us with data that excluded profit-sharing-only plans. PSCA is a national, nonprofit association of 1,200 companies and their 6 million plan participants. According to PSCA, it represents the interests of its members to federal policy makers and offers assistance with profit-sharing and 401(k) plan design, administration, investment, compliance, and communication. Hewitt Associates’ survey results are based on responses from 458 employers with 1,000 employees or more. Nineteen percent of the respondents represented Fortune 500 companies. The survey was conducted from mid-March through April 2005. The survey and a link to a Web site were e-mailed to respondents whose e-mail addresses were available so they could complete the survey on the Web or on paper. The other surveys were mailed with a stamped, addressed envelope. The survey had a 9 percent response rate. Hewitt Associates is a human resource outsourcing and consulting firm. To assess how fees are disclosed to plan participants, we reviewed relevant laws and regulations, spoke with agency and industry officials, and reviewed sample disclosure documents. To understand requirements to disclose information about fees to participants, we reviewed ERISA and relevant regulation sections such as 404(c) and spoke with the agency officials described above. To identify the content and frequency of fee-related disclosures typically made to 401(k) plan participants, we spoke with plan practitioners and reviewed documents, including sample disclosure documents.
Specifically, we interviewed an array of service providers that serve plans of varying sizes including members of the American Bankers Association, American Benefits Council, American Council of Life Insurers, American Society of Pension Professionals & Actuaries, Securities Industry Association, Society of Professional Administrators and Record Keepers, as well as several plan consultants. We also interviewed officials from three plan sponsors. However, because we could not readily obtain a representative sample of service providers or plan sponsors, the information obtained does not represent the views of all service providers or plan sponsors. In addition, we reviewed a limited number of sample disclosure documents made to participants. Because the documents do not reflect a representative sample, we supplemented the information with Labor documents, including those from the 2004 ERISA Advisory Council’s Working Group on Fee and Related Disclosures to Participants, to determine the type of disclosures typically made to participants. To understand participants’ awareness of fees and related disclosures, we spoke with the American Association of Retired Persons (AARP) in addition to the agency and other industry professionals listed above. We also reviewed AARP’s nationwide survey of plan participants regarding plan fees and assessed its methodology. The survey reached plan participants aged 25 or older during November and December 2003. The total sample of over 1,200 respondents was stratified by age and geographic region and then weighted by age, region, and gender to create a representative sample of the total population. To assess Labor’s role in overseeing plan fees and certain types of business arrangements, we reviewed Labor’s and other agencies’ legal and regulatory authority and Labor’s procedures for assuring that plans meet overall legal requirements. 
We reviewed the information required to be reported on the Form 5500 and several reports produced by federal agencies, trade associations, participant groups, and industry experts regarding retirement plan fees and business arrangements among service providers. In addition to interviewing Labor officials in the national office about their enforcement and outreach efforts, we interviewed officials from Labor’s regional offices located in Atlanta, Georgia; Philadelphia, Pennsylvania; San Francisco, California; and Chicago, Illinois, about their ongoing enforcement projects related to fees. We spoke with officials from the four offices about their specific projects, the reasons for their initiation, and the findings to date. Finally, we inquired about Labor’s past initiative specific to 401(k) fees and reviewed Labor’s current initiatives related to 401(k) plans. In addition to the contact named above, Tamara Cross, Assistant Director, Daniel Alspaugh, Monika Gomez, Joel Green, Susan Pachikara, Dayna Shah, Roger Thomas, Rachael Valliere, and Walter Vance made important contributions to this report.
American workers are increasingly relying on 401(k) plans, which allow pre-tax contributions to individual accounts, for their retirement income. As workers accrue earnings on their investments, they also pay a number of fees that may significantly decrease their retirement savings. Because of concerns about the effects of fees on participants' retirement savings, GAO examined (1) the types of fees associated with 401(k) plans and who pays these fees, (2) how information on fees is disclosed to plan participants, and (3) how the Department of Labor (Labor) oversees plan fees and certain business arrangements. GAO reviewed industry surveys on fees and interviewed Labor officials and pension professionals about disclosure and reporting practices. Investment fees, which are charged by companies managing mutual funds and other investment products for all services related to operating the fund, comprise the majority of fees in 401(k) plans and are typically borne by participants. Plan record-keeping fees generally account for the next largest portion of plan fees. These fees cover the cost of various administrative activities carried out to maintain participant accounts. Although plan sponsors often pay for record-keeping fees, participants bear them in a growing number of plans. The information on fees that 401(k) plan sponsors are required by law to disclose is limited and does not provide for an easy comparison among investment options. The Employee Retirement Income Security Act of 1974 (ERISA) requires that plan sponsors provide participants with certain disclosure documents, but these documents are not required to contain information on fees borne by individual participants. Additional fee disclosures are required for certain--but not all--plans in which participants direct their investments. These disclosures are provided to participants in a piecemeal fashion and do not provide a simple way for participants to compare plan investment options and their fees. 
Labor has authority under ERISA to oversee 401(k) plan fees and certain types of business arrangements that could affect fees, but lacks the information it needs to provide effective oversight. Labor collects information on fees from plan sponsors, investigates participants' complaints or referrals from other agencies on questionable 401(k) plan practices, and conducts outreach to educate plan sponsors about their responsibilities. However, the information reported to Labor does not include all fees charged to 401(k) plans and therefore has limited use for effective oversight and for identifying undisclosed business arrangements among service providers. Without disclosing these arrangements, service providers may steer plan sponsors toward investment products or services that may not be in the best interest of participants and may cause them to pay higher fees. Labor has several initiatives underway to improve the information it has on fees and the various business arrangements among service providers.
In the past, DOD, USAID, and State were the federal agencies primarily responsible for national security. Over the past decade, however, events such as 9/11 and the ongoing operations in Iraq and Afghanistan have underscored the need for a broader and more integrated national security effort. One of the first structural changes Congress made to address this need was to integrate 22 separate agencies with domestic national security responsibilities to create DHS. Today, greater emphasis is being placed on identifying whole-of-government approaches to developing national security policies and carrying out operations. Such an approach emphasizes the contributions of agencies not traditionally associated with national security. For example, Commerce plays a role in monitoring exports of technology to make sure that sensitive items with military uses do not fall into the hands of our enemies. In light of the challenges that the U.S. government continues to experience in its efforts to coordinate the actions of the agencies involved—whether it be preventing a terrorist attack or overseeing reconstruction and stabilization efforts in Iraq and Afghanistan—there is an ongoing policy debate on how to enhance and sustain interagency collaborative efforts. Among the range of proposals for reform, there is a growing consensus that the government’s professional development efforts could contribute to more effective interagency collaboration, which is seen as key to U.S. national security. Specifically, a number of reports—such as the Project on National Security Reform’s Forging a New Shield and the 2006 Quadrennial Defense Review, written by experts working in the national security field—recommended establishing a cadre of national security specialists from all relevant departments and agencies, and placing them in a long-term career development program designed to provide them with a better understanding of the processes and cultures of other agencies. 
Proponents contend that such a program would help the U.S. government prepare personnel with national security responsibilities to plan, execute, and lead national security missions. More recently, in September 2010, Congressmen Ike Skelton and Geoff Davis introduced the Interagency National Security Professional Education, Administration, and Development Systems Act of 2010, which seeks to create a system to educate, train, and develop interagency national security professionals across the government. Agencies have historically defined their own professional development activities for their national security personnel. In 2007, however, the Bush Administration launched the National Security Professional Development (NSPD) initiative to integrate professional development activities for national security personnel as part of a larger effort to enhance interagency collaboration. Executive Order 13434, May 17, 2007, entitled National Security Professional Development, required the heads of all agencies with national security responsibilities to identify or enhance current professional development activities for their national security personnel. In addition, the order established an Executive Steering Committee composed of 15 agency Secretaries or Directors (or their designees) to facilitate implementation of the National Strategy for Professional Development. To coordinate NSPD-related activities among agencies, the Executive Steering Committee established the NSPD Integration Office, which created an online repository of information on available training and other professional development activities for national security professionals. Recently, two studies have been launched to reexamine NSPD and to take a more comprehensive look at the skills, education, training, and professional experiences that interagency national security professionals need at various career stages.
While awaiting the results of these studies, the NSPD executive staff is reviewing issues related to the scope and definition of national security professionals and revising the NSPD strategy and implementation plan. Several agencies reported putting implementation of their NSPD-related training and professional development activities on hold pending the results of these reviews, or other direction from the administration. In addition, the online repository of information is no longer available. We identified 225 professional development activities intended to improve participants’ abilities to collaborate across organizational lines. These ranged from 10-month joint professional military education programs and year-long rotations to 30-minute online courses. Because these activities varied so widely across dimensions such as length and learning mode, we grouped them in a way that would allow us to analyze their characteristics and make appropriate comparisons. These five general groups included training courses and programs, training exercises, interagency rotational programs, Joint Professional Military Education (JPME), and leadership development programs. We provide further description of these groups in figure 1. Additionally, six of the eight agencies represented on the Executive Steering Committee established by the executive order—DOD, DHS, Justice, Commerce, State, and DOE—identified training related to the National Security Professional Development (NSPD) initiative. We categorized NSPD separately because, although the developmental activities created under its auspices to date have included mostly online training courses, when fully implemented, NSPD was intended to include a range of activities from training courses to interagency assignments, fellowships, and exchanges. 
NSPD was intended to play a critical role in informing national security professional development activities and, as such, is included in our review in addition to the five groups listed above. Overall, we found that DOD, State, and DHS provided most of the professional development activities that met our criteria. We found some variation within the different types of activities, mostly related to provider, mode of delivery, or participation levels. DHS, DOD, and State provided the majority of training activities, which primarily consisted of short-term, online, or classroom courses. DOD provided most of the exercise programs and all of the JPME programs. DOD and State provided the majority of interagency rotational programs and all of the leadership development programs that met our criteria. Each of the other agencies we reviewed provided at least one relevant professional development activity. All of the agencies we reviewed reported sending personnel to participate in one or more activities in fiscal year 2009. Among the activities for which agencies provided participation data, we found that short-term, online training tended to have the highest participation levels. Participation levels associated with longer-term activities—such as interagency rotational programs—were much lower. Figure 1 below summarizes these and other findings and provides more detailed descriptions of our six activity groups.

Training courses and programs: Planned learning for acquiring and retaining skills, knowledge, and attitudes. In our review, most were online courses provided by DHS’s EMI or DOD’s Joint Forces Command, or classroom courses provided by State’s FSI. Most courses provided a common framework for understanding national security topics or information on how to work with an agency with national security responsibilities.

Training exercises: Scenario-based training that allows for the development, improvement, or display of specific capabilities or skills. In our review, most were DOD joint military exercises. Most exercises were intended to bring participants together to practice working collaboratively within a range of national security-related scenarios. Fiscal year 2009 participation: 240.

Interagency rotational programs: Work assignments at a different agency from the one in which the participant is normally employed, with an explicit professional development purpose. In our review, most involved sending personnel between civilian agencies and the military. Most rotations provided participants opportunities to learn about organizational culture and build networks among partner national security agencies.

Photos: Bureau of Alcohol, Tobacco, Firearms and Explosives; State. In our review, the majority of training was through DHS Emergency Management Institute’s online courses on integrated national emergency response topics (far left) or through State’s Foreign Service Institute’s classroom courses (not pictured). A few other agencies and organizations also provided training courses on specialized topics. For example, DOJ provided courses for law enforcement officials on conducting post-blast investigations (far right), and State’s Office of the Coordinator for Reconstruction and Stabilization provided courses that develop the skills planners need to conduct interagency conflict assessments in the field (center).

According to our analysis, DHS, DOD, and State provided the majority of the 101 short-term training courses that met our criteria. Over half of these courses were provided in a classroom setting; most of the other courses were provided as online independent study courses, and several courses either mixed or offered a choice between the two modes.
State’s Foreign Service Institute (FSI) provided most of the 52 classroom directed-study courses, which typically lasted several days or longer and covered the range of policy issues that State addresses, such as post-conflict reconstruction and stabilization and commercial and trade activity. DHS’s Emergency Management Institute (EMI) and DOD—through its Joint Knowledge Online system—provided the 43 online courses, most of which lasted less than 3 hours. These online courses covered topics ranging from the National Response Framework, which is a framework for how agencies collaborate on national preparedness planning efforts, to the roles and responsibilities of different agencies involved in interagency planning efforts such as Joint Interagency Coordination Groups. DOD’s Information Resources Management College at the National Defense University (NDU) provided the six courses that mixed classroom and online learning or offered participants a choice between the two modes. These courses, such as Multiagency Collaboration and Enterprise Strategic Planning, covered organizational management topics in the context of national security and interagency collaboration, and could be taken either in a 10- to 12-week online format or as a 5-day classroom seminar on the NDU campus. Some of the courses targeted participants of certain career levels or with certain areas of responsibility. For example, EMI’s introductory national response framework course targeted executive-level personnel from government and other organizations with responsibilities for emergency response. Other courses, such as FSI’s Foundations of Interagency Reconstruction and Stabilization Operations course, did not target a specific employee level or rank but were open to anyone preparing to deploy to Afghanistan, Iraq, or other conflict-prone countries.
The National Response Framework presents the guiding principles that enable all response partners to prepare for and provide a unified national response to disasters and emergencies—from the smallest incident to the largest catastrophe. Joint Interagency Coordination Groups, housed within DOD combatant commands, are intended to serve as a coordinating body among the civilian agencies in Washington, D.C., the country ambassadors, the combatant command’s staff, and other multinational and multilateral bodies within the region. The vast majority of participation in short-term training courses—95 percent—was associated with DHS online courses offered through EMI, which is housed within the Federal Emergency Management Agency (FEMA). EMI tracks participation in two categories: (1) FEMA, and (2) all other entities, including participants from other DHS agencies. Therefore, we could not determine how many participants were from DHS and how many came from other agencies. State and most of the other agencies providing classroom courses did track interagency participation. Data show that interagency participation varied widely; some courses had none at all, while others featured a mix of participants from various agencies. For most courses, interagency participation was less than 15 percent. See table 1 for additional information on training courses we identified. In addition to individual training courses, there were also three long-term programs associated with advanced degrees that met our criteria. The College of International Security Affairs at DOD’s NDU provides a part-time certificate or 10-month full-time master’s program that teaches students how to develop and implement whole-of-government national and international security strategies for conditions of peace, crisis, and war. The Interamerican Defense College provides an 11-month Advanced Course on Hemispheric Security and Defense.
Although the majority of the participants are from other countries, State and DOD also send personnel to this program, and one of its stated objectives is to foster connections among participants. Finally, DOD’s Naval Postgraduate School provides graduate programs ranging from month-long courses to multiyear master’s and doctoral programs that focus on various aspects of the defense and national security arenas within an interagency and intergovernmental context. According to officials in DOD’s Office of the Under Secretary of Defense for Personnel and Readiness (OUSD-Readiness), in fiscal year 2009, the military services or combatant commands led an estimated 84 joint-military exercise programs that addressed a range of national security matters and sought to improve the ability of participants to work across agency lines by encouraging interagency participation. In addition, First Army, which is responsible for U.S. Army Reserve and Army National Guard training, led an exercise program for military and interagency civilian personnel preparing to deploy to Afghanistan provincial reconstruction teams. DOD’s Center for Applied Strategic Learning at NDU also provided an exercise program for mid- and senior-level federal personnel and members of Congress, which included crisis simulations in a range of national security areas such as the Horn of Africa, international water rights, and space policy. During fiscal year 2009, there were also four exercise programs provided by civilian agencies, including State, USDA, and DHS’s FEMA, which is responsible for coordinating the National Exercise Program (NEP). Officials from DHS Headquarters and FEMA said that FEMA had conducted five NEP exercises in fiscal year 2009, including one national-level exercise and four principal-level exercises, which targeted senior officials.
They also said that although FEMA does not track information on all levels of NEP exercises, up to three more federal strategy or policy-focused exercises are required annually, and there may have been many more conducted regionally throughout the country during fiscal year 2009. Some of the exercises, such as those conducted by the Center for Applied Strategic Learning, targeted mid- and senior-level leadership of federal agencies and other organizations. However, most of the exercise programs did not specify a rank or career level for their target participant population. See table 3 for more information on the subject matter and number of military and civilian-agency-led exercises. DOD OUSD-Readiness officials identified 84 exercise programs, which reported 212 individual joint-military exercises during fiscal year 2009. Although the joint-military exercises were not necessarily created to facilitate interagency collaboration, officials from both OUSD-Readiness and the Joint Forces Command acknowledged the importance of such interagency participation. They recognized that shared training experiences strengthen the collaborative partnerships between the military and civilian interagency communities by making the exercises more realistic and establishing interagency networks among participants. Joint Forces Command has taken steps to increase interagency participation, creating a “Partnership Opportunities Catalog” of joint exercises open to interagency and other partners. It has also begun to collect and assist with requests for interagency participation from military services and combatant commands looking for participants from specific agencies or other partner organizations. According to DOD, in fiscal year 2009, about 50 percent of the exercise programs—43 of 84—had some interagency participation.
However, because DOD included state and local personnel in its definition of interagency participation, it is possible that there were fewer exercise programs with interagency participation as it is defined in this report. Also, even though participation data were not systematically tracked for the two First Army-led fiscal year 2009 Afghanistan provincial reconstruction team predeployment exercises, an Army official estimated that approximately 2,500 military personnel participated. In addition, a USAID official estimated that approximately 40 civilians from USAID and other agencies, such as State and USDA, participated in the interagency modules of these exercises during the same time period. An official at NDU’s Center for Applied Strategic Learning reported that in fiscal year 2009, through its Strategic Policy Forum and its other policy-related exercises, the Center provided 18 crisis simulations for personnel from a range of agencies. Examples of participating agencies include DOD, State, DHS, DOJ, USAID, the National Aeronautics and Space Administration, and others. Both State and USDA provided participation data on their fiscal year 2009 exercises, which reported 24 participants for one State-led exercise and 170 for the USDA exercise. While DHS’s FEMA did not provide NEP data that differentiated between federal, state, local, and other participants, one FEMA official estimated that approximately 2,500 personnel from more than 230 organizations participated in the 2009 national-level exercise. We identified seven interagency rotational assignment programs that supported participating agencies’ efforts to achieve their missions while explicitly seeking to develop participants’ abilities to collaborate on national security. Five of these rotational assignment programs involved sending personnel between civilian agencies and the Pentagon or military learning institutions.
For example, State’s Foreign Policy Advisors program places Foreign Service Officers in the Pentagon and military commands worldwide as personal advisors to senior military commanders. State participants work alongside DOD civilians and officers on a range of national security issues such as international relations and diplomatic practices. The other two programs involved sending civilian agency personnel to other federal agencies or executive offices. Only one of the rotation programs is open to all levels of personnel. The other programs target personnel at specific ranks or career levels. The three State-sponsored programs target mid- and senior-level personnel, while the DOD-sponsored programs are intended for junior or mid-level personnel and are associated with educational programs. For example, the Military Academic Collaborations program sends select midshipmen, cadets, and some instructors from military officer development programs such as the Academies and university Reserve Officers’ Training Corps programs to DOE NNSA laboratories for summer internships. Table 4 describes selected characteristics of each rotational program. One of the programs, the U.S. Army Command and General Staff College’s Interagency Fellows Program, went beyond the scope of a typical rotation, temporarily assigning military personnel to a civilian agency to enable civilian personnel from that agency to attend a long-term JPME program. See figure 2 for more information on that program. Officials from DOD, USAID, State, and other agencies say that the disparity in the size of the military and civilian workforces can make it difficult for civilian agencies to “keep up” with the military in participating in longer-term training programs. In particular, officials point out that military staffing levels take into account the need for extended training at standard career intervals, while civilian agency staffing levels do not. 
According to officials at the Army Command and General Staff College and USAID, the Interagency Fellows Program was created to help alleviate such resource limitations, while providing Army participants with valuable developmental opportunities. Interagency Fellows contribute to the missions of the agencies where they work, and share with their colleagues the Army’s approach to planning and decision-making. Military and civilian officials concur that it is important to bring students together from a mix of organizations to provide a realistic whole-of-government perspective. A USAID official involved in the program said that it is not a perfect exchange, and that the agency is still learning how to most effectively make use of the Interagency Fellows. For example, she explained that there is a learning curve for people on rotation to a new agency, and that it can take a while to get them up to speed. She also said that Interagency Fellows are not always of an equivalent rank to the USAID personnel away at the College, which means they cannot always truly cover their responsibilities. However, despite these challenges, she said, she believes that the program provides a creative solution to participation barriers and will continue to improve over time. An Army official at the College agreed that it is a “win-win situation” for the College, the agencies, and the participants. Photos: Noah Albro, Army Command and General Staff College. A student from the Department of State participates in the Interagency Fellows program (above). Interagency students join military students in class at the Army Command and General Staff College (below). According to our analysis, DOD provides relevant JPME programs through 13 academic institutions operated by one of the four military services or NDU. These programs must meet specific JPME curriculum requirements established for intermediate, senior, and executive-level education, which include learning objectives related to a whole-of-government approach to national security, among other objectives.
Such whole-of-government approaches seek to identify or incorporate all agencies’ contributions to addressing national security challenges. The Army, Navy, Air Force, and Marines each have command and staff colleges that provide intermediate-level JPME and war colleges that provide senior-level JPME, within military service-specific graduate programs. NDU has three colleges—the Industrial College of the Armed Forces, the National War College, and the Joint Forces Staff College—that incorporate senior-level JPME curriculum into 10-month-long master’s programs, among other offerings. NDU also administers Capstone, an executive-level JPME program that met our criteria. The program length and target participant populations varied depending on the level of education, as described in table 5. Six of the nine agencies we reviewed—DOD, DHS, State, USAID, DOJ, and DOE—said they sent personnel to one or more of DOD’s JPME programs. According to information DOD provided for academic year 2009, some programs, such as those at Air University, had few or no participants from federal agencies outside of DOD. NDU’s National War College and its Industrial College of the Armed Forces, which offered senior-level JPME programs, had the greatest number of interagency participants. See table 6 for information on participation levels at each institution. According to our analysis, DOD and State offer 11 leadership development programs that include a focus on interagency collaboration in the national security arena. Several programs include participation in other activities described elsewhere in this report, such as JPME or interagency rotations.
For example, the Defense Senior Leadership Development Program combines specialized courses with attendance in a 10-month JPME program and a short-term rotation, as indicated by the participant’s individual development plan, to help participants gain the competencies needed to lead people and programs and achieve national security goals in joint, interagency, and multinational environments. These programs varied in length, mode of delivery, target population, and interagency participation. The length of time and mode of delivery of these courses ranged from 1 day of classroom training to 14 weeks of in-resident training to a series of courses and seminars to be completed over a 3-year period. Most of the programs targeted personnel at GS-12 or above, because, according to officials at several agencies, these employees had the experience needed to benefit from and contribute to training and development programs with an interagency focus. For more information about the target population and interagency participation of these programs, see table 7. Of the 11 reported programs, 6 leadership development programs were open to and encouraged interagency participation. Two of these 6—the Program for Emerging Leaders at NDU’s Center for the Study of Weapons of Mass Destruction, and State’s National Security Executive Leadership Seminar—intended to create an interagency cohort of leaders who can work together seamlessly on national security issues. For example, to promote a professional network among future U.S. government leaders in the field of weapons of mass destruction, NDU’s Center for the Study of Weapons of Mass Destruction offered Program for Emerging Leaders students a variety of ways to connect outside the classroom, such as a members-only Web site for online dialogue, school-sponsored social events, and off-campus site visits. Five of the 11 programs were closed to participation from other agencies. 
For example, the Ambassadorial Seminar offered by State’s FSI only prepares ambassadors-designate for their unique positions of leadership at the head of an embassy, which requires extensive collaboration with personnel from multiple agencies and other organizations. Six of the eight agencies represented on the Executive Steering Committee—DOD, DHS, State, Justice, Commerce, and DOE—reported making NSPD-related training available to their personnel with national security responsibilities. DOD and DHS reported developing some training specifically for their NSPD programs, which consisted primarily of online courses on key national security policies and procedures. Some agencies, however, directed their national security personnel to take existing training, such as EMI’s various online courses on national emergency response topics. Other agencies augmented existing training with NSPD-specific materials. Several of the existing courses that agencies used or modified under the auspices of NSPD were included in previous sections of this report. Officials from Commerce and DOE reported that in addition to taking advantage of existing courses, they also sent their personnel to attend in-person orientation sessions or seminars, where they had the opportunity to network with personnel from other agencies. According to officials at most of these agencies, although they have continued to work on planning and implementation efforts, much of the actual training activity has slowed or stopped altogether since fiscal year 2008. As mentioned previously, many of these agencies have put implementation of their NSPD-related training and professional development activities on hold pending the results of executive-level review of this governmentwide initiative.
Based on our analysis, the relevant professional development activities were intended to improve the ability of national security personnel to collaborate across agency lines by focusing on three general approaches: providing foundational knowledge, developing skills, and providing networking opportunities. We found that the activities included one or more of these approaches to improving their participants’ abilities to collaborate: Building common foundational knowledge of the national security arena. Some of the activities establish a common foundation of shared knowledge for understanding partner agencies’ roles, responsibilities, authorities, or capabilities, or specific national security subject matters. According to agency officials, such training can help reinforce a common vocabulary or framework for understanding complex policy issues. This is important for allowing personnel who may normally approach national security issues from sometimes disparate diplomatic, defense, commercial, or law enforcement perspectives to employ a whole-of-government approach to national security. For example, DHS offers an introductory online course on the National Incident Management System, which is available to personnel across federal, state, and local government and provides an overview of the roles and responsibilities of various agencies and how they are supposed to work together in different emergency situations such as responding to terrorist attacks and other national security-related incidents. Developing skills for interagency collaboration on national security. Some of the activities agencies identified build specific skills needed for interagency collaboration, such as how to plan, lead, and execute interagency efforts.
For example, the Whole-of-Government Planning for Reconstruction and Stabilization course, offered by NDU in cooperation with State, teaches skills to coordinate, facilitate, or participate in the planning process for reconstruction and stabilization operations. These skills include the ability to work effectively with federal agency and other partners involved in whole-of-government planning. Establishing networks across national security agency lines. Some of the activities were explicitly designed to facilitate networks among personnel from two or more national security agencies. For example, NDU’s Capstone course for Generals, Flag Officers, and members of the civilian Senior Executive Service brings together participants from the four military services and a range of federal agencies to deepen their understanding of the whole-of-government approach to national security, among other things. One of Capstone’s learning objectives is that participants establish a peer network for future cooperation, and the program is designed to maximize peer-to-peer interaction. The way these approaches manifest themselves in the activities we reviewed varied. For example, activities that required the least time commitment, such as EMI’s online courses and NSPD online orientations, primarily provided basic foundational knowledge of a specific partner agency or national security topic. Conversely, more time-intensive activities, such as JPME and some of the leadership development programs and classroom courses that lasted several months or brought participants together on a recurring basis, tended to incorporate two or more approaches to improving participants’ abilities to collaborate across agency lines. For example, a 10-month program at NDU’s College of International Security Affairs included coursework on foundational knowledge of national security issues and specific skills related to interagency planning and management, along with interagency networking events.
According to human capital and training officials we interviewed at several agencies, the level of interagency participation may affect how a given professional development activity can improve its participants’ abilities to collaborate. Agency officials noted that interagency collaboration in the development and design of activities can lead to a more accurate portrayal of different agencies’ policies and processes. Moreover, agency officials said a mix of interagency participants can provide a realistic perspective of their respective agencies’ cultures, capabilities, and constraints. Greater interagency participation can also lead to the development of professional networks, and improve working relationships. Several military officials we interviewed emphasized that in order to work effectively side by side, civilian and military personnel should train together to learn how to operate before they are out in the field. Several agency officials agreed, noting that even when a professional development activity is designed to build foundational knowledge, skills, or networks, lack of interagency participation can limit the extent to which this occurs. For example, as a DHS official pointed out, if only one agency participates in an exercise, there is clearly no opportunity to establish a network that could facilitate future interagency collaboration. Training, interagency rotations, exercises, and other professional development activities can help to improve participants’ abilities to collaborate in an increasingly complex national security arena. However, with national security responsibilities and associated personnel located throughout the U.S. government, it could be challenging for agency officials to identify the relevant training and professional development opportunities available to the national security community.
Our review is a first step in describing the broad spectrum of professional development activities that are intended to build foundational knowledge, skills, and networks among federal national security professionals. According to agency officials who develop and oversee these professional development activities, interagency participation can be key to the activities’ success, enhancing the knowledge and skills participants acquire and the professional networks they establish. Although agencies could not provide participation data in every instance, the data we were able to obtain indicated that overall, interagency participation was lower in activities that required a longer time commitment, such as rotations and full-time joint professional military education. This raises questions about barriers to participation and other factors that may influence the success of such professional development activities, which we will explore in a subsequent review. We provided a draft report for review and comment to the Secretaries of State, Defense, Homeland Security, the Treasury, Commerce, Agriculture, and Energy, the Administrator of USAID, and the Attorney General. State, DHS, Commerce, Energy, USDA, and USAID provided technical comments, which we incorporated where appropriate. DOD, DOJ, and Treasury did not provide comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies of this report to the Secretaries of State, Defense, Homeland Security, the Treasury, Commerce, Agriculture, and Energy, the Administrator of USAID, and the Attorney General, and other congressional committees interested in improving collaboration among agencies involved in national security issues. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staffs have any questions about this report, please contact me at (202) 512-6543 or steinhardtb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of our review were to identify (1) training and other professional development activities intended to improve the abilities of personnel from key agencies involved in national security issues to collaborate across organizational lines and (2) how these activities were intended to improve participants’ collaboration abilities. To address our objectives, we first reviewed our prior work and other literature and interviewed experts on workforce development, education, national security, organizational culture, and collaboration to define the types of activities relevant to our topic. We then selected key agencies involved in national security issues—the Department of Defense (DOD), the Department of State (State), the U.S. Agency for International Development (USAID), the Department of Homeland Security (DHS), the Department of the Treasury (Treasury), the Department of Justice (Justice), the Department of Energy (Energy), the U.S. Department of Agriculture (USDA), and the Department of Commerce (Commerce)—based on a review of our prior work and other literature and interviews with subject-matter experts. We excluded the Office of the Director of National Intelligence and its member agencies because they overlapped with similar work we have underway. In order to identify and obtain key information on national security collaboration-related professional development activities, we undertook extensive data collection efforts involving both formal data collection instruments and intensive interactions with the agencies noted above. There were two main phases to this effort.
In each, several steps were taken to ensure the reliability of the information obtained, including its consistency, completeness, and accuracy. In the first phase, we developed a data collection instrument (DCI) to obtain a broad list of activities potentially applicable to our review as well as a number of key general characteristics of the activities including, for example, overall goals, how the program prepares participants to collaborate across department lines, agencies involved, and general information about participation levels. We validated the DCI by conducting pretests of the instrument with points of contact (POCs) in four agencies. These pretests included in-depth probing on the clarity of the instrument, the criteria for including activities in the instrument, respondent burden, and usability of the instrument spreadsheet. The GAO engagement team staff worked with their technical advisors to revise the DCI as appropriate to address issues that arose over these topics during the pretesting process. A key element of this first phase of data collection was defining the criteria to guide agency POCs in determining the appropriate professional development activities for submission. These criteria were included in the instrument itself, with instructions to the POCs to include all programs open to their staff that met all of the following four criteria: (1) The activity explicitly prepares federal civilian and/or military personnel to collaborate with personnel of other federal departments. In particular, the activity: (a) can involve personnel of other entities—such as contractors or NGOs—or can include only personnel from the POC’s department; (b) may be provided by the POC’s department or it may be provided by another organization; and (c) must prepare personnel for interagency collaboration.
POCs were not to include activities that focused solely on intraagency collaboration (e.g., collaboration among DHS component agencies or among the services within DOD). This criterion excludes programs that bring personnel of multiple agencies together for specific assignments but do not have preparation for future interagency collaboration as an explicit purpose. (2) The activity targets agency personnel involved in developing or implementing national security policy, strategy, missions, or operations, but not support functions such as administration, financial management, or procurement. (3) The activity relates to the agency’s national security activities. In particular, an activity can and should be included despite having a broader focus than interagency national security collaboration as long as it includes a component on this topic; for example, a leadership development program may have a module on interagency collaboration or provide an interagency rotation to a national security mission. (4) The activity is ongoing and sustained, not a one-time event. We identified POCs in each of the selected agencies who were to determine which activities met our criteria and complete the DCI for each. We identified the POCs during our initial conference with agency personnel and then in subsequent meetings or conversations, in which we requested the names of individuals who could work with us to identify the appropriate offices, bureaus, or functional areas that should receive our questionnaire, disseminate our questionnaire to the appropriate contacts throughout the agency, and consolidate their responses. We sent DCIs to POCs and asked them to provide the requested information for all activities that met our criteria. In addition to completing the DCIs, POCs also provided other relevant information including course manuals and evaluations. We then compiled all of the DCIs received from the nine agencies into one master file.
A key element of this effort was to eliminate from the master list duplicate activities reported by POCs in multiple offices or agencies. In general, we relied on submissions from the agency we determined to be the “lead agency” for administering the activity. In some cases, however, an activity was identified by an agency that participated in, but did not provide, the activity. For example, although officials at three agencies said their personnel participated in the National Exercise Program, the two agencies chiefly responsible for organizing the program did not initially include the program in their responses. To reconcile such differences, which may have occurred because agencies have different working definitions of “national security” and “collaboration,” and different ways of understanding how these concepts might intersect, we followed up with our POCs. In some cases, the titles of activities were similar but not identical, and to determine whether they were the same we contacted the relevant POCs for clarification. This process resulted in more than 350 total activities. The final step in phase 1 of our work was to review the entire list of activities identified to verify that they conformed to our four criteria. To make this assessment and to ensure its reliability, two analysts separately analyzed the list, identifying those activities that conformed and did not conform to our criteria. In cases where the analysts differed, they had a third analyst review the information and then met to reconcile these differences. In cases where the data provided were ambiguous, we contacted our agency POCs to obtain additional information in the form of additional interviews and/or documentation. This process reduced the number of activities in our review to 225.
In the second phase, we collected more detailed information on the activities that met our criteria for inclusion, as follows: (1) The number of participants in each activity in fiscal year 2009, both from the agency that hosted the activity and from outside the agency; (2) The levels or ranks of staff targeted for participation in the activity, if any. Agencies described target populations in terms of General Schedule (GS) levels, Foreign Service (FS) levels, and/or Officer grade (O) levels. At the executive level, target populations were described as Senior Executive Service (SES), Senior Foreign Service (SFS), Senior-Level and Scientific or Professional (SL/ST), or Generals/Flag Officers (O-7–O-10). In some cases, the equivalent levels from other federal pay schedules or personnel systems were noted; and (3) The methods, if any, the agency used to evaluate the effectiveness or impact of the activity. A second DCI was developed for this purpose. For each POC, we customized this data collection instrument with information about the activities they had reported to us in phase 1. Like the first phase of data collection, this second phase involved close interaction with the POCs, and in some instances POCs provided information to us in forms other than the data collection instrument (e.g., published program materials or e-mails containing the information we requested). Data collected during this phase were compiled and combined with data from the first phase to yield an overall set of data on activities that met our criteria for inclusion. We analyzed data for these activities, such as typical duration, eligibility criteria, participation rates, and participating agencies, to identify groups of activities, patterns, themes, and other information. We determined these data to be reliable for the purposes of identifying and describing such activities.
Upon reviewing the data the agencies provided, we found that activities varied widely across dimensions such as length and learning mode, and we decided to group the activities in a way that would allow us to analyze their characteristics and make appropriate comparisons. To develop these categories of training and professional development activities, we reviewed activity data, conducted a limited literature search of GAO reports and agency guidance, and met with human resource professionals. The five resulting groups were training courses and programs, training exercises, interagency rotational programs, Joint Professional Military Education (JPME), and leadership development programs. After the data had been compiled, we conducted a series of follow-up interviews with POCs to gauge the completeness and accuracy of the participation data we had received. POCs were asked about the sources of participant counts, how these counts had been stored, whether they had been checked for accuracy, and other topics relevant to verifying the reliability of these data. All of the participation data used in this report were judged reliable for the purpose of establishing approximate levels of participation in the national security collaboration activities. As part of the data collection instrument used in phase 1, we asked agency officials to describe how each activity they submitted was intended to improve the ability of national security personnel to collaborate across agency lines. We reviewed the answers they provided, as well as other materials such as course descriptions and catalogues of exercises and JPME programs, to identify common themes.
Based on our analysis, we determined that these activities generally employed one or more of the following approaches: building foundational knowledge of the national security arena, such as other agencies' roles, responsibilities, authorities, or capabilities; developing skills for interagency collaboration, such as how to plan, lead, and execute interagency efforts; or establishing networks among national security professionals. We also discussed these approaches with agency officials during our interviews, and they concurred that the approaches were appropriate and accurate.

Appendix II: Inventory of Professional Development Activities Intended to Foster Interagency Collaboration

This entry for joint military exercises represents 84 individual exercise programs, which conducted multiple exercises during fiscal year 2009. In 2010, USAID changed the name of this course from Tactical Conflict Assessment and Planning Framework to District Stability Framework.

Elizabeth Curda and Laura Miller Craig managed this assignment. Jessica Nierenberg, Kate Hudson Walker, Albert Sim, Melanie Papasian, David Dornisch, and Russ Burnett made key contributions to all aspects of the report. Esther Toledo, Mark Kehoe, David P. Owen, Lauren Levine, Andrew Stavisky, John Mingus, Jr., John Pendleton, Marie Mak, Alissa Czyz, William Trancucci, Judith Kordahl, and Crystal Robinson also provided assistance. In addition, Lois Hanshaw and Karin Fangman provided legal support, and Donna Miller developed the report's graphics.

Defense Management: DOD Needs to Determine the Future of Its Horn of Africa Task Force. GAO-10-504. Washington, D.C.: April 15, 2010.

Homeland Defense: DOD Needs to Take Actions to Enhance Interagency Coordination for Its Homeland Defense and Civil Support Missions. GAO-10-364. Washington, D.C.: March 30, 2010.

Interagency Collaboration: Key Issues for Congressional Oversight of National Security Strategies, Organizations, Workforce, and Information Sharing. GAO-09-904SP. Washington, D.C.: September 25, 2009.

Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009.

Influenza Pandemic: Continued Focus on the Nation's Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009.

U.S. Public Diplomacy: Key Issues for Congressional Oversight. GAO-09-679SP. Washington, D.C.: May 27, 2009.

Military Operations: Actions Needed to Improve Oversight and Interagency Coordination for the Commander's Emergency Response Program in Afghanistan. GAO-09-61. Washington, D.C.: May 18, 2009.

Foreign Aid Reform: Comprehensive Strategy, Interagency Coordination, and Operational Improvements Would Bolster Current Efforts. GAO-09-192. Washington, D.C.: April 17, 2009.

Iraq and Afghanistan: Security, Economic, and Governance Challenges to Rebuilding Efforts Should Be Addressed in U.S. Strategies. GAO-09-476T. Washington, D.C.: March 25, 2009.

Drug Control: Better Coordination with the Department of Homeland Security and an Updated Accountability Framework Can Further Enhance DEA's Efforts to Meet Post-9/11 Responsibilities. GAO-09-63. Washington, D.C.: March 20, 2009.

Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: February 20, 2009.

Combating Terrorism: Actions Needed to Enhance Implementation of Trans-Sahara Counterterrorism Partnership. GAO-08-860. Washington, D.C.: July 31, 2008.

Information Sharing: Definition of the Results to Be Achieved in Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-637T. Washington, D.C.: July 23, 2008.

Highlights of a GAO Forum: Enhancing U.S. Partnerships in Countering Transnational Terrorism. GAO-08-887SP. Washington, D.C.: July 2008.

Stabilization and Reconstruction: Actions Are Needed to Develop a Planning and Coordination Framework and Establish the Civilian Reserve Corps. GAO-08-39. Washington, D.C.: November 6, 2007.

Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007.

Military Operations: Actions Needed to Improve DOD's Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007.

Combating Terrorism: Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists. GAO-07-697. Washington, D.C.: May 25, 2007.

Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
Agencies must engage in a whole-of-government approach to protect the nation and its interests from diverse threats such as terrorism and infectious diseases. However, GAO has reported that gaps in national security staff knowledge and skills pose a barrier to the interagency collaboration needed to address these threats. Training and other professional development activities could help bridge those gaps. GAO was asked to identify (1) training and other professional development activities intended to improve the ability of key national security agencies' personnel to collaborate across organizational lines and (2) how these activities were intended to improve participants' collaboration abilities. To address these objectives, GAO asked nine key agencies involved in national security issues to submit information on professional development activities that were explicitly intended to build staff knowledge or skills for improving interagency collaboration. In addition, GAO gathered and analyzed other information such as target audience, participation levels, and participating agencies. GAO also interviewed responsible human capital and training officials. GAO will explore how interagency participation and other factors may influence the success of these activities in a subsequent review. GAO identified 225 professional development activities intended to improve participants' ability to collaborate across agency lines. These ranged from ten-month joint professional military education programs and year-long rotations to 30-minute online training courses. Because these activities varied widely across dimensions such as length and learning mode, GAO grouped the activities to allow for appropriate analysis and comparisons of their characteristics. Overall, GAO found that DOD, State, and DHS provided most of the professional development activities that met the criteria.
GAO found some variation within the different types of activities, mostly related to provider, mode of delivery, or participation levels. DHS, DOD, and State provided the majority of training activities, which primarily consisted of short-term, online, or classroom courses. DOD provided most of the exercise programs and all of the JPME programs. DOD and State provided the majority of interagency rotational programs and all of the leadership development programs that met the criteria. Although agencies could not provide participation data in every instance, the data obtained indicated that, overall, interagency participation was lower in activities that required a longer time commitment, such as rotations and full-time joint professional military education. Analysis of the activities GAO identified showed that they are intended to provide opportunities to (1) build common foundational knowledge of the national security arena; (2) develop specific skills, such as how to plan, lead, and execute interagency efforts; and (3) establish networks among personnel from national security agencies that could lead to improved interagency collaboration. According to human capital and training officials at several agencies, the level of interagency participation may affect how a given professional development activity can improve its participants' ability to collaborate. GAO is not making recommendations in this report. Technical comments from the agencies reviewed were incorporated where appropriate.
VA manages access to services in relation to available resources through a priority system established by law. The order of priorities is generally based on service-connected disability, income, or other special status, such as having been a prisoner of war. Additionally, Congress has stipulated that certain combat veterans discharged from active duty on or after January 2003 are eligible for priority enrollment. VA provides mental health care—for conditions such as PTSD, depression, and substance abuse disorders—in a variety of facilities, including medical centers, community-based outpatient clinics, and rehabilitation treatment programs. These facilities may include both specialty mental health care settings and other settings. Specialty mental health settings, including mental health clinics, primarily provide mental health services. Other settings may provide mental health services but focus primarily on other types of care, such as primary care. VA also provides counseling services that focus on mental health issues through its Vet Centers, a nationwide system of community-based centers that VA established separately from other facilities. The counseling services provided by Vet Centers differ from the mental health services provided by other VA facilities in that they focus on counseling to assist combat veterans in readjusting from wartime military service to civilian life but do not diagnose veterans’ mental health conditions. Veterans needing more acute care—for example, veterans with multiple mental health conditions, such as severe PTSD and depression, or those who pose a risk of harm to themselves or others—are often referred to VA medical centers for diagnosis and treatment. VA groups veterans by dates—or era—of their military service based on provisions in federal law. (See table 1.) VA estimates that as of September 30, 2011, there were approximately 22.2 million living veterans. 
OEF/OIF veterans represented approximately 12 percent (2.6 million) of that total. Over the 5-year period from fiscal years 2006 through 2010, about 2.1 million unique veterans received mental health care from VA. Each year the number of veterans receiving care increased—from about 900,000 in fiscal year 2006 to about 1.2 million in fiscal year 2010. (See fig. 1.) VA provided this mental health care to veterans in both specialty mental health care and other settings, such as primary care clinics staffed with mental health providers. (See app. II for information on the number of veterans receiving mental health care in specialty mental health care and other settings.) Although the number of veterans receiving mental health care from VA increased for both OEF/OIF veterans and veterans of other eras of service, as shown in figure 1, OEF/OIF veterans accounted for an increasing proportion of the veterans receiving care. Specifically, the proportion of OEF/OIF veterans receiving mental health care from VA out of the total number of veterans receiving mental health care increased from 4 percent in fiscal year 2006 to 12 percent in fiscal year 2010. Nonetheless, veterans from earlier eras, such as Vietnam, accounted for approximately 90 percent of the 2.1 million veterans receiving care at VA over the 5-year period from fiscal years 2006 through 2010, although the proportion decreased from 96 percent in fiscal year 2006 to 88 percent in fiscal year 2010. VA officials indicated that the increasing proportion of OEF/OIF veterans receiving mental health care is not unexpected because of the nature of OEF/OIF veterans’ military service—veterans of this era typically had intense and frequent deployments. In addition, according to VA officials, VA has made changes in its mental health screening protocols that may have resulted in more mental health conditions being diagnosed among veterans entering the VA system. 
For example, VA requires veterans treated in primary care settings to be screened for mental health conditions such as PTSD, depression, and substance abuse disorders, as well as for a history of military sexual trauma. Additionally, the 2.1 million veterans receiving mental health care from VA accounted for almost a third of the 7.2 million total unique veterans receiving any type of health care from VA over the 5-year period from fiscal years 2006 through 2010. Specifically, 38 percent of all OEF/OIF veterans and 28 percent of all other veterans receiving any health care during this time period received mental health care. (See fig. 2.) The five most common diagnostic categories for veterans receiving mental health care from VA in fiscal year 2010 were adjustment reaction, depressive disorder, episodic mood disorder, neurotic disorder, and substance abuse disorder. (See table 2.) Within each diagnostic category, there are specific mental health diagnoses; for example, PTSD is one of the diagnoses within the adjustment reaction category. Although veterans of all eras had similar diagnoses, the likelihood of having a diagnosis in any one category varied by era. Specifically, almost twice as many OEF/OIF veterans had diagnoses within the adjustment reaction category as within the next most common diagnostic category, depressive disorder. In comparison, for veterans of all other eras, depressive disorder was the most common diagnostic category, but it was closely followed by adjustment reaction. According to VA officials, the higher relative incidence of adjustment reaction (including PTSD) among OEF/OIF veterans may be due to many factors, including the length and frequency of their deployments and mental health care providers' better understanding of how to identify and diagnose PTSD.
The key barriers we identified from the literature that may hinder veterans from accessing mental health care from VA, which were corroborated through interviews with VA and VSO officials, are stigma, lack of understanding or awareness of mental health care, logistical challenges to accessing mental health care, and concerns about VA’s care. (See table 3 for a description of each of these key barriers.) For example, stigma—negative personal or societal beliefs about mental health conditions or mental health care—may discourage veterans from accessing care. According to VA and VSO officials we spoke with, some veterans may have concerns that if colleagues or employers find out they are receiving mental health care, their careers will be negatively affected. Many of these barriers are not necessarily unique to veterans accessing mental health care from VA, but may affect anyone accessing mental health care from any provider. According to the Substance Abuse and Mental Health Services Administration’s 2008 National Survey on Drug Use and Health, approximately 5 million adults who reported an unmet need for mental health care reported similar barriers. In particular, survey participants cited the following as barriers: a belief that the problem could be handled without care, not knowing where to go for care, and not having the time to go for care. Additionally, according to the literature we reviewed and VA and VSO officials we interviewed, some of these key barriers may affect veterans from different demographic groups differently. For example, veterans may be affected by barriers differently based on age, gender, Reservist or National Guard status, or rural location.

Age: OEF/OIF veterans, who are generally younger than other veterans, may have concerns about VA’s health care system because they perceive that primarily older veterans, such as those who served in Vietnam, go to VA for care.
Additionally, some younger veterans may have multiple personal priorities—such as family, school, or work commitments—that make accessing care a lower priority. Older veterans may have different reasons for not accessing mental health care. For example, stigma and beliefs about mental health care may hinder veterans who served in World War II and Korea from accessing care because they grew up during a time when mental health conditions generally were not recognized and accepted. According to a national survey of veterans, as of March 2010, more than 60 percent of all veterans were 55 years of age or older.

Gender: Female veterans may perceive some barriers to accessing mental health care differently than male veterans. For example, some female veterans may not identify themselves as veterans if they did not serve in combat and, as a result, may not access care from VA. In addition, female veterans may have concerns about VA’s health care system because they perceive that the care is male oriented, and therefore, VA is not a place where they feel comfortable receiving mental health care. Female veterans are a growing demographic in the veteran population—from fiscal year 2010 to fiscal year 2020, the percentage of female veterans in the total veteran population is projected to increase from approximately 8 percent to approximately 10 percent, according to VA’s National Center for Veterans Analysis and Statistics. (See app. III for data on the gender of veterans receiving care from VA.)

Reservist or National Guard status: Reservists and National Guard members may be particularly hindered by privacy and confidentiality concerns because they worry that accessing mental health care might have a negative impact on their military or civilian careers.
For example, Reservists and National Guard members may not access mental health care because of concerns about military leaders obtaining access to their VA health records and treating them differently or limiting their career development because they accessed mental health care. As of November 2010, Reservists and National Guard members made up nearly 50 percent of the OEF/OIF veteran population, according to VA data.

Rural location: Veterans who live in rural locations may be particularly hindered by access challenges because of the distance they may have to travel to obtain mental health care. According to the Office of Rural Health, veterans in rural areas are less likely to access mental health services than veterans in urban areas, in part because they must travel greater distances to receive care and have more limited public transportation options. According to VA’s Office of Rural Health, as of fiscal year 2010, veterans living in rural areas made up 41 percent of the veterans enrolled in VA’s health care system.

VA has expanded options to increase veterans’ access to mental health care and implemented education efforts to help connect veterans with care, according to VA officials. VA has begun integrating mental health care into its primary care settings. Specifically, VA now requires its primary care clinics to conduct mental health screenings and has placed mental health care providers in primary care settings. For example, VA requires veterans treated in primary care settings to be screened for PTSD, depression, substance abuse disorders, and a history of military sexual trauma. Further, in 2008, VA began requiring primary care clinics that serve more than 1,500 veterans annually to have mental health providers available on-site to serve veterans. Historically, veterans were more limited in the ways they could access VA’s mental health services.
For example, some veterans could receive mental health care only if they went to specialty VA mental health facilities, such as mental health clinics. According to VA, from fiscal years 2008 through 2010, the number of unique patients receiving mental health care in a primary care setting doubled. Several VA officials who work in primary care clinics that have integrated primary and mental health care told us that this integration is critical for lowering the stigma of receiving mental health care and for creating an environment of collaboration among providers for discussing veterans’ needs and treatment options. VA also has continued to increase the number of its Vet Centers, which provide confidential and free counseling services to address mental health issues. From fiscal year 2008 to August 2011, VA increased the number of Vet Centers from 232 to 292 and, according to VA, plans to open another 8 before the end of 2011. VA also has expanded the availability of Vet Center services through the use of approximately 70 Mobile Vet Centers—specially equipped vehicles that help bring Vet Center counseling services to more veterans, particularly those in rural areas. Vet Centers are often the first point of contact within VA for veterans and, according to VA and VSO officials, can help veterans overcome barriers to accessing mental health care. For example, many Vet Center counselors have firsthand combat experience, which, according to VA, helps them relate to veterans and reduce the stigma of mental health care that veterans may experience. Additionally, VA has expanded its use of call centers to help connect veterans with counseling services. VA call centers are telephone-based systems through which veterans can access free, confidential counseling services. 
VA officials said that the call centers are an effective way to reach veterans because discussions with call center staff, many of whom are also veterans, may help callers assess whether they could benefit from mental health care. One call center VA operates, the Veterans Crisis Line, allows veterans and their families to call to receive multiple services, including suicide prevention services, 24 hours a day, 7 days a week. According to VA officials, since the Veterans Crisis Line became operational in 2007, it has received more than 400,000 calls and referred approximately 55,000 veterans to local VA suicide prevention coordinators for same-day or next-day services. In addition to the Veterans Crisis Line, VA officials told us that VA has call centers focused on specific populations, such as combat veterans, homeless veterans, and family members of veterans. Moreover, VA has increased its mental health staff from about 14,000 in fiscal year 2006 to more than 21,000 in fiscal year 2011, according to VA. VA also has expanded the availability of telemental health services, which allow veterans to access mental health care providers remotely through VA medical centers, community-based outpatient clinics, and Mobile Vet Centers. Without telemental health, according to VA, some veterans in rural areas would have to drive as much as 5 hours to the nearest mental health provider, potentially decreasing their access to mental health care. To increase the availability of mental health appointments, as of 2007, VA required its mental health clinics to begin providing “after hours” treatment times, such as early morning, evening, or Saturday morning treatment times, to better accommodate veterans’ schedules, including weekday school or work schedules. 
Additionally, as of 2007, VA has required that all veterans with mental health referrals be contacted within 24 hours to assess their needs; for nonemergency situations, VA requires that veterans receive follow-up care within 14 days of their referral. To help connect veterans with mental health care, VA has implemented various efforts to educate veterans, veterans’ families, health care providers, and other community stakeholders about mental health conditions and care. VA’s efforts to help connect veterans with mental health care include collaborations with the Department of Defense, redesigned websites, and other technology-based education tools. VA has collaborated with the Department of Defense to educate veterans and active duty servicemembers returning home from deployments about VA benefits, including mental health care, through activities such as Yellow Ribbon Program events and postdeployment health reassessments. According to VA officials, VA has redesigned some of its key mental health websites—including its websites for the Office of Mental Health Services and the National Center for PTSD—to raise awareness of and provide convenient access to some of VA’s mental health services, such as its call centers and resources for locating mental health providers. VA also has developed interactive technology-based tools to help educate veterans about how to recognize the symptoms of mental health conditions and connect with VA mental health care, including web-based self-help applications, mobile phone applications, and social media sites, such as Twitter and Facebook. In addition, VA has developed tailored efforts to educate specific groups of veterans, such as Native American veterans and veterans with serious mental illness. (See table 4 for examples of VA efforts to educate specific groups of veterans.) 
VA also has efforts to educate veterans’ families about what veterans may be experiencing and how to recognize the possible need for mental health care, according to VA officials. For example, VA has a guide for family members posted on its websites that describes common reactions to being in war, warning signs that a veteran or servicemember might need outside help, and where to go for help. According to VA and VSO officials, veterans’ families are often the first to notice that a veteran is having mental health problems and may be more successful in encouraging the veteran to seek care. Additionally, VA offers training to teach its primary care physicians how to screen veterans for mental health conditions and to have discussions with veterans about what to expect during mental health care. VA also offers provider training covering topics such as the assessment and treatment of PTSD or military sexual trauma. According to VA, these types of training are important because primary care physicians are often a first point of contact for veterans who might benefit from VA mental health care. Additionally, the training helps educate mental health care providers about evidence-based mental health practices, including issues regarding gender differences and cultural competencies. For example, according to VA, its National Center for PTSD offers web-based training intended to enhance VA staff sensitivity to, and knowledge of, specific health care needs affecting women veterans. VA also has developed efforts to educate other community stakeholders, including law enforcement personnel, chaplains, and employers, about veterans’ mental health conditions and VA mental health care. For example, VA has a program that helps law enforcement personnel identify veterans with mental health conditions and connect these veterans to appropriate mental health treatment options.
The literature shows that some veterans’ mental health conditions have been found to increase their likelihood of entering or reentering the criminal justice system. VA also has developed a series of training conferences for chaplains and clergy to educate them to recognize the symptoms of PTSD and other service-related mental health conditions and to refer veterans to VA for care. According to VA, training chaplains and clergy to recognize the symptoms of mental health conditions is important because they are often a first point of contact for veterans in need of assistance. To support employers who may interact with veterans who have mental health conditions, VA has developed a set of online resources, including information on postdeployment mental health issues and information on mental health care available through VA.

We provided a draft of this report to VA for comment. In its response, which is reprinted in appendix IV, VA provided technical comments, which we have incorporated as appropriate. We are sending a copy of this report to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

To determine how many veterans received mental health care from the Department of Veterans Affairs (VA) from fiscal years 2006 through 2010, we obtained data from VA’s Northeast Program Evaluation Center (NEPEC). NEPEC used VA’s administrative data files, which include inpatient and outpatient files, to generate counts of the number of veterans who received mental health care.
For the purposes of this report, we defined mental health care as the care provided to veterans with mental health conditions. A veteran was counted as having a mental health condition if, at any point in the fiscal year, his or her medical record indicated at least two outpatient encounters with any mental health diagnosis (with at least one encounter having a primary mental health diagnosis) or an inpatient stay in which the veteran had any mental health diagnosis. Additionally, the number of veterans represents a unique count of veterans; veterans were counted only once, even if they received care multiple times during a fiscal year or across the 5-year period. NEPEC also used VA administrative data files to provide us with data on the total number of veterans receiving any health care at VA—not just veterans receiving mental health care. The number of veterans includes former active duty servicemembers, including Reservists and National Guard members. NEPEC’s data on the number of veterans receiving mental health care included breakouts by specific demographic groups, such as era of service; by the type of setting where care was provided; and by the mental health diagnostic category. For the era of service data, NEPEC identified two groups of veterans: (1) veterans serving in the Operations Enduring Freedom (OEF) and Iraqi Freedom (OIF) era and (2) veterans from all other eras—including peacetime. Because OEF/OIF veterans are not tracked separately from Persian Gulf War veterans in VA’s administrative data files, NEPEC used Department of Defense data to identify OEF/OIF veterans from the total population of veterans in the VA data. The non-OEF/OIF veterans in the VA data comprised the veterans from all other eras. Veterans who served in more than one era of service were assigned based on their most recent era of service. 
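The counting rule described above can be expressed as a simple decision over a veteran's encounters. The sketch below is an illustrative restatement of that rule, not VA's or NEPEC's actual code; the field names and record layout are hypothetical, chosen only to make the logic concrete.

```python
def meets_mental_health_criteria(encounters):
    """Return True if a veteran's encounters in a fiscal year satisfy the
    counting rule: (a) at least two outpatient encounters with any mental
    health diagnosis, at least one carrying a primary mental health
    diagnosis, or (b) any inpatient stay with a mental health diagnosis.
    Each encounter is a dict with hypothetical keys 'setting'
    ('inpatient'/'outpatient'), 'any_mh_dx', and 'primary_mh_dx'."""
    # Rule (b): any inpatient stay with any mental health diagnosis
    if any(e["setting"] == "inpatient" and e["any_mh_dx"] for e in encounters):
        return True
    # Rule (a): two or more outpatient encounters with a mental health
    # diagnosis, at least one of them primary
    outpatient_mh = [e for e in encounters
                     if e["setting"] == "outpatient" and e["any_mh_dx"]]
    return (len(outpatient_mh) >= 2
            and any(e["primary_mh_dx"] for e in outpatient_mh))


def unique_veteran_count(records):
    """records: iterable of (veteran_id, fiscal_year, encounters).
    A veteran is counted once, even if he or she qualifies in multiple
    fiscal years, mirroring the unique-count approach described above."""
    qualifying = {vid for vid, _fy, encs in records
                  if meets_mental_health_criteria(encs)}
    return len(qualifying)
```

Because the identifier set deduplicates across years, a veteran who received care in both fiscal years 2006 and 2010 contributes one to the 5-year total, consistent with the report's unique-count methodology.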
NEPEC also provided data on the settings where care was provided—that is, specialty mental health care settings that primarily provided mental health services or other settings that may have provided some mental health services but focus primarily on other types of care, such as primary care. Furthermore, NEPEC provided data on the top five mental health diagnostic categories. The most common diagnostic categories were determined based on the number of veterans with diagnoses included in the diagnostic category, not the number of visits associated with the diagnoses. To assess the reliability of the data NEPEC provided us, we discussed with NEPEC officials their methodology and data collection techniques used for obtaining and using the data, the data checks that NEPEC performed, as well as any limitations officials identified in the data. In addition, we did our own review of NEPEC’s programming and methodological approaches using data file documentation, code book and file dictionaries, and programming logs NEPEC officials provided. We determined that the data were sufficiently reliable for our purposes. The data on veterans receiving care from VA are not necessarily representative of the entire veteran population because some veterans receive care outside of VA. To identify the key barriers that may hinder veterans from accessing mental health care from VA, we searched research databases, such as MEDLINE and PsycINFO, that included peer-reviewed journals to capture relevant literature published on or between January 1, 2006, and March 3, 2011. We searched these databases for articles with key words in their titles or subject terms related to veterans, mental health, and barriers. In addition, we also reviewed relevant literature that was cited in articles from our original search or recommended to us during the course of our research. 
To corroborate the barriers identified in the literature, we interviewed officials from (1) several VA offices—the Office of Mental Health Services, the Office of Mental Health Operations, the Office of Rural Health, the Office of Research and Development, and Readjustment Counseling Services; (2) several mental health–focused VA research centers—the Mental Illness Research, Education and Clinical Center, the Serious Mental Illness Treatment Resource and Evaluation Center, the Center for Chronic Disease Outcomes Research, and the National Center for PTSD; (3) several VA mental health and primary care providers; and (4) a judgmental sample of veterans service organizations (VSO). We defined “key barriers” as those that the majority of VA and VSO officials we interviewed said could have the greatest impact on veterans. As a result, we do not report an exhaustive list of all possible barriers that veterans may face. To identify the efforts VA has implemented to increase veterans’ access to VA mental health care, we interviewed officials from the same VA offices and mental health–focused VA research centers that we interviewed to corroborate the barriers for veterans. We also reviewed supporting VA documentation, such as program descriptions, policy directives, and congressional budget justifications. We compiled a list of efforts by focusing on the efforts that had been implemented and were national in scope. As a result, the list of efforts we report is not an exhaustive list of all VA efforts. In addition, we did not assess the extent to which VA has fully implemented these efforts or their effectiveness, including the extent to which the efforts eliminate or diminish barriers that may hinder veterans from accessing mental health care. We conducted our work from November 2010 to October 2011 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. 
The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; Eleanor M. Cambridge; Kathleen Diamond; Lisa Motley; Monica Perez-Nelson; Karin Wallestad; and Suzanne Worth made key contributions to this report.
In fiscal year 2010, the Department of Veterans Affairs (VA) provided health care to about 5.2 million veterans. Recent legislation has increased many Operations Enduring Freedom (OEF) and Iraqi Freedom (OIF) veterans' priority for accessing VA's health care, and concerns have been raised about the extent to which VA is providing mental health care to eligible veterans of all eras. There also are concerns that barriers may hinder some veterans from accessing needed mental health care. GAO was asked to provide information on veterans who receive mental health care from VA. In this report, GAO provides information on (1) how many veterans received mental health care from VA from fiscal years 2006 through 2010, (2) key barriers that may hinder veterans from accessing mental health care from VA, and (3) VA efforts to increase veterans' access to VA mental health care. GAO obtained data from VA's Northeast Program Evaluation Center (NEPEC) on the number of veterans who received mental health care from VA. The number of veterans represents a unique count of veterans; veterans were counted only once, even if they received care multiple times during a fiscal year or across the 5-year period. GAO also reviewed literature published from 2006 to 2011, reviewed VA documents, and interviewed officials from VA and veterans service organizations (VSO). Over the 5-year period from fiscal years 2006 through 2010, about 2.1 million unique veterans received mental health care from VA. Each year the number of veterans receiving mental health care increased, from about 900,000 in fiscal year 2006 to about 1.2 million in fiscal year 2010. OEF/OIF veterans accounted for an increasing proportion of veterans receiving care during this period. 
The key barriers identified from the literature that may hinder veterans from accessing mental health care from VA, which were corroborated through interviews, are stigma, lack of understanding or awareness of mental health care, logistical challenges to accessing mental health care, and concerns about VA's care, such as concerns that VA's services are primarily for older veterans. Many of these barriers are not necessarily unique to veterans accessing mental health care from VA, but may affect anyone accessing mental health care from any provider. Veterans may be affected by barriers differently based on demographic factors, such as age and gender. For example, younger OEF/OIF veterans and female veterans may perceive that VA's services are primarily for someone else, such as older veterans or male veterans. VA has implemented several efforts to increase veterans' access to mental health care, including integrating mental health care into primary care. VA also has implemented efforts to educate veterans, their families, health care providers, and other community stakeholders about mental health conditions and VA's mental health care. According to VA officials, these efforts help get veterans into care by reducing, and in some cases eliminating, the barriers that may hinder them from accessing care. GAO provided a draft of this report to VA for comment. In its response, VA provided technical comments, which were incorporated as appropriate.
According to USDA, SECD are characterized by acute, rapidly spreading viral diarrhea in pigs. No other species, including humans, are known to be affected, and these diseases are not a direct public health threat. Pigs develop varying degrees of diarrhea and loss of appetite depending upon the age of the pig infected. Piglets are the most severely affected and have the highest mortality rates (50 to 80 percent), while growing and adult pigs have the lowest rates (approximately 1 to 3 percent). PED was first recognized in England in 1971 and has been known to exist in China since 1973. According to USDA’s SECD case definition document, China has seen a large increase in outbreaks since 2010, an increase that has been attributed to the emergence of new strains of PED. USDA’s case definition document also provides information about the presence or suspected presence of PED and PDCoV in the United States and other countries. For example, the first outbreak of PED in the United States was reported in May 2013. PED has also been reported in Mexico and Canada, as of August 2013 and January 2014, respectively. Further, as of June 2014, PED is thought to be widespread throughout most regions of Western and Central Europe and Southeast Asian countries. In addition, PED is suspected in parts of South America. PDCoV is more recent and less widespread; it was first reported in China in 2012, in the United States in January 2014, and later in Canada, according to the document. USDA’s mission includes protecting and improving the health, quality, and marketability of our nation’s animals and animal products by working to prevent, control, or eliminate animal diseases, and by monitoring and promoting animal health and productivity. USDA’s overall budget request was $23 billion in fiscal year 2015, of which $287 million was budgeted for the agency’s animal health efforts, including disease response. 
USDA comprises multiple organizations that support its animal health mission; see table 1 for selected organizations and their specific missions and roles. In carrying out its animal health mission, USDA participates in surveillance and preparedness, as well as response efforts for animal diseases. Surveillance activities can be conducted to monitor animal health, or in response to a specific disease. Animal disease surveillance consists of collecting, analyzing, and interpreting animal health data to detect diseases early, enable rapid reporting and response during disease outbreaks, and control the spread of disease. According to USDA guidance, the agency also can use such data for accurate risk analysis, which includes assessing present, future, and emerging threats to animal health, and estimating the likelihood of a damaging event and the resulting consequences. As part of the agency’s preparedness and response efforts, USDA has identified certain animal diseases that pose a risk and must be reported if they occur in the United States. Data collected on these diseases are used to estimate their geographic distribution and severity, which inform officials’ response efforts. For example, USDA established program diseases to control or eradicate specific diseases that must be reported to federal and state animal health officials. The agency works with federal-state-industry stakeholders to control or eradicate these diseases. USDA describes program diseases as serious zoonotic diseases, diseases that are economically important, or diseases of concern to the livestock, poultry, or aquaculture industries. Among these program diseases, some are designated as foreign animal diseases, which, in addition to being reported to USDA, must be reported to the international community. USDA defines a foreign animal disease as a terrestrial animal disease or pest, or an aquatic animal disease or pest, not known to exist in the United States or its territories. 
A foreign animal disease may involve livestock, poultry, wildlife, or other animals. The World Organisation for Animal Health (formerly known as the Office International des Epizooties and still commonly known by its previous acronym, OIE) develops the list of internationally reportable animal diseases. This list is used by OIE’s 180 member countries when determining trade restrictions on animals or animal products that pose a risk to their agricultural industries. According to USDA’s guidance about animal diseases, one of the most immediate and severe consequences of an incident of an OIE-listed animal disease in the United States is the loss of export markets. For example, according to USDA’s Economic Research Service website, as a result of the current outbreak of highly pathogenic avian influenza—an OIE-listed disease—as of June 2015, 15 countries, including China, Russia, and South Korea, have banned poultry imports from the United States, with many other countries placing bans on U.S. states or regions. Rapid response to diseases can prevent or limit sudden, negative consequences for animal health, economic security, and food security. Additionally, a rapid response can help normal production to resume as quickly as possible. When deciding on and implementing actions to respond to outbreaks of animal disease, USDA collaborates with other federal agencies, state officials, and industry. For example, USDA works with FDA, which, among other things, is responsible for ensuring the safety of feed, to investigate potential feed contamination; state animal health officials and state departments of agriculture to assist in disease control efforts such as data collection; and industry to implement biosecurity practices that are critical to limiting disease entry and spread. 
For example, diseases can be introduced or spread to healthy animals via footwear and outerwear, but biosecurity practices such as changing or covering these items before entering premises can help prevent the introduction of disease. Similarly, changing or covering these items after working with infected animals can prevent the spread of disease. According to USDA’s Animal and Plant Health Inspection Service’s strategic plan, collaborative efforts are thought to produce more public value than any single agency could produce. Components of these efforts include the identification of roles and responsibilities and mutually agreed-upon common outcomes, such as the control or eradication of a disease, as well as joint strategies for achieving the agreed-upon outcome. USDA did not take regulatory action during the initial response to the SECD outbreak, beginning in May 2013 when the PED virus was first detected, because it did not believe then that such action was necessary to manage the outbreak. By not taking regulatory action, USDA had limited information about the initial geographic distribution of the diseases; their modes of spread; and the locations of the first infected herds, which could have helped identify the source of entry of the diseases in the United States. USDA did not take regulatory action during the initial response when SECD were identified in May 2013. According to USDA officials, the agency was reluctant—and did not believe it was necessary—to take regulatory action, such as requiring reporting of infected herds or restricting the movement of pigs. Such action could have had negative financial impacts on the swine industry, according to USDA documents. Instead, the agency initially supported swine industry-led efforts to address SECD.

Moving Pigs
Pigs are often moved among multiple premises at different stages of their life spans to accommodate their growth in size. Typically, pigs are moved by truck and trailer. 
Additionally, the U.S. Department of Agriculture (USDA) estimates that more than 600,000 pigs are transported to slaughter on any given day in the United States. According to industry representatives and USDA, movement restrictions such as a quarantine lasting more than a week could potentially result in euthanasia of hundreds of thousands or millions of animals, depending on how long the quarantine was in place, since premises may not be able to humanely house pigs larger than they customarily handle. The agency’s decision not to take regulatory action took into account several factors, including that these diseases were not listed as internationally reportable animal diseases, do not pose a threat to people, and were not lethal to all pigs. If USDA had designated these as foreign animal diseases within the United States, the agency might have been expected to impose quarantines, and other countries might have restricted the importation of pigs or pig products. According to USDA guidance for reportable and foreign animal diseases, import restrictions could potentially have severe consequences because U.S. animal agricultural industries are becoming more dependent on exports, and the long-term strategic plans of these industries call for increasing the amount of goods sold abroad.

Porcine Epidemic Diarrhea Virus (PED) Strains
The U.S. Department of Agriculture and other researchers have reported on more than one strain of PED presently in the United States. According to recent research, two strains identified in the United States closely resemble the strains of PED virus circulating in China; however, genetic resemblance does not indicate how the virus arrived in the United States. USDA officials told us that, at the time of their decision, varying strains of PED were known and active around the world, and the agency and the swine industry were aware of how PED spread (via fecal contamination). 
At that time, USDA believed the best course of action was for industry to manage SECD, according to an agency announcement about the diseases. USDA officials explained that industry was already leading the response to other swine diseases, such as transmissible gastroenteritis. The initial response by industry and USDA to the SECD outbreak included efforts to learn more about these diseases. Within 3 months of the first PED diagnosis, one of the main swine industry associations—the National Pork Board—made $800,000 available for research to learn about PED and potential ways to control the disease, such as through promoting maternal (sow) immunity. USDA provided support and collaborated with industry associations in the response. Initial agency support included providing diagnostic support through its National Veterinary Services Laboratories to the National Animal Health Laboratory Network; providing funding for and participating with industry associations in investigations of herds that became infected without an obvious reason, such as a newly infected herd in a remote area that had no clear connections to another infected herd; and compiling and reporting to industry associations positive testing results (indicating infected herds) that were voluntarily reported by veterinarians or producers to the National Animal Health Laboratory Network laboratories. USDA also funded SECD-related research through the regular annual grant cycle of its National Institute of Food and Agriculture, as well as within the Agricultural Research Service. According to several federal, state, industry, and academic stakeholders we interviewed, research funding is important. Several stakeholders explained that both industry and federal funding are important to promote research because they typically have different objectives. Generally, industry focuses on research with near-term applicability for producers, such as identifying which disinfectants are most effective in killing viruses. 
USDA generally supports research that is more broadly intended to further understanding of animal diseases. For example, past USDA work led to a diagnostic tool that was used to confirm the first identification of PED in the United States. The protocol for this tool was provided to the National Animal Health Laboratory Network. This protocol helped veterinary diagnostic laboratories participating in this network develop faster diagnostic tools, which they currently use to identify SECD. Because USDA did not take regulatory action, the agency had limited information about the initial geographic distribution of the diseases; their modes of spread; and the locations of the first infected herds, which could have helped identify the source of entry of the diseases in the United States. Further, in part because USDA did not have information about locations of the first infected herds, it did not investigate the first outbreak of SECD at the onset, and the source of entry of SECD into the United States will likely never be determined. At the onset of SECD in the United States, USDA did not know the geographic distribution because disease reporting was incomplete. State veterinary diagnostic laboratories and swine veterinarians initially identified these diseases and provided USDA with limited information on their geographic distribution by state, but not by premises. According to USDA’s Chief Epidemiologist, location information is an important component in understanding how the disease is spread and how to prevent diseases and mitigate their spread. However, USDA did not initially require reporting of infected herds or the exact location of these herds, and swine producers were reluctant to voluntarily share this information with USDA. 
USDA officials, swine veterinarians, and industry representatives that we interviewed believed that producers’ reluctance stemmed partly from concern about whether USDA had the ability or procedures in place to maintain confidentiality of this information. Swine veterinarians we interviewed told us that producers were also concerned about public perception of these diseases based on past experience with other diseases. Specifically, in 2009, a novel influenza virus with origins in pigs caused a worldwide epidemic and led to substantial losses in pork sales when consumers mistakenly believed they could become infected by eating pork.

Swine Enteric Coronavirus Diseases (SECD) Spread
GAO’s review of literature found that there are multiple likely modes for the spread of SECD. Specifically, these studies found that SECD could likely be spread by transport vehicles, people, feed, and air. For example, people involved in transporting pigs can potentially spread virus on their clothing and boots from one location to another. In addition, employees and veterinarians in direct contact with pigs, service people delivering feed or water, maintenance workers, and others who visit premises can carry the virus onto and off of the premises, spreading it inadvertently. The National Pork Board recommends biosecurity practices such as changing outerwear and footwear before entering premises with pigs to help mitigate the risk of spreading diseases. For further discussion of the literature review, see appendix I. According to USDA’s summary of SECD testing results, the information USDA received voluntarily from laboratories in the early outbreak of PED, and later for PDCoV, was not useful for determining the number of infected herds. 
In particular, swine producers did not consistently share information on the location of their premises when submitting samples for testing, and some producers submitted multiple samples at various times from the same premises in an effort to determine if SECD had been eradicated. Without complete location information, USDA did not know if these test samples were for diagnosing potentially newly infected herds, or for retesting herds that had previously tested positive. Additionally, in some cases, the results reported to USDA contained inaccuracies, such as incorrect state locations of the infected herds. For example, USDA officials explained that a swine-producing company could be based in one state, but the premises on which the sample was collected might have been in another state. Further, test samples provided by producers to diagnostic laboratories did not identify the type of swine infected, such as breeding sows, piglets, or pigs ready for slaughter—information useful for understanding the type of animal most susceptible and the impact of these diseases on industry. The laboratories also did not report to USDA the results of negative tests until November 2013, which could have demonstrated where the diseases were not occurring or where the disease occurrence might have been declining. The limited information USDA received, while incomplete, suggested that the diseases were quickly spreading to multiple states. Specifically, PED was initially diagnosed in 3 states (Indiana, Iowa, and Ohio) in May 2013. The laboratories then diagnosed PED in 10 additional states through June 2013 and in about 30 states total through May 2014. Similarly, the limited information for PDCoV suggested that it was present in the same 3 original states as PED in January 2014 (Indiana, Iowa, and Ohio) and had spread to at least 11 additional states through May 2014. 
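The reporting gap described above has a simple data-processing consequence: without a premises identifier, a tally of positive test results cannot distinguish a newly infected herd from a retest of an already-counted one. The sketch below illustrates that point only; it is a hypothetical example, not USDA's or any laboratory's actual reporting logic, and the record fields are invented for illustration.

```python
def count_positive_premises(test_results):
    """Tally positive SECD test results by premises.
    test_results: list of dicts with hypothetical keys 'premises_id'
    (str, or None when the submitter withheld location) and 'positive'
    (bool). Returns (number of distinct premises with a positive result,
    number of positive samples that cannot be attributed to a premises)."""
    known = set()   # deduplicates repeat submissions from the same farm
    unknown = 0
    for r in test_results:
        if not r["positive"]:
            continue
        if r["premises_id"]:
            known.add(r["premises_id"])
        else:
            # No location reported: cannot tell a new herd from a retest
            unknown += 1
    return len(known), unknown
```

In this sketch, two positive samples from the same identified premises count as one infected herd, while a positive sample with no premises identifier can only be flagged as unattributable, mirroring the uncertainty USDA described.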
Beginning in May 2013, USDA collaborated with universities and industry associations on various efforts to understand how SECD may have spread in the United States. In one instance, USDA personnel provided statistical and technical support for university-based research on the possible airborne transmission of PED and geographic clustering of positive sites. In another effort, USDA personnel contributed to questionnaire development and analysis for a nationwide survey by swine veterinarians of selected swine producers with PED-infected herds; this effort found that feed could have been a potential factor in the spread of the disease. In addition, for PDCoV, USDA participated with industry in investigations in April 2014 at premises where the disease was diagnosed in the United States.

Porcine Epidemic Diarrhea Virus (PED) in Canada
PED was first reported in Canada in January 2014, 8 months after it was first reported in the United States. Canadian provincial and federal government officials, with help from private swine veterinarians, conducted an immediate epidemiological investigation into the source of the PED and found an association between cases of PED in Canada and a feed ingredient (spray-dried porcine blood plasma) from a U.S. feed distributor. According to Canadian federal government officials, the feed distributor voluntarily removed this feed from the market, and the officials believe this change helped reduce the spread of PED in Canada. USDA officials explained that there were other actions the agency could have taken to potentially limit the spread. For example, the agency could have imposed a temporary quarantine, restricting movement for a few days. However, these officials told us that, in their opinions, quarantine, depending on its length, could have had negative financial consequences for swine producers. 
Further, the officials told us that agency actions, such as quarantine, may not have been able to prevent the spread, given the pig industry’s reliance on movement. The diseases are highly infectious, and diagnostic laboratories later determined that PED had already been spreading before the first SECD was identified in May 2013, with multiple herds infected as early as April 2013. After observing what happened in the United States, officials in at least one Canadian province worked with producers to establish voluntary movement restrictions to limit the spread in Canada, according to Canadian swine industry and government officials. Additionally, several Canadian provinces required reporting of infected herds, including premises identification numbers. In the United States, these numbers serve as a way to track a farm (or premises) without using specific location information, such as longitude and latitude coordinates or a postal address, to help protect the privacy of the producer. USDA officials told us they missed an opportunity to conduct in-depth epidemiological outbreak investigations at the premises where PED was first diagnosed. Such investigations can help identify how a disease may have entered specific premises and, thus, may help determine how the disease entered the United States. USDA’s investigation guidance for emerging animal disease incidents in effect during the first outbreak stated that collecting and analyzing epidemiological information is a critical element of an investigation. Had USDA followed this guidance, it would have conducted key steps of an outbreak investigation at the onset of the first cases, such as interviewing persons for incident history on the first infected premises near the time of the initial diagnosis and collecting and analyzing other epidemiological data for those incidents. 
Such an investigation does not guarantee that the source of the outbreak will be determined, but it typically provides some information necessary for such a determination. USDA’s Chief Epidemiologist told us that timely outbreak investigations at the first infected premises would have been helpful in collecting more information on the source of entry. Because USDA did not follow its investigation guidance, the source of entry of PED into the United States will likely never be determined. When asked why USDA did not conduct a timely outbreak investigation at the premises where PED was first diagnosed, senior USDA officials told us that the agency did not have information about locations of the first infected herds—information that would have been gained through regulatory action to require reporting. In addition, agency officials said USDA learned about the disease after it was identified by a laboratory. Officials explained that typically USDA applies its investigation guidance when the agency is contacted to help identify an unknown disease on the premises. One senior USDA official explained that, in the case of PED, USDA chose not to follow its investigation guidance since the disease was already identified—even though the guidance does not provide for an exception in such cases. USDA’s investigation guidance states that employees may not deviate from the directions provided without appropriate justification and supervisory concurrence. USDA officials acknowledged that they did not follow the guidance. USDA currently does not have a process in place that would help ensure this guidance is followed. According to federal standards for internal control, internal control activities help ensure that management directives, such as those incorporated in the investigation guidance, are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. 
One example of a control activity would be establishing a process for documentation of the justification and approval of any deviation from the directions. Without appropriate control activities, USDA cannot have reasonable assurance that the guidance will be followed in future outbreaks. Amid mounting concerns about the spread of the diseases and the associated economic losses, USDA took additional actions to manage SECD beginning in June 2014. In particular, USDA issued a federal order imposing reporting and planning requirements, and it provided financial assistance to states and producers. USDA cites progress in addressing SECD, but stakeholders we interviewed have questioned the usefulness of some of USDA’s actions. In addition, USDA conducted a retrospective study of potential ways PED could have entered the United States and identified potential preventive strategies based on its findings. In June 2014, USDA issued a federal order to help manage the diseases. According to USDA documents, the order followed a winter in which SECD appeared to spread at increasing rates, leading to mounting producer concerns about economic losses from pig deaths. The order includes two basic requirements that remain in place at the time of this report. First, it requires anyone with knowledge of the diseases, including producers, veterinarians, and diagnostic laboratories, to report all new SECD incidents to USDA or state animal health officials, providing specific information, including premises identification numbers. It also requires that, before a herd is considered confirmed positive for SECD, the herd must have both positive test results and at least one pig with a history of clinical signs consistent with SECD. According to USDA, routine, standard reporting for SECD helps determine the magnitude of the diseases in the United States and documents progress in managing the diseases. 
Second, the order requires producers reporting SECD incidents to work with a veterinarian—either their herd veterinarians, or USDA or state animal health officials—to develop herd management plans. These herd management plans, which must be submitted to USDA, list biosecurity practices that the producers will follow to control the spread of disease. USDA also provided approximately $26 million for a variety of activities to help manage SECD. This funding was budgeted for, among other things:
cooperative agreements with state animal health offices to support SECD management and control activities related to required reporting;
financial assistance for diagnostic testing to determine the presence or absence of SECD;
financial assistance for veterinarians developing the required herd management plans—specifically, reimbursement of $150 per plan;
financial assistance to producers for biosecurity practices, specifically for purchasing disinfectants for transportation trucks and premises;
efforts to develop vaccines;
genomic sequencing of the viruses that cause SECD to better understand their characteristics; and
internal USDA SECD-related activities, such as staff time for SECD reporting and working with stakeholders.
Following its regulatory actions and provision of funding related to SECD, USDA announced progress in managing these diseases. However, stakeholders we interviewed raised concerns about the usefulness of some aspects of USDA’s efforts to address SECD. USDA officials explained that, in response to some of these concerns, they have shifted funding to activities that stakeholders found more useful. In December 2014, USDA announced in a public statement that it had made progress addressing SECD and was receiving more accurate and timely information about SECD-infected herds and their locations. 
According to USDA’s announcement, this information allowed animal health officials to better understand how the diseases spread and what measures have been most effective in containing them. More specifically, USDA noted that it had achieved the following in reporting and managing SECD:
received information quickly and electronically through an improved information technology network with the laboratories, allowing federal and state health officials to better understand the spread of an animal disease outbreak in nearly real time;
increased the number of diagnostic tests submitted that include the premises identification number, allowing for more accurate monitoring of current disease incidence and spread;
granted two conditional licenses for vaccines developed for SECD; and
improved its ability to detect new viruses and changes to existing viruses through genomic sequencing.
Several state, industry, and federal stakeholders we interviewed told us that providing financial assistance for diagnostic testing and requiring reporting were the most important USDA actions for helping to manage these diseases. Several stakeholders also said that these actions improved information available about the geographic distribution of SECD. According to USDA’s Chief Epidemiologist, frequent diagnostic testing demonstrated whether biosecurity practices to reduce the risk of spread were working. The federal order’s requirement to report positive SECD incidents, with specific information including premises identification numbers, has resulted in USDA having more accurate data on the frequency and date of new SECD incidents, and on the location of infected herds. 
According to a USDA report on these diseases, the agency can use premises identification numbers to identify whether a herd was previously reported as infected to avoid double counting infected herds, and USDA officials commented that the required information also assists officials in contacting producers to confirm clinical signs of illness. USDA can now more accurately report on the number of new infections and on infections by state. As shown in figures 1 and 2 on the next pages, from June 5, 2014, through September 5, 2015, PED-infected premises have been confirmed in 28 U.S. states and 1 U.S. territory, and PDCoV-infected premises have been confirmed in 15 U.S. states. USDA reported in September 2015 that, since June 2014, cumulatively, 1,599 premises have been confirmed as having herds infected with SECD. More specifically, within these premises 1,468 herds have been infected with PED, 73 herds have been infected with PDCoV, and 58 herds have been infected with both PED and PDCoV. From June 2014 through September 2015, about 40 percent of infected U.S. herds were in Iowa, the top swine-producing state. Using industry estimates, USDA reported in June 2014 that these diseases have caused approximately 7 million pig deaths, mainly among piglets, in the United States, with PED causing the majority of these deaths. Some aspects of USDA’s efforts to address SECD were not as well received as the financial assistance for diagnostic testing and the reporting requirements, according to stakeholders in our review. For example, stakeholders we interviewed told us veterinarians did not seek the $150 financial assistance USDA offered for each herd management plan developed to help control the spread of disease on premises with SECD-infected herds because the effort associated with obtaining the assistance was not worth the amount received. 
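The de-duplication role of premises identification numbers described above—counting a premises as a new infection only the first time it is reported for a given disease—can be illustrated with a minimal sketch. The data, premises IDs, and field names below are hypothetical, not drawn from USDA's actual reporting system:

```python
# Illustrative sketch: using premises identification numbers to avoid
# double counting infected premises, as described in the report.
# All IDs and field names here are hypothetical.

from collections import defaultdict

def count_new_infections(reports):
    """Count unique infected premises per disease, ignoring repeat
    reports for a premises already confirmed with that disease."""
    seen = set()               # (premises_id, disease) pairs already counted
    counts = defaultdict(int)  # disease -> number of unique premises
    for r in reports:
        key = (r["premises_id"], r["disease"])
        if key not in seen:    # a repeat report is not a new infection
            seen.add(key)
            counts[r["disease"]] += 1
    return dict(counts)

reports = [
    {"premises_id": "IA-0001", "disease": "PED"},
    {"premises_id": "IA-0001", "disease": "PED"},    # duplicate report, not counted
    {"premises_id": "IA-0002", "disease": "PDCoV"},
    {"premises_id": "IA-0001", "disease": "PDCoV"},  # same premises, different disease
]
print(count_new_infections(reports))  # {'PED': 1, 'PDCoV': 2}
```

Without a stable identifier such as the premises identification number, the two reports for premises IA-0001 with PED could not be recognized as the same infection, which is the gap USDA faced before reporting was required.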
According to data provided by USDA, less than 16 percent of the funds originally budgeted for reimbursing veterinarians had been obligated as of August 2015. Additionally, according to several state officials and industry representatives and a USDA official responsible for the funds, many producers were not applying for financial assistance for their biosecurity practices. As of August 2015, less than $1 million of the $11.2 million initially budgeted for biosecurity payments had been obligated, according to the data USDA provided. State officials and industry representatives explained that producers were not seeking this assistance for various reasons, including that the effort associated with obtaining financial assistance was not worth the amount received; the assistance was limited to purchasing disinfectants; and the assistance was available only to producers with herds that tested positive after the federal order was issued. USDA officials explained that they, therefore, shifted funds from this category to cover other activities, such as diagnostic testing, in response to stakeholder concerns. USDA initially budgeted $2.4 million for diagnostic testing. However, according to USDA officials, laboratories and industry representatives requested that the agency treat SECD diagnostic testing similarly to testing for other reportable animal diseases and reimburse diagnostic laboratories for all tests, not only tests with positive results for infection. As a result, USDA increased the amount of financial assistance available for diagnostic testing within a few weeks of announcing its funding for SECD activities and, as of August 2015, about $10 million had been obligated for diagnostic testing. See table 2 for the initial funding amounts for SECD activities and obligated amounts by activity as of August 2015. 
Regarding other USDA actions, nonfederal stakeholders—including industry representatives, private veterinarians, and academics—told us that further work is needed for the conditionally approved PED vaccines to be effective in preventing new herds from getting infected. As a result, some stakeholders we interviewed said these vaccines are used on a limited basis or in conjunction with a traditional disease control method known as “feedback.” However, immunity from feedback or vaccines is not lifelong, and the disease may recur. In addition, some state animal health officials noted that it would be helpful if USDA’s information system for tracking infected herds automatically notified them of new infections in their state, similar to how USDA’s officials are notified. Some of these state officials explained that automatic notification could assist them in conducting their disease response activities. For example, officials may need to collect additional information from producers of infected herds to complete documentation of herd management plans and obtain additional samples to monitor herds’ disease status. Currently, to learn about incidents subsequent to the first in the state, a state official could either access USDA’s information system to check if there have been any new incidents within the state or contact each laboratory to request direct notification of new incidents, according to USDA officials we interviewed. While USDA was unable to definitively identify how either PED or PDCoV entered the United States, in September 2015, USDA released a retrospective study of numerous potential ways SECD could have entered the United States. This retrospective root cause study began almost a year after laboratories diagnosed PED in the first known infected herd. 
This retrospective root cause study indicates that the use of transport carrier totes is the most plausible potential source of entry based on the criteria the agency used when evaluating how PED may have entered the United States. These totes are large, flexible sacks with a capacity of more than 1,000 pounds that are used to carry dry products. USDA determined that these totes were generally not cleaned before being reused for a number of purposes, including distributing pet food treats and shipping pig feed ingredients, such as organic soybeans, to the United States. The study explained that organic soybeans are a product imported from China that can be fertilized with swine manure and are frequently shipped in totes. Because the study identified totes as a potential gap in U.S. border biosecurity, USDA has initiated further research into totes to provide evidence to support the study’s findings. Specifically, USDA is conducting tests to confirm that cross contamination between the totes and feed ingredients can occur and that the virus can survive during long transit times. According to a USDA official, the results demonstrate that the PED virus can survive on the totes for at least 5 weeks at room temperature and at least 10 weeks at 39 degrees Fahrenheit. According to the retrospective root cause study, the agency is also working with the Department of Homeland Security’s U.S. Customs and Border Protection to test samples of organic soybean shipments to determine whether they are a possible source of PED virus. The retrospective root cause study also identified two preventive strategies that could mitigate the potential risk related to totes: (1) not reusing these totes or (2) identifying appropriate cleaning and disinfection procedures for the totes before their reuse to transport products into the United States. 
USDA has communicated findings of the study to FDA, which, among other things, is responsible for ensuring the safety of feed, and to stakeholders in the feed and swine industry, who, according to USDA, could mitigate risks prior to exposure of animals. To improve its future response to emerging animal diseases, USDA has drafted new guidance and a proposed list of reportable diseases but has not defined key aspects of its response. More specifically, USDA has drafted guidance for responding to emerging animal diseases and has proposed a comprehensive list of animal diseases that must be reported by anyone with knowledge of the diseases. However, USDA has not defined roles and responsibilities or criteria for actions that are included in its response to emerging diseases. USDA has drafted new guidance for responding to emerging animal diseases; according to a USDA summary document, the agency developed this guidance as a result of its experience with SECD and to improve the agency’s response to future diseases. USDA made this guidance available for public comment from October 16, 2014, through January 16, 2015. The draft guidance describes USDA’s goals for addressing emerging diseases as (1) undertake global awareness of, assessment of, and preparedness for animal diseases or pathogens not currently in the United States that may be of animal or public health concern or have trade implications; (2) detect, identify, and characterize disease events; (3) communicate findings and inform stakeholders; and (4) respond quickly to minimize the impact of disease events. This draft guidance also refers to existing USDA guidance for conducting investigations and reporting results for emerging animal disease events. 
In conjunction with the draft guidance, USDA also released a “Proposed National List of Reportable Animal Diseases.” According to USDA’s description of the proposed list, it is intended to, among other things, facilitate the response to an emerging animal disease in the United States. In the list, USDA identifies specific animal diseases and their proposed monitoring and reporting requirements, which would also apply to emerging animal diseases either currently on the list or newly identified. Any individuals, including producers and laboratory personnel, who have any knowledge of an incident of any of the listed diseases that USDA categorizes as “notifiable” would be required to comply with these reporting requirements; currently no individuals beyond accredited veterinarians are specifically required to report to USDA, according to the list. Agency officials we interviewed told us that expanding the reporting requirement to all knowledgeable individuals closes a reporting gap for disease incidents where no accredited veterinarian examined the animal or conducted the testing. USDA’s draft guidance has not defined or communicated key aspects of its response to emerging diseases, including when the agency would take a lead role, what the agency’s responsibilities would be, and examples of what circumstances may trigger actions such as euthanasia or quarantines. In contrast, USDA has defined and communicated such aspects of its response to foreign animal diseases. For example, USDA’s guidance for responding to foreign animal diseases provides information on roles and responsibilities, the scope of regulatory intervention, the criteria used in the selection of a response strategy, and examples of actions taken under different strategies. In addition, for several foreign animal diseases, USDA has created specific response plans that include examples of different types of responses to different levels of outbreaks. 
According to a senior USDA official who was involved in the drafting of the emerging diseases draft guidance, this guidance is intended to be broadly applicable and not as detailed as the guidance for foreign animal diseases. In addition, the characteristics of each emerging disease could vary dramatically, and creating a decision tree, for example, to show what actions to take could be difficult because of a high number of different potential scenarios. We recognize that defining the response to every emerging disease can be challenging because of the many unknowns. However, in its draft guidance, the agency has not included general information on key aspects of its response to emerging diseases, such as roles and responsibilities of the various involved stakeholders, potential response strategies, and what may trigger different types of actions. This information could facilitate rapid, effective decision making. USDA’s Animal and Plant Health Inspection Service stated in its 2015-2019 Strategic Plan that protecting the health, welfare, and value of America’s agriculture and natural resources requires coordinated and collaborative efforts, and that identifying roles and responsibilities is a key component of successful collaboration. Agency officials we interviewed said they believe that developing additional information on roles and responsibilities, potential response strategies, and what may trigger different types of actions would be feasible and useful. In a summary document that USDA released about its draft guidance on responding to emerging diseases, USDA noted that, for SECD, the options for responding to these diseases and how decisions would be made were not clear. Industry representatives we interviewed said that they did not know how USDA would address the diseases and what role industry would have, which led to concerns about sharing information about SECD incidents with USDA and how USDA would use this information. 
Without more information on USDA’s approach, the representatives said they may not be receptive to USDA taking the lead in addressing future emerging animal diseases. The National Pork Board announced in November 2014 that it would provide $15 million for a swine health information center to better prepare industry for the next emerging swine disease. Absence of a clearly defined agency response to emerging animal diseases is also inconsistent with federal standards for internal control. USDA’s draft guidance primarily lists the agency’s goals in responding to emerging diseases and does little to explain how these goals will be achieved. Under these federal standards, control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives; these activities help ensure that actions are taken to address risks. Appropriate documentation is an example of a control activity. The standards state that internal control needs to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals. Without a clearly defined and documented response to emerging animal diseases, response efforts could be slowed as agency staff and other stakeholders may not be able to quickly identify the appropriate actions to take. The recent outbreaks of SECD have heightened awareness of the need to better prepare for emerging animal diseases. USDA, the states, and the swine industry are making considerable efforts to ensure that, in the future, the response to such diseases will be swift and effective—which can be paramount for preventing or limiting sudden, negative consequences for animal health, economic security, and food security. While much has been accomplished, opportunities remain to improve USDA’s ability to respond to the risks posed by emerging animal diseases. 
In particular, unless USDA clarifies how it intends to respond to such diseases, stakeholders may not be receptive to USDA leadership and agency staff may not know their options for managing future outbreaks or how to decide among these options. Additionally, USDA currently does not have a process in place that, consistent with standards for internal control, would help ensure its guidance for investigation of foreign or emerging animal diseases is followed. Until USDA develops such a process, it cannot have reasonable assurance that the guidance will be followed in future outbreaks. To improve USDA’s ability to respond to and protect against future emerging animal diseases, we recommend that the Secretary of Agriculture direct the Administrator of the Animal and Plant Health Inspection Service to take the following two actions:
Clarify and document how the agency will respond to emerging diseases, including defining key aspects of its response, such as roles and responsibilities, potential response strategies, and what may trigger different types of actions.
Develop a process to help ensure that its guidance for investigation of foreign or emerging animal diseases is followed, such as a process for documentation of the justification and approval of any deviation from the directions.
We provided a draft of this report to USDA for review and comment. USDA provided written comments, which are summarized below and reproduced in appendix II. In its comments, USDA agreed with the intent of our recommendations and described actions or plans to address them. More specifically, to clarify and document how the agency will respond to emerging diseases, USDA noted that its new draft guidance for responding to emerging animal diseases was made available for public comment and stated that it will revise this guidance as needed. 
USDA also stated that, for each of the goals within this guidance, APHIS is developing further direction to clarify roles and responsibilities, potential responses, and possible triggers. To develop a process to help ensure that its guidance for investigation of foreign or emerging diseases is followed, USDA stated that the intended refinement and expansion of the guidance for responding to emerging animal diseases will address when and how emerging diseases may be investigated differently from the procedures in its current investigation guidance. In our draft, we also included a recommendation to develop a process to address deficiencies identified by USDA’s retrospective root cause study or demonstrate that the findings do not warrant management action to reduce the likelihood of entry of future animal diseases into the United States. We have removed this recommendation because, in its written comments, USDA provided new information on actions it has recently taken to address it. Specifically, USDA identified two approaches to mitigate potential risks identified in the study. First, USDA stated that, prior to the release of the study, APHIS consulted with FDA, which has regulatory jurisdiction over feed and feed facilities, to discuss potential regulatory controls under the Food Safety Modernization Act. In particular, USDA noted a rule FDA finalized in September 2015 to implement provisions of this act that requires registered animal food facilities to develop a food safety plan, perform an analysis of hazards associated with the animal food and the facility, and implement measures to control these hazards. USDA stated that these regulatory controls are believed to address the risks identified in the study related to the entry of animal diseases into the United States. We believe that this is a reasonable assessment of the new regulatory controls. 
Second, USDA has communicated findings of the study to stakeholders in the feed and swine industry, who, according to USDA, could mitigate risks prior to exposure of animals. USDA provided us with documentation supporting its statement about meeting recently with FDA and industry, and we verified with a senior swine industry official that USDA presented findings of its study to the swine industry prior to its release. In light of these recent activities, we no longer believe that there is a need for a recommendation to develop a process to address deficiencies identified in the root cause analysis report, and we removed it accordingly. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact John Neumann at (202) 512-3841 or neumannj@gao.gov, or Timothy M. Persons at (202) 512-6412 or personst@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. We identified and reviewed four studies published between March 2013 and August 2015 in peer-reviewed journals that examined factors that may have contributed to the spread of Porcine Epidemic Diarrhea (PED) in the United States. University researchers, swine veterinarians, and pork producers collaborated on these studies. None of the four studies we reviewed found evidence of a definitive cause of PED spread. However, the studies showed that it is plausible for PED viruses to be spread by air, feed, transport vehicles, or people. 
See table 3 below for a summary of the studies. John Neumann, (202) 512-3841 or neumannj@gao.gov. Timothy M. Persons, (202) 512-6412 or personst@gao.gov. In addition to the individual named above, Mary Denigan-Macauley (Assistant Director), Sushil Sharma (Assistant Director), Leslie Ashton, Kevin Bray, Mark Braza, Allen Chan, Barbara El Osta, Cynthia Norris, Dan Royer, Amber Sinclair, and Elaine Vaurio made key contributions to this report.
Pork is consumed more than any other meat worldwide, and there are numerous other products made with ingredients from pigs, including medical products, such as insulin to treat diabetes. The United States is the world's third-largest producer of pork products. USDA estimated that U.S. pork exports in 2014 were valued at over $6 billion. Two lethal, highly contagious diseases in pigs emerged in the United States in 2013 and 2014, causing the deaths of millions of pigs. The two emerging diseases are collectively known as SECD. GAO was asked to review federal actions to address SECD outbreaks. This report examines (1) the initial response to the SECD outbreaks, (2) USDA's subsequent actions to manage SECD, and (3) steps USDA has taken to improve its future response to emerging animal diseases. GAO analyzed USDA efforts to collect data about the number and location of infected herds; reviewed federal regulations and USDA animal disease response guidance; and interviewed USDA, state, and industry stakeholders involved in the response and control efforts. The U.S. Department of Agriculture (USDA) did not take regulatory action during the initial response to the outbreaks of Swine Enteric Coronavirus Diseases (SECD) beginning in May 2013, when an outbreak was first detected, because the agency did not believe then that such action was necessary. Instead, USDA initially supported swine industry-led efforts. Without regulatory action, such as requiring reporting of infected herds, USDA had limited information about the location of the first infected herds. In addition, USDA officials acknowledged that USDA did not follow its guidance that calls for conducting epidemiological investigations at the onset of outbreaks. As a result, USDA did not conduct timely investigations of the premises with the first infected herds, and the source of disease will likely never be determined. Further, USDA does not have a process to help ensure the guidance is followed. 
Without such a process, USDA lacks reasonable assurance that the guidance will be followed in the future. In June 2014, amid concerns about the spread of SECD, USDA issued a federal order requiring reporting of newly infected herds. As a result, USDA has more accurate information about the number and location of such herds, and SECD have been confirmed in 28 U.S. states, as shown below. USDA also provided funding to help manage the diseases. To help improve its future response to SECD and other emerging animal diseases—those not known to exist in the United States or which have changed to become a threat—USDA has drafted new guidance. However, it has not defined key aspects of its response such as roles and responsibilities, which according to its strategic plan, are key components of successful collaboration to protect animal health. Without a clearly defined response to such emerging animal diseases, response efforts could be slowed. GAO recommends that USDA develop a process to help ensure its guidance for investigation of animal diseases is followed and clarify and document how it will respond to emerging diseases, including defining roles and responsibilities. USDA generally agreed with GAO's recommendations.
In performing our evaluation of the accuracy and completeness of DOD’s reported inventory of financial management systems, we
reviewed DOD guidance related to classifying its systems as financial, mixed financial, and nonfinancial;
determined whether systems categorized as nonfinancial contained information needed to produce DOD financial statements and other financial reports;
compared DOD’s financial management systems inventory to other DOD systems inventories to determine whether categories of systems not included in financial management systems inventories and reports were properly categorized as nonfinancial (however, we did not test the accuracy of these inventories);
interviewed appropriate DOD, OMB, and DFAS staff to obtain information regarding the categorizing and reporting of financial management systems;
reviewed federal financial management system guidance and applicable laws, the Joint Financial Management Improvement Program’s (JFMIP) Framework for Federal Financial Management Systems, and OMB Circulars A-123, A-127, and A-130; and
reviewed military service audits to identify systems used to prepare financial statements.
We performed our work from December 1995 to November 1996 in the Washington, D.C., area in accordance with generally accepted government auditing standards. We requested agency comments from the Secretary of Defense or his designee. The Deputy Chief Financial Officer provided us with written comments that are discussed in the “Agency Comments and Our Evaluation” section of this report and reprinted in appendix II. Legislative and other requirements to which DOD is subject recognize the significance of developing a complete financial management systems inventory. 
The intent of these requirements, as indicated in the policy statement found in section 6 of OMB Circular A-127, is to ensure that financial management systems provide complete, reliable, and timely information to enable government entities to carry out their fiduciary responsibilities; deter fraud, waste, and abuse; and facilitate efficient and effective delivery of programs through relating financial consequences to program performance. Further details on federal financial management systems requirements can be found in appendix I. The Chief Financial Officers (CFO) Act of 1990 gives agency CFOs the responsibility for developing and maintaining integrated accounting and financial management systems. In addition, the act requires that the agency CFO provide policy guidance and oversight of agency financial management personnel, activities, and operations, including the implementation of agency asset management systems such as those for property and inventory management. OMB implementing guidance states that the CFO is to approve the design for information systems that provide, at least in part, financial and/or program performance data used in financial statements, solely to ensure that CFO needs are met. In addition, CFOs are required to prepare and annually revise agency plans to implement OMB’s 5-year financial management plan for the federal government. Agency 5-year plans are to include information such as the agency’s strategy for developing and integrating agency accounting, financial information, and other financial management systems. A recent congressional initiative in this area is the Federal Financial Management Improvement Act of 1996, which provides a legislative mandate to implement and maintain financial management systems that substantially comply with federal financial management systems requirements, applicable federal accounting standards, and the U.S. Standard General Ledger. 
The legislative history of the act expressly refers to JFMIP requirements and OMB Circular A-127 as sources of the financial management systems requirements. If the head of an agency determines that the agency’s financial management systems do not comply with the requirements of the act, a remediation plan must be established that includes resources, remedies, and intermediate target dates necessary to bring the agency’s financial management systems into substantial compliance. The act defines financial management systems to include the financial systems and the financial portions of mixed systems necessary to support financial management, including automated and manual processes, procedures, controls, data, hardware, software, and support personnel dedicated to the operation and maintenance of system functions. A mixed system is defined as an information system that supports both financial and nonfinancial functions of the federal government or its components. Additional key financial management systems requirements include the following. JFMIP’s Framework for Federal Financial Management Systems provides a model for the development of an integrated financial management system. Circular A-127 requires that executive agencies develop and maintain an agencywide inventory of financial management systems and ensure that appropriate assessments of these systems are conducted. Circular A-127 applies to financial management systems, which include financial and mixed systems. The circular also requires that agencies establish and maintain a single, integrated financial management system. 
According to the circular, a single, integrated financial management system refers to a unified set of financial systems and the financial portions of mixed systems encompassing the software, hardware, personnel, processes (manual and automated), procedures, controls, and data necessary to carry out financial management functions, manage financial operations of the agency, and report on the agency’s financial status to central agencies, the Congress, and the public. The Paperwork Reduction Act establishes a broad mandate for agencies to perform their information resources management activities in an efficient, effective, and economical manner. Consistent with the act, Circular A-130 states that the head of each agency shall maintain an inventory of the agency’s major information systems. OMB requires that executive agencies, under section 4 of the Federal Managers’ Financial Integrity Act (FMFIA), produce an annual statement on whether their financial management systems conform with governmentwide principles, standards, and requirements. DFAS maintains a DOD inventory of financial systems in the Systems Inventory Data Base (SID). Using SID, DOD reported 249 systems in its fiscal year 1995 annual financial management systems inventory to OMB. However, this does not include many systems that DOD relies on to produce financial management information and reports. A complete inventory is a critical step in DOD’s efforts to correct its long-standing financial systems deficiencies and develop a reliable, integrated financial management system. These deficiencies have been a major factor contributing to DOD’s inability to fulfill its stewardship responsibilities for its resources, including maintaining control over specific assets, such as shipboard supplies and weapons systems, and over its expenditures, such as payroll and contract payments. 
In addition, the DOD Inspector General (IG) recently reported that (1) the overarching deficiency that prevented auditors from rendering audit opinions on fiscal year 1995 DOD general fund financial statements was the lack of adequate accounting systems and (2) disclaimers of opinion can most likely be expected until the next century. The number of reported systems has been limited because both DOD regulations and DFAS guidance did not properly define financial management systems, as required. Although we did not identify all of the systems that should have been included, several of the excluded systems account for billions of dollars of DOD assets and are clearly mixed systems that meet the OMB and JFMIP definition of financial management systems. DOD Financial Management Regulation (DOD 7000.14-R, Volume 1) does not include all mixed systems in its definition of financial management systems, as required. Instead, the regulation states that feeder systems, which it describes as “... the initial record of financial data for processing by accounting systems,” are not within the scope of financial management systems reporting. The regulation provides the following specific examples of feeder systems: (1) logistics and inventory systems that provide acquisition cost, location, and quantity information, (2) personnel systems that provide grade and entitlements information, and (3) timekeeping systems that provide attendance and leave information. DFAS repeats DOD’s limited definition of financial management systems in its annual guidance to Defense components for conducting financial management systems reviews. The feeder systems generally excluded by DOD are typical of systems used to track financial events and are specifically mentioned in the JFMIP Framework document as critical to an integrated financial management system. An integrated system under general ledger control is necessary to provide oversight and control to ensure accurate and complete accounting for DOD’s resources. 
To be truly effective, DOD’s integrated financial management system must link program delivery to the systems that process and track financial events. This linkage is crucial to support the information needs of management, central agencies, and the Congress. Integrated systems help to provide the overall discipline needed to ensure the accuracy and completeness of the information that is used to support DOD’s stewardship responsibilities for the vast resources entrusted to it. Audit reports have disclosed numerous problems resulting from the lack of an integrated financial management system that directly affect the military services’ ability to achieve mission objectives. For example, in our review of the Department of the Navy’s inventory management, we reported that Navy’s item managers could not keep track of the $5.7 billion in operating materials and supplies on board ships and at 17 redistribution sites. The Atlantic and Pacific Fleets and other Navy components are pursuing separate, nonintegrated systems projects in an attempt to improve visibility and thus management of their operating materials and supplies. In another example, at the end of fiscal year 1995, DOD reported that it had inventory valued at almost $70 billion, and we estimate that about half of the inventory includes items that are not needed to support DOD war reserves or current operating requirements. Since 1990, GAO has designated DOD inventory management a high-risk area, with billions of dollars being wasted on excess supplies. The lack of integrated financial management systems and the lack of accurate reliable data to support the quantity, condition, and value of items have been major contributing factors to DOD’s inability to account for and control its inventory. In addition, we previously reported that thousands of soldiers on Army’s payroll could not be matched with Army personnel records, and Army had no assurance that these individuals should have been paid. 
In fact, we found that DFAS paid $6.1 million to 2,279 soldiers who should not have been paid. In response to our recommendation that steps be taken to integrate its payroll and personnel systems, DOD stated that neither the DOD CFO nor the DFAS Director alone had sufficient authority to ensure that specific steps were taken toward the integration or interface of payroll and personnel systems. Since financial transactions are initiated in systems such as acquisition, logistics, and personnel, the DOD Comptroller is a stakeholder in them and has oversight responsibilities in accordance with the CFO Act. Furthermore, these systems are covered under the newly enacted Federal Financial Management Improvement Act of 1996. We believe that the Senior Financial Management Oversight Council could appropriately address issues dealing with systems that have multiple stakeholders that cross departmental boundaries. We compared DOD’s inventory of financial management systems to the systems inventories contained in the Defense Information Systems Agency’s (DISA) Defense Integration Support Tools (DIST) database. As of April 1996, DIST contained 8,624 information systems which were segregated by category. DIST labeled 931 of the systems as financial, 682 more than the 249 systems included in the DOD inventory. While DOD officials have indicated that the DIST listing may be incomplete or systems may be incorrectly identified as supporting financial management, the large discrepancy indicates that additional financial management systems likely exist. Most acquisition, personnel, property, and time and attendance systems were not included in the DIST financial systems category. For example, the DIST database did not identify as financial systems the Defense Civilian Personnel Data System, used for civilian personnel management, or the Civil Engineering Material Acquisition System, an inventory management system. These systems were also excluded from the DOD inventory. 
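An inventory comparison of this kind reduces to a set operation on system names. The sketch below is purely illustrative: the system names are hypothetical placeholders, and `dist_financial` and `sid_inventory` are invented stand-ins for the actual DIST and SID listings, which we did not reproduce here.

```python
# Illustrative sketch: cross-referencing two systems inventories with set
# operations. All system names are hypothetical placeholders, not actual
# DIST or SID records.

# Systems a DIST-like database labels as financial (hypothetical)
dist_financial = {"System A", "System B", "System C", "System D"}

# Systems reported in a SID-like financial management inventory (hypothetical)
sid_inventory = {"System B"}

# Systems labeled financial in the DIST-like list but absent from the
# SID-like inventory -- candidates for review as possible omissions
missing_from_sid = sorted(dist_financial - sid_inventory)
print(missing_from_sid)       # ['System A', 'System C', 'System D']
print(len(missing_from_sid))  # 3
```

The same set-difference logic, applied to DIST’s 931 financial-labeled systems and SID’s 249 reported systems, yields the 682-system discrepancy discussed below.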
We performed a limited search on the entire DIST database for the key words acquisition, personnel, property, and time and attendance and identified 282 systems that contained one or more of these words in their system name and therefore appeared to meet the OMB and JFMIP definitions of financial management systems. However, only 43 of those systems were classified in the DIST as financial, and only 6 were reported in the DOD financial management systems inventory. Several systems that were not included in DOD’s inventory provide critical information for use in formulating the financial statements of the military services. These systems clearly meet the OMB and JFMIP definition for financial management systems. For example, DOD’s list of 249 financial management systems did not include the following key systems, which account for billions of dollars of DOD assets and were identified in recent financial statement audit reports or other financial reporting. Continuing Balance System - Expanded (CBSX). Army uses CBSX to report the year-end value of retail equipment on hand and in transit for active Army and Army Reserve activities. In its fiscal year 1995 financial statements, Army reported about $82.6 billion of equipment on hand and about $500 million of assets in transit. Reliability and Maintainability Information System (REMIS). REMIS is an Air Force system designed to track inventory, status, and utilization of aircraft, as well as compute their value. For fiscal year 1995, Air Force used REMIS to report on over 9,450 aircraft and 4,500 guided and ballistic missiles valued at $144.6 billion. Support Equipment Resources Management Information System (SERMIS). This system is Navy’s automated source of information on naval aviation support equipment assets currently in use. SERMIS maintains financial and management information on support equipment valued at $5.3 billion in fiscal year 1995. Standard Installation/Division Personnel System (SIDPERS). 
SIDPERS is the personnel system operated by Army installation and field commanders for active duty personnel. This system is used to report data to the Total Army Personnel Data Base, which in turn reports five pay events to DFAS for about 493,000 personnel. Army plans to fully implement a version of SIDPERS within the next 2 years that will interface directly with DFAS and account for 88 pay events. Despite their importance to the payroll process, neither SIDPERS nor the Total Army Personnel Data Base has been identified as a financial management system. A comprehensive inventory of the financial management systems used to record, accumulate, classify, and report DOD’s financial management information is a critical step if DOD is to (1) effectively manage its existing systems, (2) prioritize and coordinate efforts to correct its long-standing financial systems deficiencies, and (3) develop a reliable, integrated financial management system. DOD’s severe systems deficiencies have been a major factor contributing to its inability to meet its stewardship responsibilities for the vast resources entrusted to it. Finally, until a complete inventory of financial management systems is developed, DOD will not be able to fulfill the requirements of the financial management improvement initiatives enacted by the Congress. As part of DOD’s long-term systems improvement strategy, we recommend that you direct that
- the Under Secretary of Defense (Comptroller) revise the Department of Defense Financial Management Regulation, DOD 7000.14-R, Volume 1, to include all mixed systems in its definition of financial management systems;
- the Senior Financial Management Oversight Council oversee the development of an inventory of all financial management systems, using the revised definition; and
- systems identified be incorporated in the DOD Chief Financial Officer Financial Management 5-Year Plan, DFAS Chief Financial Officer Financial Management 5-Year Plan, and FMFIA section 4 reporting. 
In written comments on a draft of this report, DOD’s Deputy Chief Financial Officer stated that DOD concurred or partially concurred with all of our recommendations. In response to our first recommendation that DOD revise its financial management regulations to include all mixed systems in its definition of financial management systems, DOD stated that it will use the definition provided in OMB Circular A-127 as a base for the revised definition. DOD further stated that it will also include other relevant statutory and regulatory requirements in the revised definition. We want to reiterate our position that the OMB requirements be fully implemented. Since 1984, OMB’s Circular A-127 and all subsequent guidance have included a definition of financial management systems that includes personnel, property, procurement, and inventory. These are the types of systems that DOD has specifically excluded from its reporting. The most recent guidance, issued by OMB in 1993, classifies these types of systems as mixed systems. Also, the recently enacted Federal Financial Management Improvement Act of 1996 uses the same definitions for financial management systems and mixed systems as OMB Circular A-127. We provided OMB officials with a copy of our draft report, and they concurred with the representations of OMB Circular A-127 requirements included in our report. DOD partially concurred with our recommendation that the DOD Senior Financial Management Oversight Council oversee the development of the financial management systems inventory. DOD agreed that oversight was necessary but stated that the Council was not the appropriate body. Rather, DOD indicated that this responsibility will remain with the Chief Financial Officer and DFAS, with assistance from the DOD components. In light of the serious deficiencies in DOD’s financial management, DOD must address its financial management systems problems immediately. 
In our view, timely resolution of this issue can only be accomplished with the involvement of top-level management throughout the affected components of DOD, such as those responsible for logistics, acquisition, and personnel. We continue to believe that the Council’s oversight, together with participation of the CFO and DFAS, is necessary to ensure that the inventory is completed as soon as possible. We support DOD’s efforts to review the DIST database to determine if any of the systems should be included. For this effort to succeed, DOD must adopt a definition of financial management systems that is consistent with OMB Circular A-127 and the Federal Financial Management Improvement Act. In response to our recommendation that the financial management systems identified be incorporated in the DOD Chief Financial Officer Financial Management 5-Year Plan, DFAS Chief Financial Officer Financial Management 5-Year Plan, and FMFIA section 4 reporting, DOD stated that it has reported and will continue to report on its financial systems. However, until DOD changes its definition of financial management systems in accordance with OMB guidance and the provisions of the Federal Financial Management Improvement Act, its reporting will continue to be incomplete. DOD needs to identify all of the systems it relies on to manage its vast resources as a critical step in its efforts to develop reliable financial management systems and resolve its long-standing financial management problems. Although DOD generally concurred with the report’s recommendations, DOD stated that some of the issues addressed in the report were significantly inaccurate. Specifically, DOD took issue with the report in its treatment of three areas. First, DOD asserted that its System Inventory Database provides a comprehensive inventory of financial management systems. Our report recognizes that DOD has an inventory maintained in the Systems Inventory Database. 
However, our report points out that DOD’s inventory is not a comprehensive inventory of all DOD financial management systems. The report states that the number of reported systems has been limited because both DOD regulations and DFAS guidance did not properly define financial management systems. Most personnel, property, procurement, and inventory systems have been excluded from DOD’s reporting. Our report includes examples of systems not included in DOD’s inventory that account for billions of dollars of DOD assets. These systems meet OMB’s definition of a mixed system and must be included in a comprehensive inventory if DOD is to develop a reliable, integrated financial management system. Second, DOD stated that DIST is not a database that is used for baselining financial systems and that the database does not contain a process or procedure for classifying or certifying systems as financial systems. As stated in the report, although the DIST listing may be incomplete or systems may be incorrectly identified as supporting financial management, the large disparity between the number of systems identified in DIST and SID indicates that additional financial management systems likely exist. Third, DOD stated that it is complying with the provisions of OMB Circular A-127. Our report recognizes that DOD has an established process intended to meet OMB requirements. However, because DOD has not adopted the OMB definition for financial management systems, its inventory and reporting have not been comprehensive. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the Director of the Office of Management and Budget. We are also sending copies to the Secretary of Defense and the Under Secretary of Defense (Comptroller). 
Copies will also be made available to others upon request. Please contact me at (202) 512-9095 if you have any questions on this report. Major contributors to this report are listed in appendix III. The following are the specific requirements for financial management systems to which DOD, as an executive agency, is subject. The CFO Act specifies that the responsibilities of an agency CFO are to include developing and maintaining integrated accounting and financial management systems; directing, managing, and providing policy guidance and oversight of all agency financial management personnel, activities, and operations; and approving and managing financial management systems design and enhancement projects. On February 27, 1991, OMB issued guidance (M-91-07) for preparing organization plans required by the CFO Act. The guidance details the authorities, functions, and responsibilities that a CFO is to have to comply with the act. Specifically, the guidance states that the organization plans should provide the CFO with authority to manage directly and/or monitor, evaluate, and approve the design, budget, development, implementation, operation, and enhancement of agencywide and agency component accounting, financial, and asset management systems; to clear the design for other information systems that provide, at least in part, financial and/or program performance data used in financial statements, solely to ensure that CFO needs are met; to ensure that program information systems provide financial and programmatic data (including program performance measures) on a reliable, consistent, and timely basis to agency financial management systems; and to evaluate, where appropriate, the installation and operation of such systems. 
In addition, the CFO Act requires OMB to prepare annually and submit to the Congress a governmentwide 5-year financial management plan that describes planned OMB and agency activities for the next 5 fiscal years to improve the financial management of the federal government. Further, the act requires agency CFOs to prepare and annually revise agency plans to implement OMB’s 5-year plan. Each 5-year plan is to include information such as the following: a description of the existing financial management structure and any changes needed to establish an integrated financial management system; a strategy for developing and integrating individual agency accounting, financial information, and other financial management systems; and proposals to eliminate duplicate and other unnecessary systems and projects to bring existing systems into compliance with applicable standards and requirements. DOD is subject to section 4 of FMFIA; OMB requires executive agencies under section 4 to produce an annual statement on whether their financial management systems conform with governmentwide principles, standards, and requirements. Governmentwide systems requirements, developed by OMB in consultation with the Comptroller General, are presented in section 7 of OMB Circular A-127, “Financial Management Systems Requirements.” Circular A-127 requires that executive agencies, including DOD, develop and maintain an agencywide inventory of financial management systems and ensure that appropriate assessments are conducted of these systems. In addition, DOD must consider the results of its FMFIA reviews when it develops its financial management systems plans. Requirements for reporting the results of FMFIA section 4 systems assessments are found in OMB Circular A-123, “Management Accountability and Control.” Executive agencies, including DOD, must produce an annual statement on whether the agency’s financial management systems conform with governmentwide requirements found in Circular A-127. 
If the agency does not conform with these requirements, the statement must discuss the agency’s plans for bringing its systems into compliance. If the agency head judges any financial management systems weakness to be material, the issue must be included in the annual FMFIA report. The FMFIA report is to be transmitted to the President, the President of the Senate, the Speaker of the House of Representatives, the Director of OMB, and key congressional committees and subcommittees. Circular A-127 applies to financial management systems, which include financial and mixed systems. In determining which systems are subject to these requirements, the Circular categorizes and defines information systems in the following manner. A financial system (1) collects, processes, maintains, transmits, and reports data about financial events, (2) supports financial planning or budgeting activities, (3) accumulates and reports cost information, or (4) supports the preparation of financial statements. A mixed system supports both financial and nonfinancial functions. A nonfinancial system supports nonfinancial functions and any financial data included in the system are insignificant to agency financial management and/or not required for the preparation of financial statements. Circular A-127 also requires that DOD establish and maintain a single, integrated financial management system. According to the Circular, a single, integrated financial management system refers to a unified set of financial systems and the financial portions of mixed systems encompassing the software, hardware, personnel, processes (manual and automated), procedures, controls, and data necessary to carry out financial management functions, manage financial operations of the agency, and report on the agency’s financial status to central agencies, the Congress, and the public. 
Unified means that the systems are planned for and managed together, operated in an integrated fashion, and linked together electronically to provide the agencywide financial system support necessary to carry out the agency’s mission and support the agency’s financial management needs. In addition, Circular A-130 provides governmentwide information resources management policies as required by the Paperwork Reduction Act of 1980, as amended. The Paperwork Reduction Act establishes a broad mandate for agencies to perform their information resources management activities in an efficient, effective, and economical manner. Consistent with the act, Circular A-130 states that the head of each agency shall maintain an inventory of the agency’s major information systems. Developing and maintaining a complete inventory of DOD’s information resources is essential to implementing a strategic information resources management process, as required by the Paperwork Reduction Act and the recently enacted Clinger-Cohen Act of 1996. The Clinger-Cohen Act calls for agency heads, under the supervision of OMB’s Director, to design and implement a process for maximizing the value and assessing and managing the risks of their information technology acquisitions, including establishing minimum criteria on whether to undertake an investment in information systems. This process is to be integrated with the processes for making budget, financial, and program management decisions within the agency. In addition, the act states that the head of each executive agency, in consultation with the Chief Information Officer and the Chief Financial Officer, is responsible for establishing policies and procedures that ensure that the accounting, financial, and asset management systems and other information systems of the agency are designed, developed, maintained, and used effectively to provide financial or program performance data for financial statements. 
JFMIP’s Framework for Federal Financial Management Systems provides a model for the development of an integrated financial management system. This document points out the importance of financial management systems in the overall effort to improve government. “These systems should not only support the basic accounting functions for accurately recording and reporting financial transactions but must also be the vehicle for integrated budget, financial, and performance information that managers use to make decisions on their programs....Without meaningful financial information and supporting systems, neither the President, the Congress, nor the program managers can effectively carry out their stewardship responsibilities.” According to the Framework, an integrated system includes, among others, the following financial management system types: a core financial system that supports general ledger management, funds management, payment management, receipt management, and cost management; a personnel/payroll system; an inventory system; a property management system; an acquisition system; a budget formulation system; and a managerial cost accounting system. To function as a single, integrated system, the types of systems listed above must have these physical characteristics: common data elements, common transaction processing, consistent internal controls, and efficient transaction entry. The following are GAO’s comments on the Department of Defense’s letter dated January 24, 1997. 1. See the “Agency Comments and Our Evaluation” section of this report. 2. In a follow-up discussion on DOD’s statement that the additional 682 DIST systems have failed to satisfactorily complete the required process and qualify as legitimate financial management systems, DOD officials stated that these systems have not yet undergone the required process. 
As stated in DOD’s response to our second recommendation, DOD plans to review the DIST database to determine if any of these systems should be included in its SID. 3. In a January 10, 1997, discussion on a draft of this report, DOD officials stated that the Department defines a mixed system as an integrated system that performs both financial and program functions. For example, they told us that the Marine Corps Total Force System meets DOD’s definition because it is an integrated payroll and personnel system that shares the same database for both functions. In our discussions, DOD officials indicated that a personnel system that provided data to a separate payroll system with which it was not physically integrated would not meet its definition of a mixed system and therefore would be excluded from its inventory of financial management systems. However, neither Circular A-127 nor the Federal Financial Management Improvement Act uses integration as a criterion in its definition of financial management systems. Lynn Filla Clark, Senior Evaluator; Neal Gottlieb, Senior Evaluator; Stewart Seman, Senior Evaluator; Lenny Moore, Evaluator. 
Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) financial management systems, focusing on the accuracy and completeness of DOD's inventory of financial management systems. GAO found that: (1) DOD does not have a comprehensive inventory of the systems it relies on to record, accumulate, classify, and report financial information; (2) the number of systems included in DOD's inventory was limited because the regulations and guidance from the Defense Finance and Accounting Service (DFAS) did not properly define financial management systems; (3) Office of Management and Budget Circular A-127, Joint Financial Management Improvement Program system requirements, and the recently enacted Federal Financial Management Improvement Act of 1996 define financial management systems to include the financial systems and the financial portions of mixed systems necessary to support financial management; (4) a mixed system is defined as an information system that supports both financial and nonfinancial functions of the federal government or its components; (5) DOD considers mixed systems that are generally not within the Chief Financial Officer (Comptroller) organization, such as acquisition, logistics, and personnel systems, to be nonfinancial and, therefore, does not include them in its inventory; and (6) although GAO did not identify all of the systems that should have been included, several of the excluded systems account for billions of dollars of assets and clearly meet the required definition of financial management systems.
VA pays monthly disability compensation benefits to veterans with service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty) according to the severity of the disability. VA also pays compensation to some spouses, children, and parents of deceased veterans and service members. VA’s pension program pays monthly benefits based on financial need to certain wartime veterans or their survivors. When a veteran submits a claim to any of the Veterans Benefits Administration’s 57 regional offices, a veterans service representative is responsible for obtaining the relevant evidence to evaluate the claim. Such evidence includes veterans’ military service records, medical examinations, and treatment records from VA medical facilities and private medical service providers. Once a claim has all the necessary evidence, a rating specialist evaluates the claim and determines whether the claimant is eligible for benefits. If the veteran is eligible for disability compensation, the rating specialist assigns a percentage rating based on degree of disability. A veteran who disagrees with the regional office’s decision can appeal to VA’s Board of Veterans’ Appeals and then to U.S. federal courts. If the Board finds that a case needs additional work, such as obtaining additional evidence, or that it contains procedural errors, the Board sends the case back to the Veterans Benefits Administration, which is responsible for initial decisions on disability claims. In November 2003, the Congress established the Veterans’ Disability Benefits Commission to study the appropriateness of VA disability benefits, including disability criteria and benefit levels. The commission is scheduled to report to the Congress by October 1, 2007. VA continues to experience significant service delivery challenges, including lengthy processing times and inaccurate and inconsistent decisions.
While VA made progress in fiscal years 2002 and 2003 in reducing the size and age of its pending claims inventory, it has lost ground since then. This is due in part to increased filing of claims, including those filed by veterans of the Iraq and Afghanistan conflicts. Moreover, questions remain about the consistency of VA’s decisions across regional offices and at the Board of Veterans’ Appeals. VA’s inventory of pending claims and their average time pending have increased significantly in the last 3 years. The number of pending claims increased by almost one-half from the end of fiscal year 2003 to the end of fiscal year 2006, from about 254,000 to about 378,000. During the same period, the number of claims pending longer than 6 months increased by more than three-fourths, from about 47,000 to about 83,000 (see fig. 1). Similarly, as shown in figure 2, VA reduced the average age of its pending claims from 182 days at the end of fiscal year 2001 to 111 days at the end of fiscal year 2003. However, by the end of fiscal year 2006 average days pending had increased to 127 days. Meanwhile, the time required to resolve appeals remains lengthy: the average time to resolve an appeal rose from 529 days in fiscal year 2004 to 657 days in fiscal year 2006. The increase in VA’s inventory of pending claims and in their average time pending is due in part to an increase in claims receipts. Rating-related claims, including those filed by veterans of the Iraq and Afghanistan conflicts, increased steadily from about 579,000 in fiscal year 2000 to about 806,000 in fiscal year 2006, an increase of about 39 percent. In addition to problems with deciding claims in a timely manner, VA acknowledges that regional office decision accuracy needs further improvement. VA reports that it has improved the accuracy of decisions on rating-related compensation claims from 80 percent in fiscal year 2002 to 88 percent in fiscal year 2006.
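The workload figures cited above can be double-checked with quick arithmetic. The sketch below simply recomputes the growth rates from the report’s own fiscal-year totals:

```python
# Recompute the growth rates from the fiscal year totals cited in the text.
pending_fy03, pending_fy06 = 254_000, 378_000   # pending claims inventory
over6mo_fy03, over6mo_fy06 = 47_000, 83_000     # claims pending longer than 6 months

pending_growth = (pending_fy06 - pending_fy03) / pending_fy03
over6mo_growth = (over6mo_fy06 - over6mo_fy03) / over6mo_fy03

print(f"pending inventory grew {pending_growth:.0%}")     # 49%, "almost one-half"
print(f"claims over 6 months grew {over6mo_growth:.0%}")  # 77%, "more than three-fourths"
```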
However, this figure remains well short of its strategic goal of 98 percent. VA also continues to face questions about its ability to ensure that veterans receive consistent decisions across regional offices. We have identified the need for VA to systematically address this issue to achieve acceptable levels of variation. VA’s Inspector General has studied one indicator of possible inconsistency, the wide variations in average payments per veteran from state to state. In May 2005, the Inspector General reported that compensation payments are affected by many factors and that some disabilities are inherently more susceptible to variations in rating determinations. Further, we reported in May 2005 that the Board of Veterans’ Appeals had taken actions to strengthen its system for reviewing the quality of its decisions, but VA still lacked a systematic method for ensuring the consistency of decision making within VA as a whole. VA has recently taken several steps to improve service delivery, but their potential to lead to significant improvements may be limited by several factors. These steps include requesting funding for additional staff, initiatives to reduce appeal remands, and initiatives to assess and monitor decision consistency. However, limitations on potential improvements include increases in claims volume and complexity, and challenges in acquiring needed evidence in a timely manner. In its fiscal year 2008 budget justification, VA identified an increase in claims processing staff as essential to reducing the pending claims inventory and improving timeliness. According to VA, with a workforce that is sufficiently large and correctly balanced, it can successfully meet the veterans’ needs while ensuring good stewardship of taxpayer funds. The fiscal year 2008 request would fund 8,320 full-time equivalent employees working on compensation and pension, which would represent an increase of about 6 percent over fiscal year 2006. 
In addition, the budget justification cites near-term initiatives to increase the number of claims completed, such as using retired VA employees to provide training, and the increased use of overtime. Even as staffing levels increase, however, VA acknowledges that it still must take other actions to improve productivity. VA’s budget justification provides information on actual and planned productivity, in terms of claims decided per full-time equivalent employee. While VA expects a temporary decline in productivity as new staff are trained and become more experienced, it expects productivity to increase in the longer term. Also, VA has identified additional initiatives to help improve productivity. For example, VA plans to pilot paperless Benefits Delivery at Discharge, where service members’ disability claim applications, service medical records, and other evidence would be captured electronically prior to discharge. VA expects that this new process will reduce the time needed to obtain the evidence needed to decide claims. To resolve appeals faster, VA has been working to reduce the number of appeals sent back by the Board of Veterans’ Appeals for further work such as obtaining additional evidence and correcting procedural errors. To do so, VA has established joint training and information sharing between field staff and the Board. VA reports that it has reduced the percentage of decisions remanded from about 57 percent in fiscal year 2004 to about 32 percent in fiscal year 2006, and expects its efforts to lead to further reductions. Also, VA reports that it has improved the productivity of the Board’s judges from an average of 604 appeals decided in fiscal year 2003 to 698 in fiscal year 2006. The Board attributes this improvement to training and mentoring programs and expects productivity to improve to 752 decisions in fiscal year 2008. 
To improve decision consistency, VA has contracted for a study of the major influences on compensation payments, to develop baseline data for monitoring and managing decision variances. Also, VA is in the process of testing templates for compensation and pension medical examinations for specific types of disabilities to ensure that medical evidence from these examinations will enable consistent evaluations of disabilities. Further, VA formed a workgroup to study variances in the rates of benefit grants and denials, and in assigned disability evaluations, leading to development of plans to monitor consistency on an ongoing basis. Despite these efforts, VA may be limited in its ability to make and sustain significant claims processing performance improvements. Recent history has shown that VA’s claims processing workload and performance are affected by several factors, including the impacts of laws and court decisions, increasing numbers and complexity of claims, and difficulties in obtaining accurate and timely information to adjudicate claims. Since 1999, several court decisions and laws related to VA’s responsibilities to assist veterans in developing their benefit claims have significantly affected VA’s ability to process claims in a timely manner. VA attributes some of the increase in the number of claims pending and the average days pending to a September 2003 court decision that required over 62,000 claims to be deferred, many for 90 days or longer. Also, VA notes that legislation and VA regulations have expanded benefit entitlement and added to the volume of claims. For example, in recent years, laws and regulations have created new presumptions of service-connected disabilities for many Vietnam veterans and former prisoners of war. Also, VA expects additional claims receipts based on the enactment of legislation allowing certain military retirees to receive both military retirement pay and VA disability compensation. 
In addition, rating-related claims continue to increase, from about 579,000 in fiscal year 2000 to about 806,000 in fiscal year 2006, an increase of about 39 percent. While VA projects relatively flat claim receipts in fiscal years 2007 and 2008, it cautions that ongoing hostilities in Iraq and Afghanistan, and the Global War on Terrorism in general, may increase the workload beyond current levels. VA has also noted that claims have increased in part because older veterans are filing disability claims for the first time. Moreover, according to VA, the complexity of claims is also increasing. For example, some veterans are citing more disabilities in their claims than in the past. Because each disability must be evaluated separately, these claims can take longer to complete. Additionally, VA notes that it is receiving more disability claims, such as those related to mental health issues including post-traumatic stress disorder, that are generally harder to evaluate. Claims processing timeliness and decisional accuracy can also be hampered if VA cannot obtain the evidence it needs in a timely manner. For example, to obtain information needed to fully develop some post-traumatic stress disorder claims, VBA must obtain records from the U.S. Army and Joint Services Records Research Center (JSRRC), whose average response time to VBA regional office requests is about 1 year. This can significantly increase the time it takes to decide a claim. In December 2006, we recommended that VBA assess whether it could systematically utilize an electronic library of historical military records rather than submitting all research requests to the JSRRC. VBA agreed to determine the feasibility of regional offices using an alternative resource prior to sending some requests to the JSRRC.
We also reported that while VBA quality reviewers found few decision errors due to failure to obtain military service records, VBA does not know the extent to which the information that is provided to regional offices is reliable and accurate. Regional offices rely on a VBA unit at the National Personnel Records Center, where the service records of many veterans are stored, to do thorough and reliable searches and analyses of records and provide accurate reports on the results. However, we noted that VBA does not systematically evaluate the quality of these searches and analyses. Incomplete and inaccurate reports could affect decisional accuracy. While VA is taking actions to address its claims processing challenges, there are opportunities for more fundamental reform that could dramatically improve decision making and processing. These include reexamining program design, as well as the structure and division of labor among field offices. After more than a decade of research, we have determined that federal disability programs are in urgent need of attention and transformation, and we placed modernizing federal disability programs on our high-risk list in January 2003. Specifically, our research showed that the disability programs administered by VA and the Social Security Administration (SSA) lagged behind the scientific advances and economic and social changes that have redefined the relationship between impairments and work. For example, advances in medicine and technology have reduced the severity of some medical conditions and have allowed individuals to live with greater independence and function in work settings. Moreover, the nature of work has changed in recent decades as the national economy has moved away from manufacturing-based jobs to service- and knowledge-based employment.
Yet VA’s and SSA’s disability programs remain mired in concepts from the past—particularly the concept that impairment equates to an inability to work—and as such, we found that these programs are poorly positioned to provide meaningful and timely support for Americans with disabilities. In August 2002, we recommended that VA use its annual performance plan to delineate strategies for and progress in periodically updating labor market data used in its disability determination process. We also recommended that VA study and report to the Congress on the effects that a comprehensive consideration of medical treatment and assistive technologies would have on its disability programs’ eligibility criteria and benefits package. This study would include estimates of the effects on the size, cost, and management of VA’s disability programs and other relevant VA programs and would identify any legislative actions needed to initiate and fund such changes. Another area of program design that could be examined is the option of providing a lump sum payment in lieu of monthly disability compensation. In 1996, the Veterans’ Claims Adjudication Commission noted that most disability compensation claims are repeat claims—such as claims for increased disability percentage—and most repeat claims were from veterans with less severe disabilities. According to VA, about 65 percent of veterans who began receiving disability compensation in fiscal year 2003 had disabilities rated 30 percent or less. The commission questioned whether concentrating claims processing resources on these claims, rather than on claims by more severely disabled veterans, was consistent with program intent. The commission asked Congress to consider paying less severely disabled veterans compensation in a lump sum. According to the commission, the lump sum option could have a number of benefits for VA as well as veterans. 
Specifically, the lump sum option could reduce the number of claims submitted and allow VA to process claims more quickly—especially those of more seriously disabled veterans. Moreover, a lump sum option could be more useful to some veterans as they make the transition from military to civilian life. In December 2000, we reported that about one-third of newly compensated veterans could be interested in a lump sum option. In addition to program design, VA’s regional office claims processing structure may be disadvantageous to efficient operations. VBA and others who have studied claims processing have suggested that consolidating claims processing into fewer regional offices could help improve claims processing efficiency, save overhead costs, and improve decisional accuracy and consistency. We noted in December 2005 that VA had made piecemeal changes to its claims processing field structure. VA consolidated some of its pension income and eligibility verifications at three regional offices. Further, VA consolidated decision making on Benefits Delivery at Discharge claims, which are generally original claims for disability compensation, at the Salt Lake City and Winston-Salem regional offices. However, VA has not changed its basic field structure for processing compensation and pension claims at 57 regional offices, which exhibit large performance variations and face questions about decision consistency. Unless more comprehensive and strategic changes are made to its field structure, VBA is likely to miss opportunities to substantially improve productivity, accuracy, and consistency, especially in the face of future workload increases. We have recommended that VA undertake a comprehensive review of its field structure for processing disability compensation and pension claims.
While reexamining claims processing challenges may be daunting, there are mechanisms for undertaking such an effort, including the congressionally chartered commission currently studying veterans’ disability benefits. In November 2003, the Congress established the Veterans’ Disability Benefits Commission to study the appropriateness of VA disability benefits, including disability criteria and benefit levels. The commission is to examine and provide recommendations on (1) the appropriateness of the benefits, (2) the appropriateness of the benefit amounts, and (3) the appropriate standard or standards for determining whether a disability or death of a veteran should be compensated. The commission held its first public hearing in May 2005 and, in October 2005, established 31 research questions for study. These questions address such issues as how well disability benefits meet the congressional intent of replacing average impairment in earnings capacity, whether lump sum payments should be made for certain disabilities or level of severity of disability, and how VA’s claims processing operation compares to other disability programs, including the location and number of processing centers. These issues and others have been raised by previous studies of VBA’s disability claims process. The commission is scheduled to report to the Congress by October 1, 2007. Mr. Chairman, this concludes my remarks. I would be happy to answer any questions that you or other members of the committee may have. For further information, please contact Daniel Bertoni at (202) 512-7215 or Bertonid@gao.gov. Also contributing to this statement were Shelia Drake, Martin Scire, Greg Whitney, and Charles Willson.

Related GAO Products

High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 31, 2007.
Veterans’ Disability Benefits: VA Can Improve Its Procedures for Obtaining Military Service Records. GAO-07-98. Washington, D.C.: December 12, 2006.
Veterans’ Benefits: Further Changes in VBA’s Field Office Structure Could Help Improve Disability Claims Processing. GAO-06-149. Washington, D.C.: December 9, 2005.
Veterans’ Disability Benefits: Claims Processing Challenges and Opportunities for Improvements. GAO-06-283T. Washington, D.C.: December 7, 2005.
Veterans’ Disability Benefits: Improved Transparency Needed to Facilitate Oversight of VBA’s Compensation and Pension Staffing Levels. GAO-06-225T. Washington, D.C.: November 3, 2005.
VA Benefits: Other Programs May Provide Lessons for Improving Individual Unemployability Assessments. GAO-06-207T. Washington, D.C.: October 27, 2005.
Veterans’ Disability Benefits: Claims Processing Problems Persist and Major Performance Improvements May Be Difficult. GAO-05-749T. Washington, D.C.: May 26, 2005.
VA Disability Benefits: Board of Veterans’ Appeals Has Made Improvements in Quality Assurance, but Challenges Remain for VA in Assuring Consistency. GAO-05-655T. Washington, D.C.: May 5, 2005.
Veterans Benefits: VA Needs Plan for Assessing Consistency of Decisions. GAO-05-99. Washington, D.C.: November 19, 2004.
Veterans’ Benefits: More Transparency Needed to Improve Oversight of VBA’s Compensation and Pension Staffing Levels. GAO-05-47. Washington, D.C.: November 15, 2004.
Veterans’ Benefits: Improvements Needed in the Reporting and Use of Data on the Accuracy of Disability Claims Decisions. GAO-03-1045. Washington, D.C.: September 30, 2003.
Department of Veterans Affairs: Key Management Challenges in Health and Disability Programs. GAO-03-756T. Washington, D.C.: May 8, 2003.
Veterans Benefits Administration: Better Collection and Analysis of Attrition Data Needed to Enhance Workforce Planning. GAO-03-491. Washington, D.C.: April 28, 2003.
Veterans’ Benefits: Claims Processing Timeliness Performance Measures Could Be Improved. GAO-03-282. Washington, D.C.: December 19, 2002.
Veterans’ Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002.
Veterans’ Benefits: VBA’s Efforts to Implement the Veterans Claims Assistance Act Need Further Monitoring. GAO-02-412. Washington, D.C.: July 1, 2002.
Veterans’ Benefits: Despite Recent Improvements, Meeting Claims Processing Goals Will Be Challenging. GAO-02-645T. Washington, D.C.: April 26, 2002.
Veterans Benefits Administration: Problems and Challenges Facing Disability Claims Processing. GAO/T-HEHS/AIMD-00-146. Washington, D.C.: May 18, 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Senate Veterans' Affairs Committee asked GAO to discuss its recent work related to the Department of Veterans Affairs' (VA) disability claims and appeals processing. GAO has reported and testified on this subject on numerous occasions. GAO's work has addressed VA's efforts to improve the timeliness and accuracy of decisions on claims and appeals, VA's efforts to reduce backlogs, and concerns about decisional consistency. VA continues to face challenges in improving service delivery to veterans, specifically in speeding up the process of adjudication and appeal, reducing the existing backlog of claims, and improving the accuracy and consistency of decisions. For example, as of the end of fiscal year 2006, rating-related compensation claims were pending an average of 127 days, 16 days more than at the end of fiscal year 2003. During the same period, the inventory of rating-related claims grew by almost half, due in part to increased filing of claims, including those filed by veterans of the Iraq and Afghanistan conflicts. Meanwhile, appeals resolution remains a lengthy process, taking an average of 657 days in fiscal year 2006. Further, we and VA's Inspector General have identified concerns about the consistency of decisions by VA's regional offices and the Board of Veterans' Appeals (BVA). VA is taking steps to address these problems. For example, the President's fiscal year 2008 budget requests an increase of over 450 full-time equivalent employees to process compensation claims. VA is working to improve appeals timeliness by reducing appeals remanded for further work. VA is also developing a plan to monitor consistency across regional offices. 
However, several factors may limit VA's ability to make and sustain significant improvements in its claims processing performance, including the potential impacts of laws and court decisions, continued increases in the number and complexity of claims being filed, and difficulties in obtaining the evidence needed to decide claims in a timely and accurate manner, such as military service records. Opportunities for significant performance improvement may lie in more fundamental reform of VA's disability compensation program. This could include reexamining program design such as updating the disability criteria to reflect the current state of science, medicine, technology, and labor market conditions. It could also include examining the structure and division of labor among field offices.
Federal agencies, including DOD, are responsible for ensuring that they use appropriated funds only for purposes, and within the amounts, authorized by the Congress. DOD Directive 7200.1, May 4, 1995, states the policy that DOD organizations are to establish positive control of, and maintain adequate systems of accounting for, appropriations and other funds. The Directive also states that financial management systems are to provide a capability for DOD officials to be assured of the availability of funds before incurring an obligation or making a payment. To comply with legal and regulatory requirements, DOD organizations’ accounting and fund control systems must be able to accurately record disbursements as expenditures of appropriations and as reductions of previously recorded obligations. Proper matching of disbursements with related obligations ensures that the agency has reliable information on the amount of funds available for obligation and expenditure. Problem disbursements occur when (1) the wrong appropriation account or customer is charged when a payment is made, (2) information on an obligation, payment, or collection transaction is inaccurately or incompletely processed, or (3) a contractor is paid too much. In October 1994, we reported that DOD’s records included at least $24.8 billion of such problem disbursements as of June 30, 1994, and that long-standing systemic control weaknesses were keeping DOD from solving its disbursement process problems. We also pointed out that persistent management emphasis was essential to resolving the problem. Specifically, we recommended that DOD management undertake long-term efforts, such as correcting system weaknesses involving the contract payment and accounting systems, and pursue short-term efforts to improve the quality of information in its systems. 
These short-term actions could be as simple as complying with existing guidance and procedural requirements for (1) recording obligations prior to making contract payments, (2) detecting and correcting errors in the disbursement process, and (3) posting accurate and complete accounting information in systems that support the disbursement processes. We also previously reported that since we did not audit the $24.8 billion problem disbursement figure, DOD’s total problem disbursements could be greater. Acting on our recommendations, DOD subsequently determined that its records contained at least $37.8 billion of problem disbursements as of June 30, 1994. As of January 31, 1996, DOD reported that it had reduced the $37.8 billion of problem disbursement balances to $25.4 billion. Also concerned about DOD’s problem disbursements, the Congress passed section 8137 of Public Law 103-335, to improve accountability over DOD disbursements. The law directed the Secretary of Defense to require that each disbursement in excess of $5 million be matched to a particular obligation before the disbursement is made. This requirement had to be implemented by July 1, 1995. The legislation further required that the Secretary of Defense lower the dollar threshold for matching disbursements and obligations to $1 million no later than October 1, 1995. Subsequently, section 8102 of Public Law 104-61, the Department of Defense Appropriations Act, 1996, superseded the earlier legislation and eliminated the requirement that the threshold be lowered to $1 million. However, section 8102(d), like section 8137(e) of the earlier legislation, provided that the Secretary of Defense could establish a threshold lower than the statutory threshold. In addition, the legislation directed the Secretary to ensure that a disbursement in excess of the threshold amounts not be divided into multiple disbursements to avoid prematching requirements. 
It also required (1) DOD to develop and submit an implementation plan to the Congress and (2) the DOD Inspector General to review the plan and submit an independent assessment to the congressional defense committees. On February 28, 1995, DOD submitted its plan—which was a general overview plan describing processes and milestones for automating the prevalidation process and lowering the prevalidation threshold to $1 million—to the Congress, and the DOD IG provided the defense congressional committees with its independent assessment, which generally agreed with the plan and DOD’s overall approach for implementation. Our objectives were to (1) assess DOD’s progress in reducing problem disbursements and (2) review DOD’s implementation of the requirement in section 8137 of Public Law 103-335 and section 8102 of Public Law 104-61 that DOD match disbursements over $5 million with obligations in the official accounting records prior to making payments. This review was a joint effort between the DOD IG and GAO. The DOD IG was generally responsible for completing the field work at Army and Navy activities and supporting locations while GAO was generally responsible for completing the field work at Air Force and Marine Corps activities and supporting locations. We combined our efforts to complete work at other DOD locations visited during the review. In conducting our review, we focused primarily on the DFAS Columbus Center because it is DOD’s largest contract paying activity. For example, during fiscal year 1995, DOD paid contractors and vendors $160 billion. Of this amount, $61 billion, or 38 percent, was paid by DFAS Columbus. We conducted our review between June 1995 and April 1996 in accordance with generally accepted government auditing standards. Appendix I contains further details of our scope and methodology. We requested comments from the Secretary of Defense or his designee. 
On May 23, 1996, officials of the Office of the Secretary of Defense (Comptroller) and DFAS, who are responsible for DOD disbursements, provided us with oral comments. Their comments have been incorporated where appropriate and are discussed in the “Agency Comments” section. Using the June 1994 problem disbursement balance of $37.8 billion as a baseline, DOD began to report reductions in problem disbursement balances, reaching a low in September 1995 of $23.1 billion. Between September 1995 and January 1996, DOD’s reported problem disbursement balances fluctuated between $23.1 billion and $26.1 billion as shown in table 1. According to the leader of the DOD team established to address problem disbursements, the problem disbursements have increased since September 1995 because the inflow of new problem balances continues to offset any gains made by correcting existing balances. As table 2 shows, the inflow of problem disbursements between October 1995 and January 1996 eclipsed the value of problem disbursements that were resolved by $2.3 billion. Although DOD did not have data readily available to show how much of the $21.8 billion of the new problem disbursements was caused by DFAS Columbus, DOD officials acknowledged that tens of thousands of transactions, totaling billions of dollars, were attributable to disbursements made by the Columbus Center. The team leader also told us that the inflow of new problem disbursements has not slowed down because the same long-standing weaknesses regarding system problems and failure to comply with basic accounting procedures, which we previously reported in 1994, generally still exist. For example, he stated that the lack of integrated accounting and disbursing systems was one of the primary causes of disbursement problems. The lack of integrated systems resulted in data entry errors because the same data had to be manually entered into two or more systems. 
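The reported balances can be reconciled with back-of-the-envelope arithmetic: the September 1995 low plus the net inflow reported for October 1995 through January 1996 reproduces the January 1996 balance cited earlier in this report. In the sketch below, only the "resolved" amount is derived rather than reported:

```python
# All amounts in billions of dollars, taken from the report's figures.
sep_1995_balance = 23.1   # reported low, September 1995
new_problems = 21.8       # new problem disbursements, Oct 1995 - Jan 1996
net_inflow = 2.3          # amount by which new problems exceeded resolutions

resolved = new_problems - net_inflow            # derived: about $19.5 billion resolved
jan_1996_balance = sep_1995_balance + net_inflow

print(f"resolved: {resolved:.1f}")                       # 19.5
print(f"January 1996 balance: {jan_1996_balance:.1f}")   # 25.4, the reported figure
```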
The DOD IG also pointed out in an August 1995 report that Army and Air Force accounting personnel were not complying with accounting regulations and procedures for documenting, validating, reconciling, and reporting transactions that affect obligations. For example, the IG noted that (1) accounting personnel were arbitrarily posting payments to any available unliquidated contract obligation and (2) much of the disbursement information received from the DFAS Columbus Center was not accurate and did not include sufficient information to record payments. The IG noted that such failures to comply with accounting policies and procedures resulted in disbursement problems that, in turn, prevented auditors from rendering audit opinions, other than disclaimers, on the Army’s and Air Force’s financial statements. The DOD team leader also told us that DOD is starting to have difficulties in reducing the older problem disbursement balances already included in its accounting records. For example, between October 1995 and January 1996, DOD reports showed that problem disbursements over 180 days old had increased from $12.9 billion to $14.1 billion. According to the team leader, over time, DOD activities have selected the easier problem disbursement transactions for review. Consequently, the remaining older, unresolved problem disbursements balances represent some of the more difficult balances to reconcile. We are currently reviewing DOD problem disbursements to identify the specific root causes for problematic transactions. Fundamental accounting controls require that the proper funds available for a payment are identified before the payment is made. Prevalidating disbursements to obligations helps to ensure that this is done, but DOD has not followed this basic accounting procedure. 
To help ensure implementation of this control feature, the Congress has included in DOD’s appropriation acts for the past 2 fiscal years a requirement that DOD prematch disbursements exceeding $5 million with obligations in the official accounting records. The prevalidation process has proven to be a useful tool for identifying errors and preventing them from being recorded in the official accounting records. However, as discussed earlier, to prevent errors from occurring in the first place, DOD must pursue short-term and long-term efforts targeted at improving the quality of information in its systems. The cornerstone of DOD’s long-term effort is its ongoing development of the Standard Procurement System (SPS) and the Defense Procurement Payment System (DPPS). However, DOD estimates that these systems will not be fully operational until at least the year 2001. DOD officials told us that, in the interim, DOD will concurrently pursue various short-term efforts to improve the quality of information on the amount of funds obligated and disbursed. For example, DOD officials stated that they are in the process of implementing automated interfaces between the contract writing, disbursing, and accounting systems to eliminate data errors generated during the manual entry of data. DOD officials stated that they plan to begin implementing the electronic exchange of data by the end of calendar year 1996. DOD had automated prevalidation to electronically process certain disbursement data between the DFAS Columbus Center’s disbursing system, known as the Mechanization of Contract Administration Services (MOCAS), and eight DOD primary contract accounting systems. As of January 1996, 56 DOD locations were using the eight contract accounting systems to prevalidate disbursements with MOCAS. 
Consistent with the authority contained in section 8137(e) of Public Law 103-335 and section 8102(d) of Public Law 104-61, DOD required all activities, except the DFAS Columbus Center, to lower the prevalidation threshold from $5 million to $1 million on October 1, 1995. The disbursement process starts when a contractor submits an invoice or other formal request for payment to a disbursing office. Prior to starting the prevalidation process, the disbursing office is required to determine if the contractor is entitled to the payment. To do this, the disbursing office must ensure that (1) payments are made only for goods and services authorized by purchase orders, contracts, or other authorizing documents, (2) the government received and accepted the goods and services, and (3) payment amounts are accurately computed. The disbursing office is also responsible for ensuring that accounting data on payment supporting documents are complete and accurate. After determining that (1) the contractor is entitled to the payment and (2) the accounting data are complete and accurate, the disbursing office initiates action to prevalidate the payment by matching the disbursement with an obligation in the official accounting record. These procedures, as described below, are followed for both the automated and manual prevalidating of disbursements. For the automated process, information needed to prevalidate a disbursement is electronically sent from the disbursing system to the funding station’s accounting system. For the manual process, information is exchanged through the use of telephones, fax machines, and mail. First, the disbursing activity provides the accountable station, or stations if the payment is for services or supplies related to two or more DOD activities, with data showing how much it plans to pay and how the payment is to be charged to the obligations in the accountable station’s (or stations’) records. 
The accountable station compares this data with its obligations and sends back a notice to the disbursing activity either authorizing or rejecting the payment. If the payment is authorized, the accountable activity is to reserve an amount of unliquidated obligations to cover the amount of payment. After receiving authorization to make a payment, the disbursing activity will make the payment and notify the accountable station that the payment has been made. Several days later, the disbursing activity formally reports to the accountable station on the payment. This final report is currently not part of the automated process on prevalidating disbursements. Figures 1 and 2 illustrate the additional role played by the accounting station when disbursements are prevalidated. Our review disclosed that DOD generally had successfully implemented the automated prevalidation process. However, we and DOD’s IG did find deficiencies in the DFAS automated programs used to prevalidate disbursements related to Army and Air Force funds that could result in material weaknesses which would undermine the intent of prevalidation if not promptly corrected. The most significant weakness was the lack of controls to ensure that Air Force and Army obligations could not be used to cover more than one payment. For example, the Air Force’s Central Procurement Accounting System (CPAS) did not maintain the reservation of funds until the final payment data were received from MOCAS. As a result, the same obligation balances could be used to prevalidate more than one disbursement. Our review of about $66 million of over $1.4 billion problem disbursement balances at one DOD location that operated CPAS found a $3.4 million payment that had been prevalidated but could not be recorded in CPAS once the payment was made. Our analysis disclosed that another $107,000 payment had also been processed and recorded against the same $3.4 million of CPAS obligation balances. 
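The prevalidation handshake just described, including the reservation of funds whose absence caused the CPAS weakness, can be sketched in simplified form. This is a minimal, single-station illustration; the class and method names are hypothetical and do not come from any actual DOD system:

```python
# Illustrative sketch of the prevalidation handshake described above.
# Class and method names are hypothetical, not from any DOD system.

class AccountableStation:
    def __init__(self, unliquidated_obligation):
        self.available = unliquidated_obligation  # unliquidated obligation balance
        self.reserved = 0

    def prevalidate(self, amount):
        """Authorize a payment only if unreserved obligations cover it,
        and hold (reserve) the funds until the final payment report arrives."""
        if self.available - self.reserved >= amount:
            self.reserved += amount   # reservation prevents double use
            return True               # authorization sent to disbursing activity
        return False                  # rejection notice

    def record_final_payment(self, amount):
        """Post the disbursing activity's final payment report against
        the obligation and release the reservation."""
        self.reserved -= amount
        self.available -= amount

# The CPAS weakness: dropping the reservation before the final payment
# report lets the same obligation balance prevalidate more than one payment.
station = AccountableStation(unliquidated_obligation=3_400_000)
assert station.prevalidate(3_400_000)   # $3.4 million payment authorized
station.reserved = 0                    # CPAS failed to maintain the hold
assert station.prevalidate(107_000)     # $107,000 payment slips through
station.record_final_payment(107_000)
# Only $3,293,000 remains, too little to record the $3.4 million payment.
assert station.available < 3_400_000
```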
Because the $107,000 payment reduced the available obligation balance below the $3.4 million necessary to record the initial prevalidated payment, there were not sufficient obligations in the CPAS accounting system to cover the $3.4 million prevalidated payment. DFAS officials agreed with our analysis and were still reviewing the two payment transactions to determine causes of the problem and necessary corrective actions. We met with DFAS headquarters’ officials to discuss the problems both we and the DOD IG found during our review of the automated programs. The officials agreed that these were serious problems and have taken actions or plan to take actions to correct the identified problems. For example, DFAS has approved a system change request to resolve the problems we identified with CPAS and told us that it should be corrected by June 1996. However, the DFAS officials could not tell us when this problem would be resolved for the Army. The DOD IG has made specific recommendations to address these problems in its report on the prevalidation program. Although section 8102 of DOD’s Appropriations Act for Fiscal Year 1996 required DOD to prevalidate only disbursements in excess of $5 million, on October 1, 1995, DOD lowered the prevalidation threshold to $1 million at all activities except the DFAS Columbus Center. DFAS officials told us that the threshold was not lowered to $1 million at the DFAS Columbus Center because of concerns that the Columbus Center could not absorb the increase in the volume of payments that would have to be prevalidated at the $1 million level. For example, they estimated that the number of invoices they would have to prevalidate annually would increase from about 1,800 at the $5 million level to about 11,200 at the $1 million level. The $1 million threshold level would still only cover about 50 percent of the dollar value of payments at DFAS Columbus. 
According to the officials, since the DFAS Columbus Center administers some of the most complex contracts in DOD, it requires more time to process and prevalidate payments than the other DOD activities, which have much simpler contracts. DFAS officials told us that it is not uncommon for a voucher examiner at the DFAS Columbus Center to allocate a payment across 30 or 40 appropriation fund cites in order to record the payment. Conversely, other DOD activities generally have to allocate a payment against only one or two appropriation fund cites. Our analysis of about 1,400 disbursements prevalidated at the DFAS Columbus Center confirmed what the officials told us about the complexity of processing and prevalidating payments. We found hundreds of payments that were spread across anywhere from 2 to more than 100 appropriation fund cites. For example, one $6 million payment had been spread across 107 appropriation fund cites, all of which had to be approved before payment could be made. However, since prevalidation at DFAS Columbus is performed only for payments exceeding $5 million, large numbers of transactions, amounting to tens of billions of dollars, are excluded from this important accounting control. Our review of the DFAS Columbus Center’s disbursement data between July 1, 1995, and January 31, 1996, disclosed that the Columbus Center made 521,262 disbursements totaling $37.1 billion. Of these, only 1,157 disbursements totaling $12.3 billion were prevalidated. This is less than one-fourth of one percent of the total payments and only about one-third of the total dollars. Our analysis of calendar year 1995 disbursement data disclosed that the DFAS Columbus Center paid about 1.2 million invoices totaling at least $55 billion. 
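The coverage figures above can be verified directly:

```python
# Verify the prevalidation coverage reported for DFAS Columbus,
# July 1, 1995 through January 31, 1996.
total_payments = 521_262
total_dollars = 37.1e9           # $37.1 billion disbursed
prevalidated_payments = 1_157
prevalidated_dollars = 12.3e9    # $12.3 billion prevalidated

payment_share = prevalidated_payments / total_payments
dollar_share = prevalidated_dollars / total_dollars

print(f"{payment_share:.4%} of payments")  # about 0.22%, under a quarter of one percent
print(f"{dollar_share:.1%} of dollars")    # about 33%, roughly one-third
```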
As shown in table 3, if DFAS Columbus had been prevalidating disbursements for the entire year, only about 1,800 payments totaling $15.1 billion would have been subject to prevalidation at the $5 million level. DFAS Columbus officials acknowledged that, by prevalidating only payments above the $5 million level, they were bypassing many payments and that errors were still occurring at levels below that threshold. The officials acknowledged that lowering the threshold would help prevent additional errors from being passed on to the accountable stations. Although the DFAS Columbus Center had planned to lower the threshold to $4 million on February 26, 1996, the DOD Comptroller directed the Center not to lower the threshold. When we discussed this matter with the DFAS Director, he informed us that DOD was in compliance with the prevalidation legislation and that DOD made a policy decision to keep the $5 million threshold at DFAS Columbus. He noted, however, that one factor considered when deciding not to lower the threshold was that DFAS Columbus was not currently meeting DOD’s payment performance goals for progress payments and cost vouchers. For example, as of December 1995, DFAS Columbus reported that it was taking an average of 16 days to pay a progress payment and 15 days to pay a cost voucher. He said that when DFAS Columbus reduces the overall number of days it takes to pay progress payments and cost vouchers, DOD would consider lowering the threshold. However, he told us that DOD did not have a plan that specified the exact payment period the Columbus Center needed to reach before the prevalidation threshold could be lowered. When we discussed a draft of this report with DOD officials, they agreed that they should begin reducing the threshold at the DFAS Columbus Center. They stated that they will start by reducing the threshold to $4 million but had not yet decided when this would take place. 
They also stated that they intend to develop a plan to continuously lower the threshold. Our review of the prevalidation process at DFAS Columbus showed that prevalidation did add time to the overall payment process. For example, we found that under the best of circumstances, when no errors or rejections occurred, prevalidation took about 3 days. Our analysis of 586 DFAS Columbus payments (progress payments and cost vouchers that had been prevalidated as of March 1996) showed that, when errors and rejections are included, prevalidation took an average of 5 to 6 days overall. DOD could not provide comparable data, as of December 1995, for transactions made before prevalidation, so we could not determine whether DOD was taking longer to pay an invoice as a result of prevalidation or whether payment delays were due to problems other than those that occurred during the prevalidation process. However, DFAS Columbus reports on payments overall show that, between September 1995 and February 1996, it had reduced the payment period for progress payments from about 14 days to about 11 days and for cost vouchers from 17 days to about 16 days. In addition, our analysis of DFAS Columbus payment data disclosed that, as of May 1, 1996, there were only four invoices, totaling $46 million and ranging from about 30 to 118 days old, that had either been rejected or were awaiting further confirmation from the accounting station. Moreover, lowering the threshold to $4 million would result in the prevalidation of only 557 more payments annually (about two additional invoices a day), totaling $1.5 billion. Columbus officials told us that with the recent automation of the prevalidation process, they believe that they could now handle the workload at the $4 million threshold level. 
According to the officials, they had reassigned 25 people in February 1996 to work on the prevalidation program at the Columbus Center to assist with (1) managing the program, (2) reconciling, researching, tracking, and following up on rejected transactions, and (3) reporting to DFAS headquarters on program results. We agree that Columbus could handle the additional workload at the $4 million level. However, as previously shown in table 3, this would only increase the percentage of the dollar amount of disbursements that are prevalidated from 27 percent to 30 percent. The prevalidation program allowed DOD to identify errors and prevent problem disbursements from being recorded in DOD’s official accounting records. However, unless the $5 million threshold is lowered at DFAS Columbus, and the $1 million threshold is lowered at the other payment centers, tens of billions of dollars in transactions will continue to bypass this important control. Until a detailed plan is developed to ensure that all payments are properly prevalidated before taxpayer funds are disbursed, the full benefits of prevalidation will not be realized. More importantly, even at its best, prevalidation will not solve Defense’s disbursement problems as evidenced by $21.8 billion of new problem disbursements that surfaced between October 1995 and January 1996. Because prevalidation is an effort to impose quality near the end of the disbursement process, it does not address the root problems inherent in poor systems and processes as well as failure to follow fundamental internal controls. DOD’s problems with accounting for and reporting on disbursements will not be resolved until (1) weaknesses in control procedures that allow problem disbursements to occur are corrected and (2) improvements are made to DOD’s contract pay, disbursing, and accounting processes and systems. 
Prevalidating all disbursements is important, especially in the short term, to protect the integrity of DOD’s disbursement process while long-term improvements are made to DOD’s contract pay, disbursing, and accounting processes and systems. Accordingly, we recommend that the Secretary of Defense direct the DOD Comptroller to develop a plan for prevalidating all disbursements at the DFAS Columbus Center. As a first step, the Comptroller should reduce the threshold at the DFAS Columbus Center to $4 million and continuously lower the threshold in accordance with the plan. We also recommend that the Secretary of Defense direct the Comptroller to develop similar plans for prevalidating all disbursements at all the other DOD disbursing activities. These plans should incorporate the DOD IG’s recommendations. Further, we recommend that the Secretary of Defense direct the Comptroller to ensure that existing accounting policies and procedures are followed in recording obligations, detecting and correcting errors, and posting complete and accurate accounting information in systems supporting the disbursement process. On May 23, 1996, we discussed a draft of this report with officials of the Office of the Secretary of Defense (Comptroller) and DFAS who are responsible for DOD disbursements and have incorporated their views where appropriate. In general, these officials agreed with the report’s findings, conclusions, and recommendations. Regarding the recommendations, they stated that DOD plans to reduce the threshold at the DFAS Columbus Center to $4 million and that they intend to develop a plan to continuously lower the threshold at both the Columbus Center and other DOD disbursing activities. 
We are sending copies of this report to the Ranking Minority Members of the Subcommittee on National Security, House Committee on Appropriations, and the Subcommittee on Government Management, Information, and Technology, House Committee on Government Reform and Oversight; the Chairman of the Senate Committee on Governmental Affairs; the Secretary of Defense; the Director of the Office of Management and Budget; and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-6240 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix I. Our objectives were to (1) assess DOD’s progress in reducing problem disbursements and (2) review DOD’s implementation of the requirements in section 8137 of Public Law 103-335 and section 8102 of Public Law 104-61 for DOD to match disbursements over $5 million with obligations in the official accounting records prior to making payments. This review was a joint effort between the DOD IG and GAO. The DOD IG was generally responsible for completing the field work at Army and Navy activities and supporting locations, while GAO was generally responsible for completing the field work at Air Force and Marine Corps activities and supporting locations. Discussions related to Army and Navy prevalidation issues are based primarily on the DOD IG’s work. To satisfy ourselves as to the sufficiency, relevance, and competence of the IG’s work at Army and Navy, we reviewed the IG’s audit program, workpapers, and draft report. We also combined our efforts with the IG to complete work at other DOD locations visited during the review. To assess DOD’s progress in resolving problem disbursements, we met with the DFAS officials responsible for managing problem disbursements to discuss and assess their various initiatives aimed at reducing problem disbursement balances. 
We (1) analyzed various DOD reports on problem disbursements to identify and document any changes in problem disbursement balances, (2) spoke with DFAS officials to identify systemic problems hindering DOD’s ability to reduce problem disbursement balances, and (3) reviewed internal DOD audit reports and the Secretary of Defense’s fiscal year 1995 Annual Statement of Assurance under the Federal Manager’s Financial Integrity Act. To assess the DOD progress in addressing these weaknesses, we spoke with DFAS officials at DFAS centers and headquarters and reviewed various progress reports and other internal documents of disbursement problems and corrective actions taken or planned. The dollar values of disbursements discussed in this report were obtained from agency reports or compiled from agency records. We did not verify the accuracy of disbursement data included in agency reports or records because the data consisted of hundreds of thousands of disbursement transactions. Consequently, we cannot provide any assurance that the $25.4 billion of problem disbursements that had not been properly matched to obligations as of January 31, 1996, are correct. To determine if DOD’s implementation of the prevalidation program complied with legislative requirements, we reviewed DOD’s implementation plan and other DOD policies and procedures for implementing the program. We also visited various activities and observed their prevalidation processes. At these locations, we judgmentally selected large dollar transactions for detailed analysis. Our analysis included reviewing the official accounting records to determine if the payment had been properly validated and correctly posted to the accounting records. We met with responsible DFAS and military service officials to discuss and resolve identified discrepancies. 
Our work and that of the DOD IG was performed at the offices of the DOD Comptroller, Washington, D.C.; Assistant Secretary of the Army (Financial Management and Comptroller), Washington, D.C.; DFAS Headquarters, Arlington, Virginia; and the following DFAS Centers: DFAS Columbus, Columbus, Ohio; DFAS Cleveland, Cleveland, Ohio; DFAS Indianapolis, Indianapolis, Indiana; DFAS Kansas City, Kansas City, Missouri; and DFAS Denver, Denver, Colorado. We also performed work at the Air Force Materiel Command and DFAS Dayton Operating Location, Dayton, Ohio; DFAS Operating Location, Charleston, South Carolina; DFAS Operating Location, Norfolk, Virginia; DFAS Operating Location and Defense Megacenter, St. Louis, Missouri; Defense Accounting Office, U.S. Army Missile Command and Defense Megacenter, Huntsville, Alabama; Assistant Secretary of the Navy (Financial Management and Comptroller), Washington, D.C.; Navy Strategic Systems Program Office, Arlington, Virginia; Navy International Logistics Command, Philadelphia, Pennsylvania; Quantico Marine Base, Quantico, Virginia; and Camp Lejeune Marine Base, Jacksonville, North Carolina.

Larry W. Logsdon, Assistant Director
Gregory E. Pugnetti, Assistant Director
Roger Corrado, Senior Evaluator
Cristina T. Chaplain, Communications Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. 
Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to congressional requests, GAO reviewed, in conjunction with the Department of Defense's (DOD) Inspector General (IG), DOD efforts to reduce problem disbursements and its implementation of a statutory requirement to match each disbursement exceeding $5 million to the appropriate obligation before the disbursement is made. GAO found that: (1) DOD reduced its problem disbursements from $37.8 billion to $23.1 billion as of September 1995; (2) DOD disbursement problems persist due to long-standing system weaknesses and DOD failure to comply with basic accounting procedures for validating, reconciling, and reporting transactions; (3) DOD has automated the prevalidation process for the Defense Finance and Accounting Service's (DFAS) Columbus Center contract payment system and eight other primary contract accounting systems to handle their large volume of transactions; (4) there are deficiencies in the automated programs for prevalidating Army and Air Force disbursements; (5) DOD has lowered the prevalidation threshold to $1 million for all disbursement centers except DFAS-Columbus; (6) this limited implementation hampers DOD ability to resolve its disbursement problems, since DFAS-Columbus is responsible for about 40 percent of DOD contractor and vendor payments; (7) from July 1995 through January 1996, DFAS-Columbus prevalidated only about one-third of the total dollar amount of its disbursements; and (8) to resolve disbursement problems, DOD needs to prevalidate as many transactions as practical, further lower the prevalidation threshold, and follow basic accounting procedures until it has corrected serious weaknesses in its accounting and contracting systems.
CPP was the primary initiative under TARP for stabilizing the financial markets and banking system. Treasury created the program in October 2008 to stabilize the financial system by providing capital on a voluntary basis to qualifying regulated financial institutions through the purchase of senior preferred shares and subordinated debt. On October 14, 2008, Treasury allocated $250 billion of the $700 billion in overall TARP funds for CPP but adjusted its allocation to $218 billion in March 2009 to reflect lower estimated funding needs based on actual participation and the expectation that institutions would repay their investments. The program was closed to new investments on December 31, 2009, and, in total, Treasury invested $205 billion in 707 financial institutions over the life of the program. Through June 30, 2010, 83 institutions had repaid about $147 billion in CPP investments, including 76 institutions that repaid their investments in full. Under CPP, qualified financial institutions were eligible to receive an investment of between 1 and 3 percent of their risk-weighted assets, up to a maximum of $25 billion. In exchange for the investment, Treasury generally received shares of senior preferred stock that were due to pay dividends at a rate of 5 percent annually for the first 5 years and 9 percent annually thereafter. In addition to the dividend payments, EESA required the inclusion of warrants to purchase shares of common stock or preferred stock, or a senior debt instrument to give taxpayers additional protection against losses and an additional potential return on the investments. Institutions are allowed to repay CPP investments with the approval of their primary federal bank regulators and afterward to repurchase warrants at fair market value. While this was Treasury’s program, the federal bank regulators played a key role in the CPP application and approval process. 
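The investment terms summarized above can be sketched as follows. This is a simplified illustration of the terms as this report describes them, not Treasury's actual investment or pricing model:

```python
# Sketch of the CPP investment terms described above; an illustrative
# simplification, not Treasury's actual model.

MAX_INVESTMENT = 25e9  # $25 billion program maximum per institution

def eligible_range(risk_weighted_assets):
    """Qualified institutions could receive between 1 and 3 percent of
    their risk-weighted assets, up to the $25 billion maximum."""
    low = min(0.01 * risk_weighted_assets, MAX_INVESTMENT)
    high = min(0.03 * risk_weighted_assets, MAX_INVESTMENT)
    return low, high

def annual_dividend(investment, year):
    """Senior preferred shares paid dividends of 5 percent annually for
    the first 5 years and 9 percent annually thereafter."""
    rate = 0.05 if year <= 5 else 0.09
    return rate * investment

# A hypothetical bank with $100 billion in risk-weighted assets could
# receive between $1 billion and $3 billion; in year 6, the dividend on
# a $1 billion investment rises from $50 million to $90 million a year.
low, high = eligible_range(100e9)
```

The step-up in the dividend rate after year 5 built in an incentive for institutions to repay the investment, consistent with the repayments the report describes.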
The federal banking agencies that were responsible for receiving and reviewing CPP applications and recommending approval or denial were the Federal Reserve, which supervises and regulates banks authorized to do business under state charters and that are members of the Federal Reserve System, as well as bank and financial holding companies; FDIC, which provides primary federal oversight of any state-chartered banks insured by FDIC that are not members of the Federal Reserve System; OCC, which is responsible for chartering, regulating, and supervising commercial banks with national charters; and OTS, which charters federal savings associations (thrifts) and regulates and supervises federal and state thrifts and savings and loan holding companies. Treasury, in consultation with the federal banking regulators, developed a standardized framework for processing applications and disbursing CPP funds. Treasury encouraged financial institutions that were considering applying to CPP to consult with their primary federal bank regulators. The bank regulators also had an extensive role in reviewing the applications of financial institutions applying for CPP and making recommendations to Treasury. Eligibility for CPP funds was based on the regulator’s assessment of the applicant’s strength and viability, as measured by factors such as examination ratings, financial performance ratios, and other mitigating factors, without taking into account the potential impact of TARP funds. Institutions deemed to be the strongest, such as those with the highest examination ratings, received presumptive approval from the banking regulators, and their applications were forwarded to Treasury. Institutions with lower examination ratings or other concerns that required further review were referred to the interagency CPP Council, which was composed of representatives from the four banking regulators, with Treasury officials as observers. 
The CPP Council evaluated and voted on the applicants, and applications from institutions that received “approval” recommendations from a majority of the regulatory representatives were forwarded to Treasury. Treasury provided guidance to regulators and the CPP Council to use in assessing applicants that permitted consideration of factors such as signed merger agreements or confirmed investments of private capital, among other things, to offset low examination ratings or other weak attributes. Finally, institutions that the banking regulators determined to be the weakest and ineligible for a CPP investment, such as those with the lowest examination ratings, were to receive a presumptive denial recommendation. Figure 1 provides an overview of the process for assessing and approving CPP applications. The banking regulator or the CPP Council sent approval recommendations to Treasury’s Investment Committee, which comprised three to five senior Treasury officials, including OFS’s chief investment officer (who served as the committee chair) and the assistant secretaries for financial markets, economic policy, financial institutions, and financial stability at Treasury. After receiving recommended applications from regulators or the CPP Council, OFS reviewed documentation supporting the regulators’ recommendations but often collected additional information from regulators and the council before submitting applications to the Investment Committee. The Investment Committee could also request additional analysis or information in order to clear any concerns before deciding on an applicant’s eligibility. After completing its review, the Investment Committee made recommendations to the Assistant Secretary for Financial Stability for final approval. Once the Investment Committee recommended preliminary approval, Treasury and the approved institution initiated the closing process to complete the legal aspects of the investment and disburse the CPP funds. 
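The routing framework described above can be sketched in simplified form. The numeric rating cutoffs below are illustrative assumptions, since the report describes the tiers qualitatively ("strongest," "lower ratings," "weakest") rather than as fixed thresholds:

```python
# Rough sketch of the CPP application routing described above.
# The rating cutoffs are illustrative assumptions, not stated rules.

def route_application(overall_rating, mitigating_factors=False):
    """Route a CPP application per the standardized framework: the
    strongest firms get presumptive approval and go straight to Treasury,
    middle cases are referred to the interagency CPP Council, and the
    weakest receive a presumptive denial recommendation."""
    if overall_rating <= 2:                           # e.g., CAMELS 1 or 2
        return "forward to Treasury (presumptive approval)"
    if overall_rating == 3 or mitigating_factors:     # merger, private capital, etc.
        return "refer to CPP Council"
    return "presumptive denial"                       # weakest institutions

def council_forwards(votes):
    """The Council forwarded applications receiving 'approve'
    recommendations from a majority of the regulators' representatives."""
    return votes.count("approve") > len(votes) / 2
```

For example, under this sketch a 4-rated applicant with a signed merger agreement or a confirmed private capital investment could still be referred to the Council rather than receiving a presumptive denial.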
At the time of the program’s announced establishment, nine major financial institutions were initially included in CPP. While these institutions did not follow the application process that was ultimately developed, Treasury included these institutions because federal banking regulators and Treasury considered them to be essential to the operation of the financial system, which at the time had effectively ceased to function. At the time, these nine institutions held about 55 percent of U.S. banking assets and provided a variety of services, including retail and wholesale banking, investment banking, and custodial and processing services. According to Treasury officials, the nine financial institutions agreed to participate in CPP in part to signal the importance of the program to the stability of the financial system. Initially, Treasury approved $125 billion in capital purchases for these institutions and completed the transactions with eight of them on October 28, 2008, for a total of $115 billion. The remaining $10 billion was disbursed after the merger of Bank of America Corporation and Merrill Lynch & Co., Inc., was completed in January 2009. The institutions that received CPP capital investments varied in terms of ownership type, location, and size. The 707 institutions that received CPP investments were split almost evenly between publicly held and privately held institutions, with slightly more private firms. They included state- chartered and national banks and U.S. bank holding companies located in 48 states, the District of Columbia, and Puerto Rico (see fig. 2). Most states had fewer than 20 CPP firms, but 13 states had 20 or more. California had the most, with 72, followed by Illinois (45), Missouri (32), North Carolina (31), and Pennsylvania (31). Montana and Vermont were the only 2 states that did not have institutions that participated in CPP. The total amount of CPP funds disbursed to institutions also varied by state. 
The amount of CPP funds invested in institutions in most states was less than $500 million, but institutions in 17 states received more than $1 billion each. Institutions in states that serve as financial services centers, such as New York and North Carolina, received the most CPP funds. The median amount of CPP funds invested in institutions by state was $464 million. The size of CPP institutions also varied widely. The risk-weighted assets of firms we reviewed that were funded through April 30, 2009, ranged from $10 million to $1.4 trillion. However, most of the institutions were relatively small. For example, about half of the firms that we reviewed had risk-weighted assets of less than $500 million, and almost 70 percent had less than $1 billion. Only 30 percent were medium to large institutions (more than $1 billion in risk-weighted assets). Because the investment amount was tied to the firm’s risk-weighted assets, the amount that firms received ranged widely, from about $300,000 to $25 billion. The average investment amount for all 707 CPP participants was $290 million, although half of the institutions received less than $11 million. The 25 largest institutions received almost 90 percent of the total amount of CPP investments, and 9 of these firms received almost 70 percent of the funds. The characteristics Treasury and regulators used to evaluate applicants indicated that approved institutions had bank or thrift examination ratings that generally were satisfactory and within CPP guidelines. Treasury and regulators used various measures of institutional strength and financial condition to evaluate applicants, including supervisory examination ratings and financial performance ratios assessing an applicant’s capital adequacy and asset quality. Although some examination results were more than a year old, regulatory officials told us that they had taken steps to mitigate the effect of these older ratings, such as collecting updated information. 
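Because investment amounts were tied to risk-weighted assets, the range of possible investments for a given institution can be sketched in a few lines. The 1 percent floor and the cap of the lesser of 3 percent of risk-weighted assets or $25 billion reflect the program's published terms as we understand them, not figures restated in this section, and the function name is our own:

```python
# Illustrative calculation of the CPP allowable investment range.
# Assumes (from the program's published terms, not this section) that an
# institution could receive between 1 percent of its risk-weighted
# assets and the lesser of 3 percent of those assets or $25 billion.

def allowable_investment_range(risk_weighted_assets):
    minimum = 0.01 * risk_weighted_assets
    maximum = min(0.03 * risk_weighted_assets, 25e9)
    return minimum, maximum

# A small bank with $500 million in risk-weighted assets could receive
# roughly $5 million to $15 million; the largest reviewed firm, with
# $1.4 trillion in risk-weighted assets, would hit the $25 billion cap.
low, high = allowable_investment_range(500e6)
```

This range is consistent with the investment amounts reported above, which ran from about $300,000 (for the smallest firms) to $25 billion (the cap).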
Almost all of the 567 institutions we reviewed had overall examination ratings for their largest bank or thrift that were satisfactory or better (see fig. 3). The CAMELS ratings range from 1 to 5, with 1 indicating a firm that is sound in every respect, 2 denoting an institution that is fundamentally sound, and 3 or above indicating some degree of supervisory concern. Of the CPP firms that we reviewed, 82 percent had an overall rating of 2 from their most recent examination before applying to CPP, and an additional 11 percent had the strongest rating. Seven percent had an overall rating of 3 and no firms had a weaker rating. We also found relatively small differences in overall examination ratings for institutions by size or ownership type. For example, institutions that were above and below the median risk-weighted assets of $472 million both had average overall ratings of about 2. Also, public and private firms both had average overall examination ratings of about 2. Bank or thrift examination ratings for individual components—such as asset quality and liquidity—exhibited similar trends. In particular, each of the individual components had an average rating of around 2. Institutions tended to have weaker ratings for the earnings component, which had an average of 2.2, than for the other components, which averaged between 1.8 and 1.9. Public and private institutions exhibited similar results for the average component ratings, although private institutions tended to have stronger ratings on all components except for earnings and sensitivity to market risk. Differences in average ratings by bank size also were small. For example, smaller institutions had stronger average ratings for the capital and asset quality components, but larger institutions had stronger average ratings for earnings and sensitivity to market risk. Holding companies receiving CPP investments typically also had satisfactory or better examination ratings. 
The Federal Reserve uses its own rating system when evaluating bank holding companies. Almost 80 percent of holding companies receiving CPP funds had an overall rating of 2 (among those with a rating), and an additional 14 percent had an overall rating of 1. The individual component ratings for holding companies (for example, for risk management, financial condition, and impact) also were comparable with the overall ratings, with most institutions for which we could find a rating classified as satisfactory or better. Specifically, over 90 percent of the ratings for each of the components were 1 or 2, with most rated 2. Many examination ratings were more than a year old, which could limit how accurately the ratings reflected the institutions’ financial condition, especially at a time when the economy was deteriorating rapidly. Specifically, about 25 percent of examination ratings were more than 1 year old as of the application date, and 5 percent were more than 16 months old. On average, examination ratings predated the application date by about 9 months. Regulators used examination ratings as a key measure of an applicant’s financial condition and viability, and the age of these ratings could affect how accurately they reflected the institution’s current state. For example, assets, liabilities, and operating performance generally are affected by the economic environment and depend on many factors, such as institutional risk profiles. Stressed market conditions such as those in the broad economy and financial markets during and before CPP implementation could be expected to have negative impacts on many of the applicants, making the age of examination ratings a critical factor in evaluating the institutions’ viability. Further, some case decision files for CPP firms were missing examination dates. Specifically, 104 of the 567 applicants’ case decision files we reviewed lacked a date for the most recent examination results. 
Treasury and regulatory officials told us that they took various actions to collect information on applicants’ current condition and to mitigate any limitations of older examination results. Efforts to collect additional information on the financial condition of applicants included waiting for results of scheduled examinations or relying on preliminary CAMELS exam results, reviewing quarterly financial results such as recent information on asset quality, and sometimes conducting brief visits to assess applicants’ condition. Officials from one regulator explained that communication with the agency’s regional examiners and bank management on changes to the firm’s condition was the most important means of allaying concerns about older examination results. However, officials from another regulator stated that they did use older examination ratings, depending on the institution’s business model, lending environment, banking history, and current loan activity. For example, the officials said they would use older ratings if the institution was a small community bank with a history of conservative underwriting standards and was not lending in a volatile real estate market. As with the examination ratings, almost all of the institutions we reviewed had a Community Reinvestment Act (CRA) compliance rating of satisfactory or better. Over 80 percent of firms received a satisfactory rating, and almost 20 percent had an outstanding rating. Only two institutions had an unsatisfactory rating. Average CRA ratings also were similar across institution types and sizes. Performance ratios for the CPP firms we reviewed varied but typically were well within CPP guidelines. In assessing CPP applicants, Treasury and regulators focused on a variety of ratios based on regulatory capital levels, and institutions generally were well above the minimum required levels for these ratios. 
Regulators generally used performance ratio information from regulatory filings for the second or third quarter of 2008. Two of these ratios are based on a key type of regulatory capital known as Tier 1, which includes the core capital elements that are considered the most reliable and stable, primarily common stock and certain types of preferred stock. Specifically, for the Tier 1 risk-based capital ratio, banks or thrifts and holding companies had average ratios that were more than double the regulatory minimum of 4 percent, with only one firm below that minimum level. Further, only two institutions were below 6.5 percent (see fig. 4). Although almost all firms had Tier 1 risk-based capital ratios that exceeded the minimum level, the ratios ranged widely, from 3 percent to 43 percent. Similarly, banks or thrifts and holding companies had average Tier 1 leverage ratios that were more than double the required 4 percent, and only three firms were below 4 percent. The ratios also ranged widely, from 2 percent to 41 percent. Finally, for the total risk-based capital ratio, banks or thrifts and holding companies had average ratios of 12 percent, well above the 8 percent minimum, and only two firms were below 8 percent. These ratios ranged from 4 percent to 44 percent. Asset-based performance ratios for most CPP institutions also generally remained within Treasury’s guidelines, although more firms did not meet the criteria for these ratios than did not meet the criteria for capital ratios. Treasury and the regulators established maximum guideline amounts for the three performance ratios relating to assets that they used to evaluate applicants. These ratios measure the concentration of troubled or risky assets as a share of capital and reserves—classified assets, nonperforming loans (including non-income-generating real estate, which is typically acquired through foreclosure), and construction and development loans. 
For each of these performance ratios, both the banks or thrifts and holding companies had average ratios that were less than half of the maximum guideline, well within the specified limits. For example, banks/thrifts and holding companies had average ratios of 25 and 32 percent, respectively, for classified assets, which had a maximum guideline of 100 percent. The substantial majority of banks or thrifts and holding companies also were well below the maximum guidelines for the asset ratios. For example, almost 90 percent of banks/thrifts and over 80 percent of holding companies had classified assets ratios below 50 percent. However, while only 3 firms missed the guidelines for any of the capital ratios, 38 banks/thrifts and holding companies missed the nonperforming loan ratio, 8 missed the construction and development loan ratio, and 1 missed the classified assets ratio. A small group of CPP participants exhibited weaker attributes relative to other approved institutions (see table 1). For most of these cases, Treasury or regulators described factors that mitigated the weaknesses and supported the applicant’s viability. Specifically, we identified 66 CPP institutions—12 percent of the firms we reviewed—that either (1) did not meet the performance ratio guidelines used to evaluate applicants, (2) had an unsatisfactory overall bank or thrift examination rating, or (3) had a formal enforcement action involving safety and soundness concerns. We use these attributes to identify these 66 firms as marginal institutions, although the presence of these attributes does not necessarily indicate that a firm was not viable or that it was ineligible for CPP participation. However, these attributes generally indicate firms that either were weaker than other approved institutions or required closer evaluation by Treasury and regulators. 
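The ratio screening described above—capital ratios checked against regulatory minimums and asset ratios checked against guideline maximums—can be sketched in a few lines. This is our own illustration, not Treasury's actual methodology: the function and threshold names are ours, and the nonperforming-loan and construction-and-development maximums are hypothetical placeholders, since this section states only the capital minimums (4, 4, and 8 percent) and the classified assets guideline (100 percent):

```python
# Illustrative screen of a CPP applicant against the ratio guidelines
# discussed above (all values in percent). The nonperforming-loan and
# construction-and-development thresholds are hypothetical placeholders.

CAPITAL_MINIMUMS = {
    "tier1_risk_based": 4.0,   # regulatory minimum stated above
    "tier1_leverage": 4.0,     # regulatory minimum stated above
    "total_risk_based": 8.0,   # regulatory minimum stated above
}

ASSET_MAXIMUMS = {
    "classified_assets": 100.0,          # guideline stated above
    "nonperforming_loans": 50.0,         # hypothetical placeholder
    "construction_development": 300.0,   # hypothetical placeholder
}

def screen_applicant(ratios):
    """Return the names of any guideline ratios the applicant misses."""
    missed = [name for name, minimum in CAPITAL_MINIMUMS.items()
              if ratios[name] < minimum]
    missed += [name for name, maximum in ASSET_MAXIMUMS.items()
               if ratios[name] > maximum]
    return missed

# A hypothetical applicant resembling the averages cited above misses
# no guidelines:
applicant = {
    "tier1_risk_based": 9.0,
    "tier1_leverage": 8.5,
    "total_risk_based": 12.0,
    "classified_assets": 25.0,
    "nonperforming_loans": 10.0,
    "construction_development": 100.0,
}
print(screen_applicant(applicant))  # → []
```

Under this sketch, the 66 "marginal" institutions would be those for which the screen returns at least one missed ratio, or that had an unsatisfactory examination rating or a safety-and-soundness enforcement action.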
Nineteen of the institutions met multiple criteria, including missing more than one performance ratio for the largest bank/thrift or holding company. The most common criteria for the firms identified as marginal were an unsatisfactory overall examination rating or an unsatisfactory nonperforming loan ratio. A far smaller number of firms exceeded the construction and development loan ratio or had experienced a formal enforcement action related to safety and soundness concerns. One bank and two holding companies missed the capital or classified assets ratios. In their evaluations of CPP applicants, Treasury and regulators documented their reasons for approving institutions with marginal characteristics. They typically identified three types of mitigating factors that supported institutions’ overall viability: (1) the quality of management and business practices; (2) the sufficiency of capital and liquidity; and (3) performance trends, including asset quality. The most frequently cited attributes related to management quality and capital sufficiency. High-quality management and business practices. In evaluating marginal applicants, regulators frequently considered the experience and competency of the applicants’ senior management team. Officials from one bank regulator said that they might be less skeptical of an applicant’s prospects if they believed it had high-quality management. For example, they used their knowledge of institutions and the quality of their management to mitigate economic concerns for banks in the geographic areas most severely affected by the housing market decline. Commonly identified strengths included the willingness and ability of management to respond quickly to problems and concerns that regulators identified, such as poor asset quality or insufficient capital levels. 
The evaluations of several marginal applicants described management actions to aggressively address asset quality problems as an indication of an institution’s ability to resolve its weaknesses. Regulators also had a positive view of firms whose boards of directors implemented management changes such as replacing key executives or hiring more experienced staff in areas such as credit administration. Finally, regulators evaluated the quality of risk management and lending practices in determining management strength. Capital and liquidity. Regulators often reviewed the applicant’s capital and liquidity when evaluating whether an institution’s weaknesses might affect its viability. In particular, regulators and Treasury considered the sufficiency of capital to absorb losses from bad assets and the ability to raise private capital. As instructed by Treasury guidance, regulators evaluated an institution’s capital levels prior to the addition of any CPP investment. Although an institution might have high levels of nonperforming loans or other problem assets, regulators’ concerns about viability might be eased if it also had a substantial amount of capital available to offset related losses. Likewise, capital from private sources could shore up an institution’s capital buffers and provide a signal to the market that it could access similar sources if necessary. When evaluating the sufficiency of a marginal applicant’s capital, regulators also assessed the amount of capital relative to the firm’s risk profile, the quality of the capital, and the firm’s dependence on volatile funding sources. Institutions with a riskier business model that included, for instance, extending high-risk loans or investing in high-risk assets generally would require higher amounts of capital as reserves against losses. Conversely, an institution with a less risky strategy or asset base might need somewhat less capital to be considered viable. 
Regulators reviewed the quality of a firm’s capital because some forms of capital, such as common shareholders’ equity, can absorb losses more easily than other types, such as subordinated debt or preferred shares, which may have restrictions or limits on their ability to absorb losses. Finally, regulators considered the nature of a firm’s funding sources. They viewed firms that financed their lending and other operations with stable funding sources, such as core deposit accounts or long-term debt, as less risky than firms that obtained financing through brokered deposits or wholesale funding, which could be more costly or might need to be replaced more frequently. Performance trends. Regulators also examined recent trends in performance when evaluating marginal applicants. For example, regulators considered strong or improving trends in asset quality, earnings, and capital levels, among others, as potentially favorable indicators of viability. These trends included reductions in nonperforming and classified assets, consistent positive earnings, reductions in commercial real estate concentrations, and higher net interest margins and return on assets. In some cases, regulators identified improvements in banks’ performance through preliminary examination ratings. Officials from one bank regulator stated that the agency refrained from making recommendations until it had recent and complete examination data. For example, if an examination was scheduled for an applicant that had raised regulatory concerns or questions, the agency would wait for the updated results before completing its review and making a recommendation to Treasury. Regulators and Treasury raised specific questions about the viability of a small number of institutions that ultimately were approved and received their CPP investments between December 19, 2008, and March 27, 2009. 
Most of the questions about viability involved poor asset quality, such as nonperforming loans or bad investments, and lending that was highly concentrated in specific product types, such as commercial real estate (see table 2). For these institutions, various mitigating factors supported final approval. For example, regulators and Treasury identified the addition of private capital, strong capital ratios, diversification of lending portfolios, and updated examination results as mitigating factors in approving the institutions. One of these institutions had weaker characteristics than the others, and regulators and Treasury appeared to have more significant concerns about its viability. Ultimately, regulators and the CPP Council recommended approval of this institution based, in part, on criteria in Section 103 of EESA, which requires Treasury to consider providing assistance to financial institutions having certain attributes, such as serving low- and moderate-income populations and having assets of less than $1 billion. Through July 2010, 4 CPP institutions had failed, but an increasing number of CPP firms have missed their scheduled dividend or interest payments, requested to have their investments restructured by Treasury, or appeared on FDIC’s list of problem banks. First, the number of institutions missing the dividend or interest payments due on their CPP investments has increased steadily, rising from 8 in February 2009 to 123 in August 2010, or 20 percent of existing CPP participants. Between February 2009 and August 2010, 144 institutions did not pay at least one dividend or interest payment by the end of the reporting period in which it was due, for a total of 413 missed payments. As of August 31, 2010, 79 institutions had missed three or more payments and 24 had missed five or more. 
Through August 31, 2010, the total amount of missed dividend and interest payments was $235 million, although some institutions made their payments after the scheduled payment date. Institutions are required to pay dividends only if they declare dividends, although unpaid cumulative dividends accrue and the institution must pay the accrued dividends before making dividend payments to other types of shareholders in the future, such as holders of common stock. Federal and state bank regulators also may prevent their supervised institutions from paying dividends to preserve their capital and promote their safety and soundness. According to the standard terms of CPP, after participants have missed six dividend payments—consecutive or not—Treasury can exercise its right to appoint two members to the board of directors for that institution. In May 2010, the first CPP institution missed six dividend payments, but as of August 2010, Treasury had not exercised its right to appoint members to its board of directors. An additional seven institutions missed their sixth dividend payment in August 2010. Treasury officials told us that they are developing a process for establishing a pool of potential directors that Treasury could appoint to the boards of institutions that missed at least six dividend payments. They added that these potential directors would not be Treasury employees and would be appointed to represent the interests of all shareholders, not just Treasury. Treasury officials expect that any appointments will focus on banks with CPP investments of $25 million or greater, but Treasury has not ruled out making appointments for institutions with smaller CPP investments. We will continue to monitor and report on Treasury’s progress in making these appointments in future reports. 
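The dividend terms just described—accrual of unpaid cumulative dividends and Treasury's board-appointment right after six missed payments, consecutive or not—can be sketched as a small model. The class and attribute names are our own, not Treasury's, and the sketch simplifies by ignoring declared-but-unpaid distinctions:

```python
# A minimal sketch of the CPP dividend terms described above: missed
# dividends accrue, and once a participant has missed six payments
# (consecutive or not), Treasury may appoint two board members.

class CPPInvestment:
    BOARD_APPOINTMENT_TRIGGER = 6  # missed payments before the right attaches
    DIRECTORS_APPOINTABLE = 2      # members Treasury may appoint

    def __init__(self, quarterly_dividend):
        self.quarterly_dividend = quarterly_dividend
        self.missed_payments = 0
        # Accrued dividends must be paid before dividends may go to
        # other shareholders, such as common stockholders.
        self.accrued_dividends = 0.0

    def record_missed_payment(self):
        self.missed_payments += 1
        self.accrued_dividends += self.quarterly_dividend

    @property
    def treasury_may_appoint_directors(self):
        # The misses need not be consecutive, so a simple count suffices.
        return self.missed_payments >= self.BOARD_APPOINTMENT_TRIGGER
```

For example, an institution that missed five hypothetical $125,000 quarterly payments would owe $625,000 in accrued dividends, and Treasury's appointment right would attach on the sixth miss.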
Although none of the 4 institutions that had failed as of July 31, 2010, were identified as marginal cases, 39 percent of the 66 approved institutions with marginal characteristics have missed at least one CPP dividend payment, compared with 20 percent of CPP participants overall. Through August 2010, 26 of the 144 institutions that had missed at least one dividend payment were institutions identified as marginal. Of these 26 marginal approvals, 20 have missed at least two payments, and 14 have missed at least four. Several of the marginal approvals also have received formal enforcement actions since participating in CPP. As of April 2010, regulators had filed formal actions against nine of the marginal approvals, including four cease-and-desist orders and four written agreements. Seven of these institutions also missed at least one dividend payment. However, none of the approvals identified as marginal had filed for bankruptcy or been placed in FDIC receivership as of July 31, 2010. Second, since June 2009, at least 16 institutions have formally requested that Treasury restructure their CPP investments, and most of the institutions have made their requests in recent months. Specifically, as of July 2010, 9 of the 11 requests received in 2010 had been submitted since April. Treasury officials said that institutions have pursued a restructuring primarily to improve the quality of their capital and attract additional capital from other investors. Treasury has completed six of the requested restructurings and entered into agreements with two additional institutions that made requests. According to officials, Treasury considers multiple factors in determining whether to restructure a CPP investment. These factors include the effect of the proposed capital restructuring on the institution’s Tier 1 and common equity capital and the overall economic impact on the U.S. government’s investment. 
The terms of the restructuring agreements most frequently involve Treasury exchanging its CPP preferred shares for either mandatory convertible preferred shares—which automatically convert to common shares if certain conditions such as the completion of a capital raising plan are met—or trust preferred securities—which are issued by a separate legal entity established by the CPP institution. Finally, the number of CPP institutions on FDIC’s list of problem banks has increased. As of December 31, 2009, 47 CPP firms were on the problem list. This number had grown to 71 firms by March 31, 2010, and to 78 by June 30, 2010. The FDIC tracks banks that it designates as problem institutions based on their composite examination ratings. Institutions designated as problem banks have financial, operational, or managerial weaknesses that threaten their continued viability and include firms with either a 4 or 5 composite rating. Reviews of regulators’ approval recommendations helped ensure consistent evaluations and mitigate risk from Treasury’s limited guidance for assessing applicants’ viability. Reviews of regulators’ recommendations to fund institutions are an important part of CPP’s internal control activities aimed at providing reasonable assurance that the program is performing as intended and accomplishing its goals. The process that Treasury and regulators implemented established centralized control mechanisms to help ensure consistency in the evaluations of approved applicants. For example, regulators established their own processes for evaluating applicants, but they generally had similar structures, including initial contact and review by regional offices followed by additional centralized review at the headquarters office for approved institutions. 
FDIC, OTS, and the Federal Reserve conducted initial evaluations and prepared the case decision memos at regional offices (or Reserve Banks in the case of the Federal Reserve), while the regulators’ headquarters (or Board of Governors) performed secondary reviews and verification. At OCC, district offices did the initial analysis of applicants and provided a recommendation to headquarters, which prepared the case decision memo using input from the district. All of the regulators also used review panels or officials at headquarters to review the analyses and recommendations before submission to the CPP Council or Treasury. Applicants recommended for approval by regulators also received further evaluation at the CPP Council or Treasury. Regulators sent to the CPP Council applications that they had approved but that had certain characteristics identified by Treasury as warranting further review by the council. These characteristics included indications of relative weakness, such as unsatisfactory examination ratings and performance ratios. At the council, representatives from all four federal bank regulators discussed the viability of applicants and voted on recommending them to Treasury for approval. As Treasury officials explained, the CPP Council was the deliberative forum for addressing concerns about marginal applicants whose eligibility for CPP was unclear. The council’s charter describes its purpose as acting as an advisory body to Treasury for ensuring that CPP guidelines are applied effectively and consistently across bank regulators and applicants. By requiring the regulators to reach consensus when recommending applicants whose approval was not straightforward, the CPP Council helped ensure that final decisions on these applicants were informed by multiple bank regulators and generally promoted consistency in decision making. 
After regulators or the CPP Council submitted a recommendation to Treasury, the applicant received a final round of review by Treasury’s CPP analysts and the Investment Committee. CPP analysts conducted their own reviews of applicants and the case files forwarded from the regulators, including the case decision memos. They collected additional information for their reviews from regulators’ data systems and publicly available sources and also gathered information from regulators to clarify the analysis in the case files. According to Treasury officials, the CPP analysts were experienced bank examiners serving on detail from each of the bank regulators except OCC. Treasury officials explained that CPP analysts did not make decisions about preliminary approvals or preliminary disapprovals. Only the Investment Committee made those decisions. In the final review stage, the Investment Committee evaluated all of the applicants forwarded by regulators or the CPP Council. On the basis of its review of the regulators’ recommendations and analysis and additional information collected by Treasury CPP analysts, the Investment Committee recommended preliminary approval or denial of applicants, subject to the final decision of the Assistant Secretary for Financial Stability. By reviewing and issuing a preliminary decision on all forwarded applicants, the Investment Committee represented another important control, much like the CPP Council. Unlike the CPP Council, however, the Investment Committee deliberated on all applicants referred by regulators rather than just those meeting certain marginal criteria. The reviews by the CPP Council, analysts at OFS, and the Investment Committee were important steps to limit the risk of inconsistent evaluations by different regulators. This risk stemmed from the limited guidance that Treasury provided to regulators concerning the application review process. 
Specifically, the formal written guidance that Treasury initially provided to regulators consisted of broad, high-level guidance, which was supplemented with other informal guidance to address specific concerns. The written guidance provided by Treasury established the institution’s strength and overall viability as the baseline criteria for the eligibility recommendation. Regulators said that while the guidance was useful in providing a broad framework or starting point for their reviews, they could not determine an applicant’s viability using Treasury’s written guidance alone. Officials from several regulators said that they also relied on regulatory experience and judgment when evaluating CPP applicants and making recommendations to Treasury. Treasury officials told us that they believed they were not in a position to provide more specific guidance to regulators on how to evaluate the viability of the institutions they oversaw. Treasury officials further explained that with many different kinds of institutions and unique considerations, regulators needed to make viability decisions on an individual basis. A 2009 audit by the Federal Reserve’s Inspector General (Fed IG) assessing the Federal Reserve’s process and controls for reviewing CPP applications similarly found that Treasury provided limited guidance in the early stages of the program regarding how to determine applicants’ viability. As a result, the Federal Reserve and other regulators developed their own procedures for analyzing CPP applications. The report also found that formal, detailed, and documented procedures would have provided the Federal Reserve with additional assurance that CPP applications would be analyzed consistently and completely. However, the multiple layers of review involving the regulators, the CPP Council, and Treasury staff helped compensate for the risk of inconsistent evaluation of applicants that received recommendations for CPP investments. 
The Fed IG recommended that the Federal Reserve incorporate lessons learned from the CPP application review process into its process for reviewing repurchase requests. The Federal Reserve generally agreed with the report’s findings and recommendations. As Treasury fully implemented its CPP process, it and the regulators compiled documentation of the analysis supporting their decisions to approve program applicants. For example, regulators consistently used a case decision memo to provide Treasury with standard documentation of their review and recommendations of CPP applicants. This document contained basic descriptive and evaluative information on all applicants forwarded by regulators, including identification numbers, examination and compliance ratings, recent and post-investment performance ratios, and a summary of the primary regulator’s evaluation and recommendation. Although the case decision memo contained standard types of information, the amount and detail of the information that regulators included in the form evolved over time. According to regulators and Treasury, they engaged in an iterative process whereby regulators included additional information after receiving feedback from Treasury on what they should describe about their assessment of an applicant’s viability. For example, regulators said that often Treasury wanted more detailed explanations for more difficult viability decisions. According to bank regulatory officials, other changes included additional discussion of specific factors relevant to the viability determination, such as information on identified weaknesses and enforcement actions, analysis of external factors such as economic and geographic influences, and consideration of nonbank parts of holding companies. Treasury officials explained that as CPP staff learned about the types of information the Investment Committee wanted to see, they communicated that information to the regulators for inclusion in case decision memos. 
Our review of CPP case files indicated that some case decision memos were incomplete and missing important information, but typically only for applicants approved early in the program. For instance, several case decision memos contained only one or two general statements supporting viability, largely for the initial CPP firms. Eventually, the case decision memos included several paragraphs, and some contained multiple pages, with detailed descriptions of the applicant’s condition and viability assessment. Most of the cases in which the regulator did not explain its support for an applicant’s viability occurred in the first month of the program. Some case decision memos lacked other important information, although these memos also tended to be from early in the program. For example, multiple case decision memos were missing either an overall examination rating, all of the component examination ratings, or a performance ratio related to capital levels. Most or all of those applicants were approved prior to December 2008. Further, 104 of 567 case files we reviewed lacked examination rating dates, and almost all of these firms were approved before the end of December 2008. Missing CRA dates, which occurred in 214 cases, exhibited a similar pattern. For applications that regulators sent to Treasury with an approval recommendation, Treasury staff used a “team analysis” form to document their review before submitting the applications to the Investment Committee for its consideration. According to Treasury officials, the team analysis evolved over time as CPP staff became more experienced and different examiners made their own modifications to the form. For example, as the CPP team grew in size, additional fields were added to document multiple levels of review by other examiners. As with the case decision memos, the consistency of information in the team analysis improved with time.
For instance, team analysis documents did not include calculations of allowable investment amounts for almost 60 files we reviewed that Treasury had approved by the end of December 2008. Finally, a small number of case files did not contain an award letter, but all of those approvals had also occurred before the end of December 2008. Treasury and regulators compiled meeting minutes for the CPP Council and Investment Committee, although they did not fully document some early Investment Committee meetings. The minutes described discussions of policy and guidance related to TARP and CPP and also the review and approval decisions for individual applicants. However, records do not exist for four meetings of the Investment Committee that occurred between October 23, 2008, and November 12, 2008. According to Treasury, no minutes exist for those meetings. We did not find any missing meeting minutes for the CPP Council, although at the early meetings, regulators did not collect the initials of voting members to document their recommendations to approve or disapprove applicants they reviewed. Within several weeks, however, regulators began using the CPP Council review decision sheets to document council members’ votes in addition to the meeting minutes. Although the multiple layers of review for approved institutions enhanced the consistency of the decision process, applicants that withdrew from consideration in response to a request from their regulator received no review by Treasury or other regulators. To avoid a formal denial, regulators recommended that applicants withdraw when they were unable to recommend approval or believed that Treasury was unlikely to approve the institution. Some regulators said that they also encouraged institutions not to formally submit applications if approval appeared unlikely.
Applicants could insist that the regulator forward their application to the CPP Council and ultimately to the Investment Committee for further consideration even if the regulator had recommended withdrawal. However, Treasury officials said that they did not approve any applicants that received a disapproval recommendation from their regulator or the CPP Council. Regulators also could recommend that applicants withdraw after the CPP Council or Investment Committee decided not to recommend approval of their application. One regulator stated that all the applicants it suggested withdraw did so rather than receive a formal denial. Treasury officials also said that institutions receiving a withdrawal recommendation generally withdrew and that no formal denials were issued. Almost half of all applicants withdrew from CPP consideration before regulators forwarded their applications to the CPP Council or Treasury. Regulators had recommended withdrawal in about half of these cases where information was available. Over the life of the program, regulators received almost 3,000 CPP applications, about half of which they sent to the CPP Council or directly to Treasury (see table 3). The remaining applicants withdrew either voluntarily or after receiving a recommendation to withdraw from their regulator. Three of the regulators—OCC, OTS, and the Federal Reserve—indicated that about half of their combined withdrawals were the result of their recommendations. FDIC, which was the primary regulator for most of the applicants, did not collect information on the reasons for applicants’ withdrawals. According to Treasury officials, those applicants that chose to withdraw voluntarily did so for various reasons, including uncertainty over future program requirements and increased confidence in the financial condition of banks. 
In addition to institutions that withdrew after applying for CPP, Treasury officials and officials from a regulator indicated that some firms decided not to formally apply after discussing their potential application with their regulator. However, regulators did not collect information on the number of firms deciding not to apply after having these discussions. Although applications recommended for approval received multiple reviews and were coordinated among regulators and Treasury, each regulator made its own decision on withdrawal recommendations. Most regulators conducted initial reviews of applicants at their regional offices, and staff at these offices had independent authority to recommend withdrawal for certain cases. Regulatory officials said that regional staff (including examiners and more senior officials) made initial assessments of applicants’ viability using Treasury guidelines and would recommend withdrawal for weak firms with the lowest examination ratings that were unlikely to be approved. Applicants that received withdrawal recommendations might have had weak characteristics relative to those of other firms and might have received a denial from Treasury. But by following regulators’ suggestions to withdraw before referral to the CPP Council or Treasury, or not to apply at all, these applicants never received the centralized reviews that could have mitigated any inconsistencies in their initial evaluations. Further, while regulators had panels or senior officials at their headquarters offices providing central review of approved applicants, most of the regulators allowed their regional offices to recommend withdrawal for weaker applicants or encourage such applicants not to apply, thereby limiting the benefit of that control mechanism. Allowing regional offices to recommend withdrawal without any centralized review may increase the risk of inconsistency within as well as across regulators.
In its report on the processing of CPP applications, the FDIC Office of Inspector General found that one of FDIC’s regional offices suggested that three institutions that were well capitalized and technically met Treasury guidelines withdraw from consideration. Regional FDIC management cited poor bank management as the primary concern in recommending that the institutions withdraw. The report concluded that the use of discretion by regional offices in recommending that applicants withdraw increased the risk of inconsistency. The report made two recommendations to enhance controls over the process for evaluating applications: (1) forwarding applications recommended for approval that do not meet one or more of Treasury’s criteria to the CPP Council for additional review and (2) requiring headquarters review of institutions recommended for withdrawal when the institutions technically meet Treasury’s criteria. In commenting on the report, FDIC concurred with the recommendations. Treasury did not collect information on applicants that had received withdrawal recommendations from their regulators or on the reasons for these decisions. According to Treasury officials, Treasury did not receive, request, or review information on applicants that regulators recommended to withdraw and thus could not monitor the types of institutions that regulators were restricting from the program or the reasons for their decisions. The officials said that Treasury did not collect or review information on withdrawal recommendations in part to minimize the potential for external parties to influence the decision-making process. However, such considerations did not prevent Treasury from reviewing information on applicants that regulators recommended for approval, and concerns about external influence could also be addressed directly through additional control procedures rather than by limiting the ability to collect information on withdrawal recommendations.
The lack of additional review outside of the individual regulator or oversight of withdrawal requests by Treasury presents the risk that applicants may not have been evaluated in a consistent fashion across regulators. As the agency responsible for implementing CPP, Treasury would have benefited as much from understanding the reasons that regulators recommended applicants withdraw from the program as from understanding the reasons regulators recommended approval. Collecting and reviewing information on withdrawal requests would have served as an important control mechanism and allowed Treasury to determine whether leaving certain applicants out of CPP was consistent with program goals. It also would have allowed Treasury to determine whether similar applicants were evaluated consistently across different regulators in terms of their decisions to recommend withdrawal. Treasury has indicated that it may use the CPP model for new programs to stimulate the economy and improve conditions in financial markets, and unless corrective actions are taken, such programs may share the same increased risk of similar participants not being treated consistently. Specifically, in February 2010, Treasury announced terms for a new TARP program—the Community Development Capital Initiative (CDCI)—to invest lower-cost capital in Community Development Financial Institutions that lend to small businesses. According to Treasury and regulatory agency officials, Treasury modeled its implementation of the CDCI program after the process it used for CPP, with federal bank regulators—in this case including the National Credit Union Administration (NCUA)—conducting the initial reviews and making recommendations. The CDCI program also uses a council of regulators to review marginal approvals, and an Investment Committee at Treasury reviews all applicants recommended by regulators for approval.
As in the case of CPP, control mechanisms exist for reviewing approved applicants, but no equivalent reviews are done for applicants that receive withdrawal recommendations. Thus, the CDCI structure could raise similar concerns about a lack of control mechanisms to mitigate the risk of inconsistency in evaluations by different regulators. The deadline for financial institutions to apply to participate in the CDCI was April 30, 2010, and all disbursements or exchanges of CPP securities for CDCI securities must be completed by September 30, 2010. The Small Business Jobs Act of 2010, enacted on September 27, 2010, established a new Treasury program—the Small Business Lending Fund (SBLF)—to invest up to $30 billion in small institutions to increase small business lending. Treasury may choose to model the new program’s implementation on the CPP process, as it did with the CDCI. Treasury is required to consult with the bank regulators to determine whether an institution may receive a capital investment, and Treasury officials have indicated that they would likely rely on regulators to determine applicants’ eligibility. Unless Treasury also takes steps to coordinate and monitor withdrawal requests by regulators, the disparity that existed in CPP between the control mechanisms for approved applicants and those receiving withdrawal recommendations may persist in this new program, potentially resulting in similar applicants being treated differently.

Treasury relies on decisions from federal bank regulators concerning whether to allow CPP firms to repay their investments, but as with withdrawal recommendations, it does not monitor or collect information on regulators’ decisions. The CPP institution submits a repayment request to its primary federal regulator and Treasury (see fig. 5).
Bank regulatory officials explained that their agencies use existing supervisory procedures generally applicable to capital reductions as a basis for reviewing CPP repurchase requests and that they approach the decision from the perspective of achieving regulatory rather than CPP goals. Following their review, regulators provide a brief e-mail notification to Treasury indicating whether they object or do not object to allowing an institution to repay its CPP investment. Treasury, in turn, communicates the regulators’ decisions to the CPP firms. As of August 2010, 109 institutions had formally requested that they be allowed to repay their CPP investments, and regulators had approved over 80 percent of the requests (see table 4). According to Treasury officials, there have been no instances where Treasury has raised concerns about a regulator’s decision. Officials at the Federal Reserve—which is responsible for reviewing most CPP repayment requests because requests for bank holding companies go to the holding company regulator—explained that they had not denied any requests but had asked institutions to wait or to raise additional capital. In these cases, institutions typically had experienced significant deterioration since the CPP investment, raising concerns about the adequacy of their capital levels. Under the original terms of CPP, Treasury prohibited institutions from repaying their funds within 3 years unless the firm had completed a qualified equity offering to replace a minimum amount of the capital. However, the American Recovery and Reinvestment Act of 2009 (ARRA) included provisions modifying the terms of CPP repayments. These provisions require that Treasury allow any institution to repay its CPP investment subject only to consultation with the appropriate federal bank regulator without considering whether the institution has replaced such funds from any other source or applying any waiting period.
Treasury officials indicated that, as a result of these restrictions, they did not provide guidance or criteria to regulators. The officials explained that even before the ARRA provisions limited Treasury’s role, the standard CPP contract terms allowed institutions to repay the funds at their discretion— subject to regulatory approval—as long as they completed a qualified equity offering or the 3-year time frame had passed. The officials said that the contract terms themselves helped ensure that CPP goals were achieved. While the decision to allow repayment ultimately lies with the bank regulators, Treasury is not statutorily prohibited from reviewing their decision-making process and collecting information or providing feedback about the regulators’ decisions. The two regulators responsible for most repayment requests prepare a case decision memo to document their analysis that is similar to the memo they used to document their evaluations of CPP applicants, but Treasury and agency officials said that Treasury does not request or review the memo or other analyses supporting regulators’ decisions. One regulator indicated that it would provide Treasury with a brief explanation of the basis for its decisions to deny repayment requests and a brief discussion of the supervisory concerns raised by the proposed repayment. But Treasury officials stated that they did not review any information on the basis for regulators’ decisions to approve or deny repayment requests. Without collecting or monitoring such information, Treasury has no basis for considering whether decisions about similar institutions are being made consistently and thus whether CPP firms are being treated equitably. Furthermore, absent information on why regulators made repayment decisions, Treasury cannot provide feedback to regulators on the consistency of regulators’ decision making for similar institutions as part of its consultation role. 
Regulators have independently developed similar guidelines for evaluating repurchase requests and also established processes for coordinating decisions that involved multiple regulators, and Treasury officials stated that they did not provide input to these guidelines or processes. Regulators said that, in general, they considered the same types of factors when evaluating repayment requests that they considered when reviewing CPP applications. According to the officials, regulators follow existing regulatory requirements for capital reductions—including the repayment of CPP funds—that apply to all of their supervised institutions. In addition to following existing supervisory procedures, officials from the different banking agencies indicated that they also considered a broad set of similar factors, including the following:

- the institution’s continued viability without CPP funds;
- the adequacy of the institution’s capital and ability to maintain appropriate capital levels over the subsequent 1 to 2 years, even assuming worsening economic conditions;
- the level and composition of capital and liquidity;
- earnings and asset quality; and
- any major changes in financial condition or viability that had occurred since the institution received CPP funds.

Although regulators said that they considered similar factors in their evaluations, without reviewing any information or analysis supporting regulators’ recommendations, Treasury cannot be sure that regulators are using these guidelines consistently for all repayment requests. In addition to setting out guidelines for standard repayment requests, the Federal Reserve established a supplemental process to evaluate repayment requests by the 19 largest bank holding companies that participated in the Supervisory Capital Assessment Program (SCAP).
As we reported in our June 2009 review of Treasury’s implementation of TARP, the Federal Reserve required any SCAP institution seeking to repay CPP capital to demonstrate that it could access the long-term debt markets without reliance on debt guarantees by FDIC and public equity markets in addition to other factors. As of September 16, 2010, four bank holding companies that participated in SCAP had not repurchased their CPP investment and one had not repaid funds from TARP’s Automotive Industry Financing Program. Bank regulators said that they also shared their repayment process documents with each other to enhance the consistency of their evaluations and recommendations. For example, the Federal Reserve designed a repayment case decision memo that documents the review of repayment requests and the factors considered in making the decision and shared it with other regulators to promote consistency in their reviews. Officials from OTS explained that they used the Federal Reserve’s repurchase case decision memo as the framework for their document while adding certain elements specific to thrifts such as confirmation that FDIC concurrence was received for thrift holding companies with state bank subsidiaries regulated by FDIC. Bank regulatory officials also stated that bank regulators discussed the repayment process during their weekly conference calls on CPP-related topics. OCC also prepares a memo to document its review of repurchase requests that differs from the form used by the Federal Reserve and OTS; however, it contains similar elements such as an explanation of the analysis and the basis for the decision. Finally, FDIC officials said that they followed existing procedures for capital retirement applications from FDIC-supervised institutions that included safety and soundness considerations. 
Bank regulators also established processes for coordinating repayment decisions for CPP firms with a holding company and subsidiary bank supervised by different regulators. For example, Federal Reserve officials said that if a holding company it supervised with a subsidiary bank under another regulator requested to repay CPP funds, the agency would consult with the subsidiary’s regulator before making a final decision. The officials stated that if the regulator of the subsidiary bank objected to the Federal Reserve’s preliminary decision, the regulators would try to reach a consensus. However, as regulator of the holding company that received the CPP investment, the Federal Reserve has the ultimate responsibility for making the decision as it is considered the primary federal regulator in such cases. According to Federal Reserve officials, when OTS is the primary regulator of a subsidiary thrift, it provides a repayment case decision memo to the Federal Reserve for it to consider as it evaluates the repayment request. OCC also provides the Federal Reserve with its analysis of any subsidiary bank for which it is the primary regulator, and FDIC identifies certain individuals who provide their recommendation and are available to discuss the decision. OTS performs a similar coordination role for CPP repayment requests that involve thrift holding companies with nonthrift financial subsidiaries. However, if Treasury does not collect information on or monitor the processes regulators use to make their repayment decisions, Treasury cannot provide any feedback to regulators on the extent to which they are coordinating their decisions.

Approved CPP applicants generally had similar examination ratings and other strength characteristics that exceeded guidelines. However, a smaller group of firms had weaker characteristics and were approved after consideration of mitigating factors by regulators and Treasury.
The ability to approve institutions after consideration of mitigating factors illustrates the importance of including controls in the review and selection process to provide reasonable assurance of the achievement of program goals and consistent decision making. While Treasury established such controls for applicants that regulators recommended for approval, Treasury’s process was inconsistent in the control mechanisms that existed for applicants that regulators recommended to withdraw from program consideration. These institutions did not benefit from the multiple levels of review that Treasury and regulators applied to approved applicants. For example, regulators could decide independently which applicants they would recommend to withdraw and may have considered mitigating factors differently. Treasury did not collect information on these firms or the reasons for regulators’ decisions. Without mechanisms such as those that exist for approved applicants to control for the risk of inconsistent evaluations across different regulators, Treasury cannot have reasonable assurance that all similar applicants were treated consistently or that some potentially eligible firms did not end up withdrawing after following the advice of their regulator. Treasury officials explained their desire to conduct adequate due diligence on all applicants recommended for approval, but as Treasury is the agency responsible for implementing CPP, understanding the reasons that regulators recommended applicants withdraw would have been equally beneficial for Treasury. Collecting and reviewing information on withdrawal requests would allow Treasury to determine whether applicants that were left out of CPP were evaluated consistently across different regulators and conformed to Treasury’s goals for the program. 
Although Treasury is no longer making investments in financial institutions through CPP, it may continue to use the process as a model for similar programs as it has for the CDCI program. One such program is the SBLF, which Congress authorized in September 2010. SBLF contains elements similar to those of CPP and requires Treasury to administer the program with bank regulators. Unless Treasury makes changes to the CPP model to include monitoring and reviews of withdrawal recommendations, these new programs may share the same increased risk of similar participants not being treated consistently that existed in CPP. As with the approval process, agencies are expected to establish control mechanisms to provide reasonable assurance that program goals are being achieved. Treasury has not established mechanisms to monitor, review, or coordinate regulators’ decisions on repayment requests because, in its view, it lacks the authority to do so and is limited to carrying out regulators’ decisions regarding the institution making the request. However, Treasury is not precluded from providing feedback to help ensure that regulators are treating similar institutions consistently when considering their repayment requests. Although regulators said that they consider similar factors when evaluating CPP firms’ repayment requests, without collecting information on how and why regulators made their decisions, Treasury cannot verify the degree to which regulators’ decisions on requests to exit CPP actually were based on such factors.

If Treasury administers programs containing elements similar to those of CPP, such as the SBLF, we recommend that Treasury apply lessons learned from the implementation of CPP and enhance procedural controls for addressing the risk of inconsistency in regulators’ decisions on withdrawals.
Specifically, we recommend that the Secretary of the Treasury direct the program office responsible for implementing SBLF to establish a process for collecting information from bank regulators on all applicants that withdraw from consideration in response to a regulator’s recommendation, including the reasons behind the recommendation. We also recommend that the program office evaluate the information to identify trends or patterns that may indicate whether similar applicants were treated inconsistently across different regulators and take action, if necessary, to help ensure a more consistent treatment. As part of its consultation with regulators on their decisions to allow institutions to repay their CPP investments to Treasury, and to improve monitoring of these decisions, we recommend that the Secretary of the Treasury direct OFS to periodically collect and review certain information from the bank regulators on the analysis and conclusions supporting their decisions on CPP repayment requests and provide feedback for the regulators’ consideration on the extent to which regulators are evaluating similar institutions consistently. We provided a full draft of this report to Treasury for its review and comment. We received written comments from the Assistant Secretary for Financial Stability. These comments are summarized below and reprinted in appendix III. In addition, we received technical comments on this draft from the Federal Reserve, FDIC, OCC, and Treasury, which we incorporated as appropriate. In its written comments, Treasury agreed to consider our recommendation to review information on applicants that regulators recommend to withdraw from program consideration if Treasury implements a similar program in the future. Treasury stated that the system used to evaluate CPP applicants balanced the objectives of ensuring consistent treatment for all applicants while also utilizing the independent judgment of federal banking regulators. 
Treasury suggested that ensuring regulators hold regular discussions about their standards could be an additional action to help ensure consistency in regulators’ reviews. As we note in the report, Treasury implemented multiple layers of review for approved institutions to enhance the consistency of the decision process. However, applicants that withdrew from consideration in response to a request from their regulator received no review by Treasury or other regulators. Although CPP is no longer making any new investments, the passage of the SBLF, which, according to Treasury officials, would also rely on regulators to determine applicants’ eligibility, presents an opportunity for Treasury to address this area of concern. We continue to believe that unless Treasury takes steps to monitor and provide feedback on regulators’ withdrawal requests, applicants that receive withdrawal recommendations under this new program may not be treated consistently and equitably. Treasury stated that our second recommendation—to review information on regulators’ decisions on repayment requests and provide feedback to regulators—also raises questions about how to balance the goals of consistency and respect for the independence of regulators. However, Treasury acknowledged the potential value of our recommendation and agreed to consider ways to address it in a manner consistent with these considerations. Specifically, Treasury noted that while it is prohibited from imposing standards for repayment as a result of statutory changes to its authority under EESA, it did help facilitate meetings among regulators to discuss when CPP participants would be allowed to repay their investments. Finally, Treasury explained that it does not receive confidential supervisory information about CPP participants on a regular basis, which could limit any information collection envisioned by our recommendation. 
However, as we noted in the report, the two regulators with responsibility for most CPP repayment requests document their analysis in a manner similar to what regulators provided to Treasury when recommending CPP applicants, but Treasury does not review this information.

We are sending copies of this report to the Congressional Oversight Panel, Financial Stability Oversight Board, Special Inspector General for TARP, interested congressional committees and members, Treasury, the federal banking regulators, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at williamso@gao.gov or (202) 512-8678. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

The objectives of our report were to (1) describe the characteristics of financial institutions that received funding under the Capital Purchase Program (CPP), and (2) assess how the Department of the Treasury (Treasury), with the assistance of federal bank regulators, implemented CPP. To describe the characteristics of financial institutions that received CPP funding, we reviewed and analyzed information from Treasury case files on all of the 567 institutions that received CPP investments through April 30, 2009. We gathered information from the case files using a data collection survey that recorded our responses in a database. Multiple analysts reviewed the collected information, and we performed data quality control checks to verify its accuracy. We used the database to analyze the characteristics of CPP applicants, including their supervisory examination ratings, financial performance ratios, and regulators’ assessments of their viability, among other things.
We spoke with Treasury and regulatory officials about their processes for evaluating applicants, in particular about actions they took to collect up-to-date information on firms’ financial condition. We also collected and analyzed information from the records of the CPP Council and Investment Committee meetings to understand how the committees evaluated and recommended approval of CPP applicants. Additionally, we collected limited updated information on all CPP institutions approved through December 31, 2009—for example, their location, primary federal regulator, ownership type, and CPP investment amount—from Treasury’s Office of Financial Stability (OFS) and from publicly available reports on OFS’s Web site to present characteristics for all approved institutions. To describe how Treasury and regulators assessed firms with weaker characteristics, we collected information on the reasons regulators approved these firms and the concerns regulators raised about their eligibility from case files and records of committee meetings. To describe enforcement actions that regulators took against these institutions, we reviewed publicly available documents on formal enforcement actions from federal bank regulators’ Web sites. We also collected information on CPP firms that missed their dividend or interest payments or restructured their CPP investments from OFS and publicly available reports on its Web site. Finally, we collected information from the Federal Deposit Insurance Corporation (FDIC) on the number of CPP firms added to its list of problem banks. To assess how Treasury implemented CPP with the assistance of federal bank regulators, we reviewed Treasury’s policies, procedures, and guidance related to CPP, including nonpublic documents and publicly available material from the OFS Web site. We met with OFS officials to discuss how they evaluated applications and repayment requests and coordinated with regulators to decide on these applications and requests. 
We interviewed officials from FDIC, the Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), and the Board of Governors of the Federal Reserve System (Federal Reserve) to obtain information on their processes for reviewing and providing recommendations on CPP applications and repayment requests. We also discussed the guidance and communication they received from Treasury and their methods of formulating their CPP procedures. Additionally, we collected and analyzed program documents from the bank regulators, including policies and procedures, guidance documents, and summaries of their evaluations of applications and repayment requests. We also gathered data from regulators on applicants that withdrew from CPP consideration—including the reason for withdrawing—and on the number of repayment requests and their outcomes. We reviewed relevant laws, such as the Emergency Economic Stabilization Act of 2008 and the American Recovery and Reinvestment Act of 2009, to determine the impact of statutory changes to Treasury’s authority. To assess how Treasury and regulators documented their decisions to approve CPP applicants, we analyzed information from case files and CPP Council and Investment Committee meeting minutes to identify how consistently Treasury and regulators included relevant records of their reviews and decision-making processes. We also discussed with Treasury and regulatory officials the key forms they used to document their decisions and the evolution of these forms over time. To assess Treasury programs that were modeled after CPP, we collected and reviewed publicly available documents from Treasury and interviewed Treasury officials to discuss the nature of these programs—including the Community Development Capital Initiative (CDCI) and Small Business Lending Fund (SBLF)—and plans for implementing them. 
Finally, we met with the Federal Reserve’s Office of Inspector General to learn about its work examining the Federal Reserve’s CPP process and reviewed its report and other reports by GAO, the Special Inspector General for the Troubled Asset Relief Program (SIGTARP), and the FDIC Office of Inspector General. This report is part of our coordinated work with SIGTARP and the inspectors general of the federal banking agencies to oversee TARP and CPP. The offices of the inspectors general of FDIC, Federal Reserve, and Treasury and SIGTARP have all completed work or have work under way reviewing CPP’s implementation at their respective agencies. In coordination with the other oversight agencies and offices and to avoid duplication, we primarily focused our audit work (including our review of agency case files) on the phases of the CPP process from the point at which the regulators transmitted their recommendations to Treasury. We conducted this performance audit from May 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In general, three factors lengthened the time frame for the Department of the Treasury and regulators to complete the evaluation and funding process for Capital Purchase Program applicants. First, smaller institutions had longer processing time frames than larger firms. The average number of days between a firm’s application date and the completion of the CPP investment increased steadily as firm size, measured by risk-weighted assets, decreased. 
The smallest 25 percent of firms we reviewed had an average processing time of 100 days, followed by 83 days for the next largest 25 percent of firms. The two largest quartiles of firms had average processing times of 72 days and 53 days, respectively. Also, it took longer to complete the investment for smaller firms, as the average time between preliminary approval and disbursement increased as the institution size decreased. Second, private institutions took longer for Treasury and regulators to process than public firms. The average and median processing time frames from application through disbursement of funds were about 6 weeks longer for private firms than for public firms. As with the trend for smaller institutions, private institutions had longer average time frames between preliminary approval and disbursement. Third, when Treasury returned an application to regulators for additional review, it took an average of about 2 weeks to receive a response from regulators. On average, Treasury preliminarily approved these applicants after an additional 3 days of review. Firms that applied earlier had shorter average processing times—from application to disbursement—than firms that applied in later months. The average time from application through disbursement was 70 days for firms that applied in October, 82 days for firms that applied in November, and 89 days for those that applied in December. Also, public firms tended to apply earlier than private firms, and larger firms tended to apply earlier than smaller firms. For example, 62 percent of firms that applied in October were public, while 93 percent of firms that applied in December were private—a trend that largely resulted from the later release of program term sheets for the privately held banks. Likewise, 61 percent of firms that applied in October were the largest firms and 84 percent of firms that applied in December were the smallest firms. 
Because larger firms and public firms had shorter average processing time frames than smaller and private firms, this difference may explain why firms that applied earlier had shorter processing times than those that applied later in the program. The overall process for most firms, from when they applied to when they received their CPP funds, took 2 1/2 months. There were many interim steps within this broad process that could shorten or lengthen the overall time frame. For example, in our June 2009 report on the status of Treasury’s implementation of the Troubled Asset Relief Program, we reported that the average number of processing days from application to submission to Treasury varied among the different regulators from 28 days to 57 days. Also, Treasury preliminarily approved most firms within 5 weeks of application. The Investment Committee approved most firms the same day it reviewed them; however, it generally took longer to approve firms with the lowest examination ratings, resulting in a longer average review time frame. As previously mentioned, firms that Treasury returned to regulators for additional review took longer to receive Treasury’s preliminary approval, and these firms tended to be those with lower examination ratings. Once Treasury preliminarily approved an applicant, it took an average of 33 days to complete the investment. As with the trends for the overall processing time frames, the final investment closing and disbursement took longer for smaller institutions and private institutions. Daniel Garcia-Diaz (Assistant Director), Kevin Averyt, William Bates, Richard Bulman, Emily Chalmers, William Chatlos, Rachel DeMarcus, M’Baye Diagne, Joe Hunter, Elizabeth Jimenez, Rob Lee, Matthew McDonald, Marc Molino, Bob Pollard, Steve Ruszczyk, and Maria Soriano made important contributions to this report.
Congress created the Troubled Asset Relief Program (TARP) to restore liquidity and stability in the financial system. The Department of the Treasury (Treasury), among other actions, established the Capital Purchase Program (CPP) as its primary initiative to accomplish these goals by making capital investments in eligible financial institutions. This report examines (1) the characteristics of financial institutions that received CPP funding and (2) how Treasury implemented CPP with the assistance of federal bank regulators. GAO analyzed data obtained from Treasury case files, reviewed program documents, and interviewed officials from Treasury and federal bank regulators.
The Convention is one of a number of cooperative efforts by the international community to improve nuclear safety worldwide and is meant to complement these other efforts. For example, as we previously reported, the United States and 20 other countries and international organizations contributed $1.9 billion to improve nuclear safety in countries operating Soviet-designed nuclear reactors. The United States alone has spent over $770 million since the Chernobyl accident on nuclear safety assistance to Russia, Ukraine, Kazakhstan, Armenia, and several other countries through DOE and NRC programs. According to an agency official, DOE’s nuclear safety assistance programs have focused on physical safety enhancements to Soviet-designed reactors, while NRC has worked to increase the capacity and stature of recipient countries’ regulatory bodies to ensure the continuing operational safety of such reactors. In addition, a separate fund was established to help stabilize the damaged reactor at Chernobyl by constructing a new containment structure. As we reported, the estimated cost of this effort was $1.2 billion as of 2007, of which the United States pledged $203 million. Since 1991 the EU has spent over $1.9 billion on international nuclear safety assistance. See appendix II for more information about U.S. and EU expenditures to promote international nuclear safety. These expenditures are not used to support the implementation of the Convention. Matters pertaining to U.S. financial support to the Convention are contained on page 28 of this report. In addition to the Convention, other multilateral organizations—the Nuclear Energy Agency (NEA), the Western European Nuclear Regulators’ Association (WENRA), the European Nuclear Safety Regulators Group (ENSREG), and the EU—are making efforts to advance the safety of civilian nuclear power. All member or observer countries of the NEA, WENRA, ENSREG, and the EU are also parties to the Convention. 
The NEA, for example, has created several specialized committees to facilitate exchanges of technical information and to organize joint research projects to improve national safety practices. WENRA works to develop common approaches to nuclear safety among the chief nuclear regulators in Europe. ENSREG, among other things, aims to maintain and continuously improve the safety of nuclear installations in the EU. In June 2009, the EU adopted a directive creating a framework for (1) maintaining and promoting the continuous improvement of nuclear safety and its regulation and (2) ensuring that EU member states provide a high level of nuclear safety to protect workers and the public against radiation from nuclear installations. This framework is based in part on IAEA safety documents and the obligations of the Convention. EU members are required to incorporate the directive into their national legislation by June 2011. Other conventions have been established to advance international nuclear safety and are administered by IAEA’s Department of Safety and Security. Two “emergency conventions” obligate parties to provide early notification of a nuclear accident and to render assistance in the event of such an accident or a radiological emergency, and two other conventions obligate parties to safely manage spent fuel and radioactive waste and to take effective action to physically protect nuclear material. Nearly all parties responding to our survey reported that the Convention has been very useful or somewhat useful in helping to strengthen nuclear safety both in their country and worldwide. In all, these parties operate 404—or more than 92 percent—of the world’s 437 operating civilian nuclear power reactors. In addition, we interviewed representatives from IAEA member states, nuclear regulatory organizations, and the EU (17 in all) who expressed similar views about the Convention. 
Survey respondents and parties we interviewed identified several Convention obligations as having helped strengthen the safety of civilian nuclear power programs. The obligations cited most frequently were (1) establishing an effective legislative and regulatory framework (laws and regulations) and a strong, effective, and independent nuclear regulatory body and (2) preparing a national report every 3 years that describes the measures the country has taken to achieve the Convention’s safety goals. In addition, some of the 17 parties we interviewed stated that the Convention has contributed to and promoted the independence and effectiveness of their country’s nuclear regulatory bodies. For example, an Austrian nuclear regulator told us he thought that this promotion of effective regulatory capacity is one of the Convention’s greatest contributions to international nuclear safety. Moreover, representatives of China and Pakistan told us that the Convention was influential in leading their countries to increase the independence and effectiveness of their nuclear regulators. NRC officials expressed a similar view, noting that parties to the Convention have taken many steps to develop more effective laws and regulations and increase the capacities and independence of their nuclear regulators. The requirement to prepare a national report describing the steps parties have taken to meet the Convention’s nuclear safety obligations also plays a large role in strengthening the safety of civilian nuclear power programs, according to survey respondents. Almost all survey respondents indicated that the presentation of national reports in country groups was a very or somewhat effective way for sharing best safety practices. Most survey respondents reported that preparing the national report has either greatly or somewhat improved opportunities to examine their country’s civilian nuclear power program. 
A number of parties we interviewed also said that this national report has been helpful in strengthening nuclear safety worldwide. NRC officials told us one effect of a national report is that nuclear regulators and plant operators are forced to think about even routine safety procedures and policies because the reports will be scrutinized by their peers. For example, as a result of questions raised by other parties on the national report prepared for the 2008 review meeting, the United States agreed to discuss with state governments and NRC licensees the benefits and costs of adopting stricter standards for protecting nuclear power plant workers and the public from exposure to radiation. In our survey, we also asked some additional questions about parties’ perceptions about how the peer review process affected the preparation of the 2008 reports. Specifically, among other things, we asked how likely parties thought reports were to include (1) comprehensive, detailed descriptions of measures taken to strengthen safety; (2) evidence that safety issues discussed in one review meeting were revisited in the next meeting and that the actions taken to address the issues were discussed in sufficient detail for parties to evaluate whether the safety concerns had been adequately addressed; and (3) sufficient technical detail to understand specific safety concerns. In each case, most survey respondents indicated that they thought reports were very or somewhat likely to include such information. We also asked how effectively the peer review process encouraged parties to provide detailed information in their 2008 national reports. Overall, most survey respondents indicated that the peer review process was very or somewhat likely to encourage parties to include detailed, comprehensive, and accurate information in their national reports. 
According to both survey respondents and parties we interviewed, the Convention has increased communication and encouraged the sharing of technical information to improve nuclear safety worldwide. There was wide agreement among the survey respondents that the Convention has improved communication among nuclear regulators; nuclear power plant operators; and other national organizations involved in the civilian nuclear power industry, such as, in the case of the United States, the Institute of Nuclear Power Operations (INPO). More than half of the respondents to our survey indicated the Convention had “greatly” improved communication about safety issues affecting civilian nuclear power reactors. Most respondents to our survey agreed that the Convention had improved opportunities for sharing technical solutions to improve safety, such as reactor design improvements or fire safety enhancements. Russian and Ukrainian officials we spoke to provided examples of how the Convention has led to the sharing of nuclear safety information. Following are some examples: Russian nuclear regulatory officials told us that the Convention has played a useful role in promoting technical solutions to problems shared by countries operating similar types of reactors. Specifically, Russia and Finland have been developing a system to improve communication between their plant operators based on discussions that began with contacts made at Convention review meetings. A Ukrainian official told us his country’s participation in the Convention has increased other countries’ awareness of the safety problems confronting Ukraine’s aging Soviet-designed nuclear reactors. He further noted that the Convention is one of many forums that Ukraine participates in that supports the strengthening of nuclear safety. According to most parties we surveyed and interviewed, maintaining the confidentiality of information obtained during the Convention’s meetings is critical to the peer review process. 
Most party representatives we spoke with agree that confidentiality should be preserved. For example, when asked if the public should be allowed to directly observe review meetings—and thereby gain direct access to a party’s national report and any concerns or questions raised about it by other parties—approximately two-thirds of survey respondents said the public probably or definitely should not be given such access. Some parties we interviewed told us that, as a result of the confidentiality of the peer review process, their country’s national reports have become more comprehensive. Three-quarters of survey respondents indicated that the quality of national reports prepared for review meetings had improved in the past 10 years. While the parties’ perceptions of the value of the Convention are generally very positive, some concerns were raised about the lack of information provided to the general public about the Convention’s proceedings, some countries’ lack of resources to fully participate in the review meetings, and the absence of performance metrics. In addition, parties emphasized that without the participation of all countries with nuclear power programs, the international community will have limited access to, and insight into, the civilian nuclear power programs of nonparticipating countries such as Iran. Notwithstanding the general agreement that preserving the confidentiality of the peer review process is important, most parties responding to our survey would like to see more public access to the results of review meetings. We have testified that, according to some experts familiar with international agreements that rely primarily on peer review, the public dissemination of information about parties’ progress in meeting the terms of the Convention can play a key role in influencing compliance with the Convention’s nuclear safety obligations. Currently, only summary information about the peer review meeting is released to the public. 
This summary provides a brief introduction containing background on the Convention, an overview of the review process, and a synopsis of what the parties agree were the most important points discussed at the meeting. For example, the public report on the fourth review meeting, which took place in 2008, briefly summarizes the parties’ discussions on many topics, including parties’ efforts to meet the challenges of maintaining adequate staffing and competence levels and ongoing concerns about the degree of independence of some parties’ regulatory bodies. Any further details about a party’s national report, or the questions and answers on the report, remain confidential unless the party voluntarily releases them. French officials have expressed an especially strong view regarding public access to information about the Convention’s proceedings. In July 2009, in written responses to our questions, French officials stated that parties to the Convention should consider making the opening and closing sessions of review meetings open to the media. Further, a Norwegian official we spoke with suggested that some nongovernmental organizations should be allowed to attend review meetings as observers. One way that some parties have attempted to increase public access to the Convention’s proceedings is by posting their national reports and answers to written questions received on their national reports to IAEA’s public Web site. While the number of parties to the Convention making their national reports available in this way has increased since the first review meeting was held in 1999, it has not increased significantly in several years and actually declined between the third review meeting in 2005 and the fourth review meeting in 2008. 
Specifically, 26 parties—about 43 percent of the 60 parties for which the Convention had come into force by the due date for submitting the national report—posted their national report prepared for the 2008 review meeting. This was down from the 30 parties—or about 55 percent of parties to the Convention—posting reports prepared for the 2005 review meeting. In fact, eight countries that posted their national reports prepared for the 2005 review meeting—Argentina, Belgium, Bulgaria, Ireland, Japan, Latvia, the Slovak Republic, and South Korea—did not do so for the report prepared for the 2008 review meeting. However, three parties posted their national reports for the first time in 2008—Estonia and India, which had recently become parties to the Convention, and Pakistan, which became a party in the 1990s. Figure 1 shows the number of countries that posted their national reports to the IAEA public Web site for the four review meetings held thus far. Officials from NRC and State told us that the United States has always made its national report available on the Internet. However, the U.S. approach has been to lead by example rather than taking an active role in encouraging other parties to the Convention to post their national reports to the Internet. IAEA officials told us it was important for parties to make as much information about their civilian nuclear power programs accessible as possible, but that it was for each party to determine how much information should be made public and how much should remain confidential. In addition to its public Web site, IAEA also maintains a secure, members-only Web site where parties are encouraged to post their national reports. According to NRC officials, parties have improved their participation in posting their reports to this Web site. Parties posted 17, 22, 57, and 61 national reports in 1999, 2002, 2005, and 2008, respectively. 
The overwhelming majority of parties have never posted their answers to written questions about their nuclear power programs to the IAEA public Web site. The written questions and answers provide a great deal of information about each country’s nuclear power program. According to an IAEA official, over 4,000 questions were prepared for the 2008 review meeting, and almost all were answered. As figure 2 shows, 3 countries posted these questions and answers to the IAEA public Web site for the first review meeting in 1999. While 11 countries, including the United States, posted questions and their answers to the IAEA’s public Web site for the second review meeting, 6 did so for the third review meeting, and 5 did so for the 2008 meeting. Only Slovenia and Switzerland—both nuclear power countries—have posted these questions and answers for all four meetings; the United Kingdom and Canada—the sixth and eighth largest nuclear power countries as measured by the number of operating reactors, respectively—have done so since 2002. The United States had not posted its answers to written questions received on its national report to IAEA’s public Web site since 2002, although NRC officials stated that they have always posted them to the NRC Web site. We also found that other nuclear power countries such as Finland, Germany, Japan, and Spain have not posted their answers to written questions to the IAEA’s public Web site since 2002, either. In 2008, Luxembourg became the first, and thus far only, nonnuclear party to post the answers to questions it received on its national report. Luxembourg’s responses focused primarily on how it would respond to a nuclear accident in a neighboring country. We met with NRC officials on March 15, 2010, to discuss an early draft of this report. At that time, we informed them that their answers to written questions on U.S. national reports were not available on IAEA’s public Web site. 
NRC officials acknowledged that these responses were not readily accessible and said they would take steps to post them. On March 17, 2010, NRC informed us of the availability of their responses, and we verified that they were now on IAEA’s public Web site. Some respondents to our survey reported lacking the resources to fully participate in the review meetings. Specifically, almost half of the survey respondents—ranging from parties with well-established civilian nuclear power programs to those with no nuclear power programs—reported that a lack of resources has limited their country’s ability to develop its national report. As we noted in our March 1999 testimony, NRC officials anticipated that a lack of staff resources or travel money could be a problem. We reported that NRC officials told us that, because of differences in parties’ nuclear safety programs and available resources, they anticipated unevenness in the quality and detail of some national reports. In addition, half of the parties responding to our survey reported that a lack of resources has limited their ability to attend review meetings, and more than three-quarters indicated that a lack of resources has inhibited their ability to send representatives to all of the country group meetings. According to NRC officials, this is important because the country groups meet simultaneously, and it is in these meetings where the national reports are presented and questions about them are addressed. Not being able to attend country group meetings reduces opportunities to learn from other parties’ nuclear safety experiences. In addition, NRC officials recently told us that since much of the peer review of national reports can occur in the 7 months before the review meeting, limited resources may reduce the ability of some parties to take full advantage of this opportunity. 
That is, according to NRC officials, some countries do not have the staff resources to devote to preparing for review meetings by reading national reports, formulating and submitting written questions, and reviewing the parties’ written responses to the written questions. The Convention does not include performance metrics to gauge its impact on improving safety. As a result, it provides no systematic way to measure where and how progress in improving safety in each country has been made. During the course of this review, we asked parties if the lack of performance metrics limited the usefulness of the Convention. Half the parties responding to our survey indicated that it did. Performance indicators and benchmarks that could be adapted to help countries enhance safety are already being used to track safety at civilian nuclear power plants. For example, the World Association of Nuclear Operators (WANO) publishes quantitative indicators of nuclear plant performance for 11 key areas, including industrial safety accidents and unplanned automatic shutdowns of nuclear power plants. Although the Convention itself lacks performance metrics, one-quarter of parties responding to our survey reported that they themselves measure progress toward Convention goals using performance metrics—specifically, in some cases, by comparing their activities with the results of IAEA safety review missions to countries that request them and actions taken in response to questions and comments from other parties at Convention review meetings. Neither State nor NRC has formally proposed the adoption of performance metrics. However, NRC officials told us that performance metrics could play a useful role in helping parties measure their progress toward meeting safety obligations and that they could be introduced through a modification to the rules and procedures governing the Convention. 
Specifically, Article 22 of the Convention provides for the preparation of guidelines by the parties regarding the form and structure of their national reports. The guidelines can be revised by consensus at review meetings. The guidelines provide only suggestions for drafting the reports; parties remain free to structure their reports as they see fit. However, the suggestions provided are very detailed and touch upon more than just form and structure. For example, the guidelines provide detailed suggestions on the content of the national reports. They also contain an appendix detailing voluntary practices that parties are encouraged to engage in regarding the public availability of their national reports. The Convention is designed to maximize the number of countries that will participate in order to achieve its goal of promoting the safe operation of civilian nuclear power reactors worldwide; however, it is voluntary in nature. By and large, this approach has worked. Since 2009, three countries that are considering developing civilian nuclear power programs—Libya, Jordan, and the United Arab Emirates—have become parties to the Convention. Two others—Kazakhstan and Saudi Arabia—approved the Convention in 2010 and are expected to become parties to it later this year. An overwhelming majority of the parties we surveyed and interviewed said that all countries should be encouraged to join as soon as possible after making the decision to consider developing a nuclear power program. At present, all countries with such programs—except Iran—are parties to the Convention. Several parties we interviewed told us that Iran, which is on the verge of commissioning civilian nuclear power reactors, should ratify the Convention in order to benefit from the safety expertise that participation provides. 
In their view, without Iran’s participation in the Convention, the international community has limited or no insight into, or access to, how Iran is developing, operating, and maintaining its burgeoning civilian nuclear power program. Russian officials with whom we spoke agreed that greater international access to Iran’s civilian nuclear power program is needed and that the Convention could play a role in providing that access. Russia is helping Iran build the civilian nuclear power reactor at Bushehr, which is expected to be commissioned in the near future. Russian Ministry of Foreign Affairs officials told us that Russia’s continued assistance to Iran’s civilian nuclear program may be conditioned on Iran’s becoming a party to the Convention. The Convention does not require that unsafe reactors be closed down. As noted in our 1999 testimony, the Convention neither provides sanctions for noncompliance with any of its safety obligations nor does it require the closing of any unsafe nuclear reactors. However, more than 13 years after the Convention came into force, Russia continues to operate 11 Chernobyl-style RBMK reactors. These reactors pose the highest risk, according to Western safety experts, because of their inherent design deficiencies, including their lack of a containment structure. The containment structure, generally a steel-lined concrete dome, serves as the ultimate barrier to the release of radioactive material in the event of a severe accident. Russian nuclear regulators told us that adequate safety upgrades have been made to all 11 RBMK reactors and that they will continue to operate for the foreseeable future. We also discussed the matter of shutdown of Soviet-designed reactors with EU officials, who told us that the Convention was never intended to be a mechanism for closing unsafe Soviet-designed reactors. 
The European Union has used a different strategy to accomplish the shutdown of the unsafe nuclear reactors in its member countries: making EU membership contingent upon the closure of these reactors. As a result, all eight RBMK and first-generation VVER 440 Model 230 reactors in Bulgaria, Lithuania, and Slovakia have been permanently shut down in order for these countries to obtain EU membership. According to NRC officials, as is the case in other international law on reactor safety, under the Convention each country is responsible for regulating the safety of its own reactors. In addition, NRC noted that the Convention relies on the peer review process, that it cannot obligate countries to comply with safety standards, and that it does not provide for sanctions such as the closing of any unsafe nuclear power plants. State expressed a similar view. State pointed out that the Convention was never meant to have the authority to require that unsafe reactors be shut down. According to State, it is the position of IAEA and its member states that each country operating nuclear power plants should have its own nuclear regulatory agency that would have the authority to shut down plants. The parties to the Convention generally agree that it would be difficult to amend the Convention. Consequently, several parties have taken the lead in making changes to the Convention’s rules and procedures. To date, some steps have been taken to improve the Convention’s peer review process, and parties are considering several additional proposals. Several parties have focused on improving the workings of the Convention’s peer review process. The most significant change they have made, in our view, is to allow the parties to more freely ask questions about each other’s national reports. 
NRC expressed concern in our January 1997 report about the rules governing how parties’ country group assignments affect the parties’ ability to discuss and seek clarification about other parties’ national reports at review meetings. According to NRC officials, in the past, parties assigned to a particular country group could ask questions about other parties’ nuclear programs that were assigned to that group during the question-and-answer session following the presentation of a national report. However, parties that were not assigned to that country group could not ask questions unless they submitted a written question several months in advance of the review meeting. This restrictive practice began to change during the 2005 review meeting, when at least one country group allowed parties that were not assigned to it to ask questions. At the next review meeting in 2008, according to NRC officials who attended both meetings, no restrictions were placed on any parties’ ability to ask questions about the national reports of any other parties. An NRC official told us that this change has made the process more open and accessible to all of the parties. Another notable change to the rules and procedures of the peer review process is the recent decision to move up the date for the organizational meeting and the selection of officers for the upcoming review meeting by almost a year and to advance by a few weeks the deadlines for submitting national reports and written questions for the peer review process. The purposes of the organizational meeting, among other things, are to elect the officers for the upcoming review meeting, adopt a provisional agenda for the meeting, assign parties to particular country groups, and identify which proposals for enhancing the peer review process should be considered at the upcoming meeting. Previously, organizational meetings were held about 7 months before the upcoming review meeting. 
However, the parties at the 2008 review meeting agreed to hold the organizational meeting for the 2011 review meeting in September 2009—19 months in advance. According to NRC officials, the purpose of the scheduling change was to put officers in place earlier to give them more time to plan for the next meeting and to promote greater continuity from one meeting to the next. Moving up the deadlines for submitting national reports and written questions for peer review is intended to give countries more time to both review the national reports of other parties and answer any written questions submitted. Additional proposals to improve the implementation of the Convention are currently under consideration by the parties. Specifically, these proposals include (1) allocating more country group meeting time to discuss, among other things, the national reports of countries with emerging nuclear programs; (2) expediting the process for calling a special meeting between review meetings to discuss urgent safety issues; and (3) changing the process for assigning parties to country groups. Some parties have suggested the peer review process might be more effective if more review meeting time were allocated to discussing the national reports of countries with emerging nuclear power programs or topics of general concern and less to presenting and discussing the national reports of parties with well-established nuclear programs. For example, according to NRC officials, the United Arab Emirates, which has only recently become a party to the Convention, is rapidly moving to establish its nuclear regulatory infrastructure and is soon to begin construction of several nuclear power reactors. Because its civilian nuclear power program is so new, the United Arab Emirates could benefit from more time to present its national report during the peer review process. 
NRC officials told us that the United States, in contrast, does not need as much time as it is allocated to present its national report. Similarly, according to a senior NRC official, the United States has proposed that more time at review meetings might also be allocated to discuss topics of general concern—such as the safety challenges of dealing with aging reactors or the challenges parties face in maintaining adequate staffing and competence levels in both the regulatory bodies and at nuclear power plants. Another proposal to be considered would create a more efficient process for calling a meeting to discuss topical or urgent nuclear safety issues that parties feel cannot wait until the next review meeting. Currently, a majority of parties must support the call for such a meeting. One way of streamlining this process, according to an NRC official, would be to empower the officers elected for the most recent or upcoming review meeting to call a special meeting. An urgent issue might be, for example, a nuclear power plant accident. If such an accident occurred, parties might wish to convene a special meeting to discuss the causes of the accident and what might be done to avoid a similar accident. Finally, the parties are considering amending the method for assigning countries to the six country groups to promote greater variation in the composition of the groups from meeting to meeting. Specifically, the experience of the first four review meetings has been that the country groups have remained relatively static—that is, there has been little variation in the membership of each group among the nuclear power countries. According to NRC officials, it would be useful if the composition of the groups were more varied from meeting to meeting. 
While each group would still be anchored by a country with a large number of operating civilian nuclear power reactors, the remainder of the group would consist of a more varied mix of countries. This type of mix would provide greater opportunities for more information sharing among a more diverse group of countries. An NRC official told us that many parties are generally in favor of some adjustment to the existing process but that there is not yet sufficient agreement on how to accomplish this change. IAEA has a long history of serving as a technical advisor to member states to promote the safe operation of nuclear power plants. Although this role predates the establishment of the Convention, and regulating nuclear safety is a national responsibility, the Convention complements the role the agency plays in these matters. IAEA promotes the Convention’s nuclear safety goals and objectives largely through its Technical Cooperation (TC) Program, safety standards, and peer review missions, which together help countries improve their nuclear regulatory bodies and the safety performance of their civilian nuclear power plants. Most survey respondents reported that they found IAEA effective in serving as a technical advisor. In addition, almost all parties responding to our survey consider IAEA to be effective in its role as secretariat to the Convention. IAEA provides assistance to its member states to promote peaceful uses of nuclear energy in several ways, including providing technical cooperation, establishing safety standards, and conducting advisory and peer review missions. The importance of its role in providing this type of assistance was corroborated by our survey results. A majority of survey respondents reported that IAEA was either very effective or somewhat effective in serving as a technical advisor to countries requesting assistance to improve civilian nuclear power safety. 
IAEA’s TC program supports, among other things, nuclear safety and the development of nuclear power. For the 2009-2011 activities under the TC program, nuclear safety remains one of the top three priorities for IAEA member states. IAEA currently conducts 551 TC projects in 115 countries and territories, and program activities are tailored to the needs of each region. Specific TC projects have included extending the operating life of nuclear power plants and establishing safety culture in nuclear facilities. TC projects that support member states considering or developing nuclear power also include strengthening nuclear regulatory authorities and preparing an emergency plan for a nuclear power plant. In 2007, IAEA disbursed approximately $5.6 million to support the safety of civilian nuclear installations worldwide through the TC program. In addition to its TC program budget, IAEA plans to spend approximately $15.1 million in 2010 on other efforts to promote nuclear safety, such as strengthening countries’ abilities to respond to nuclear incidents and emergencies and to assess the safety of the siting and design of nuclear installations. The role and importance of IAEA in promoting nuclear safety will likely grow if the cost of fossil fuels and the threat of climate change spur a nuclear renaissance, as an independent commission assessing the role of IAEA to 2020 and beyond reported recently. According to this independent commission, this growing role may involve (1) leading an international effort to establish a global nuclear safety network, (2) helping countries with emerging nuclear power programs put in place the infrastructure needed to develop nuclear energy safely, and (3) ensuring that critical safety knowledge is widely shared among IAEA member states. In addition, IAEA has established safety standards that provide a framework for fundamental safety principles, requirements, and guidance for member states. 
The standards, which reflect international consensus, cover a wide range of topics, including nuclear power plant design and operation, site evaluation, and emergency preparedness and response. Committees of senior experts from IAEA member states use an open and transparent process to develop the standards and any subsequent revisions. The guidelines governing the drafting of national reports state that IAEA safety standards can give valuable guidance on how to meet the Convention’s safety obligations. IAEA also promotes nuclear safety through advisory and voluntary peer review missions—the most prominent are Integrated Regulatory Review Service (IRRS) missions and Operational Safety Review Team (OSART) missions. These missions evaluate the operations of a member state’s nuclear regulatory system and civilian nuclear power plant operational safety, respectively. IRRS missions assess the safety practices of the requesting country through an examination of its regulatory framework and organization and compare the country’s practices with IAEA safety standards. Since 1992, IAEA has conducted 44 IRRS missions in 26 countries, with 15 of these missions taking place in countries that have operated—and in some cases continue to operate—Soviet-designed reactors. Table 1 shows the number of IRRS missions that member countries had hosted through 2009. The United States has sent approximately 20 experts on IRRS missions and has agreed to host an IRRS mission in October 2010. Some parties that responded to our survey reported that they found IRRS and OSART missions effective at improving civilian nuclear power safety. In addition, according to the summary report of the Convention’s fourth meeting in 2008, many parties reported that they had positive experiences with IRRS and OSART missions, and parties who had not already hosted one of these missions were encouraged to do so. 
In February and March 2010, IAEA conducted an IRRS mission to Iran, which included a site visit to the nearly completed Bushehr nuclear power plant. IAEA recommended, among other things, that Iran join the Convention. According to a senior Swedish official who was involved in drafting the Convention, these missions are increasingly being used to measure the safety standards of parties to the Convention. Parties face peer pressure to submit to these voluntary missions, as they provide a way for a country to show its commitment to enhancing safety. For example, ENSREG has promoted the use of IRRS missions by EU countries. Describing the missions as “well established and well respected,” ENSREG has encouraged all EU member states to participate in one to obtain advice on improvements and to learn from the best practices of others. IAEA also manages the OSART missions through which teams of experts drawn from IAEA member countries—including the United States, which has sent over 100 experts on missions—review operational safety at specific nuclear power plants. IAEA has conducted over 150 OSART missions in 32 countries since 1983, and has 9 more scheduled through the end of 2011. Table 2 shows the number of OSART missions that member countries had hosted through 2009. As table 2 shows, the 2 countries that have hosted the most OSART missions are France and Ukraine, 21 and 14, respectively. Combined, those 2 countries have 73 reactors. China and the Czech Republic have hosted the second most missions, 10 and 8, respectively. These countries have a combined total of 17 operating reactors. Japan, which has 54 reactors, has hosted 5 OSART missions. Russia, which has 32 operating reactors, has hosted 6, and the United States, which has 104 operating reactors, has also hosted 6 missions. The only countries with operating civilian nuclear power programs that have not hosted OSART missions are Armenia and India, which operate 1 and 18 reactors, respectively. 
While recommendations that result from safety review services such as IRRS and OSART missions are not mandates, IAEA officials told us that the agency nevertheless sees a high rate of implementation of those recommendations. IAEA also makes available on its public Web site a compilation of best practices learned from recent OSART missions, as well as the mission reports as authorized by the member states. This compilation serves to help member states improve the operational safety of their power plants and includes emergency plans and preparedness, training, and maintenance. Finally, IAEA also promotes civilian nuclear safety through other means. For example, IAEA offers additional review services to member states by focusing on issues such as siting, seismic safety, research reactor safety, fuel cycle facilities’ safety, power plant accident management, and safety culture assessments. IAEA also promotes education and training in nuclear safety through Web-based courses, electronic textbooks, and workshops. This training covers topics such as basic safety concepts, regulatory control of nuclear power plants, and instruction on IAEA safety standards. Much of this information is available to the public to download from IAEA’s Web site. One survey respondent from Eastern Europe commented that the training courses and workshops had contributed significantly to the promotion of high safety standards and best practices. Moreover, IAEA regularly holds conferences and symposia on issues related to nuclear safety, with some event summaries available online. Recent topics have included promoting safety education and training for countries with new or expanding nuclear programs, ensuring safety for sustainable nuclear development, and managing nuclear power plant life. Almost all parties responding to our survey and parties we interviewed reported that IAEA effectively carries out its role as secretariat as outlined in the Convention. 
In this capacity, IAEA hosts the review meetings in Vienna, Austria; prepares documents; and provides translation and interpretation services. There was widespread agreement among the respondents that the agency is effective in convening, preparing, and servicing the meetings and at transmitting information received or prepared in accordance with the provisions of the Convention. Some survey respondents and parties we interviewed called for more IAEA support during the Convention’s review meetings in such areas as more translation services for all country group sessions and more administrative assistance for parties to the Convention. The Convention permits IAEA to provide other services in support of the review meetings, if the parties reach consensus. Finally, some survey respondents reported that IAEA should play a more active role in the following areas: helping prepare national reports, providing other assistance to help prepare for the next review meeting, providing other technical support to improve safety, and helping address concerns about a country’s civilian nuclear power program. IAEA estimates its costs to support the last review meeting in 2008 at nearly $118,000 and expects to spend approximately $130,000 for the fifth review meeting scheduled for April 2011. The costs associated with the review meetings are modest for the U.S. government as well. NRC and State spent approximately $725,000 preparing for and participating in the 2008 review meeting and estimate they will spend $825,000 for the next review meeting. The Convention plays an important role in strengthening nuclear safety and enjoys broad support among the parties we surveyed and interviewed. Support for the Convention continues to grow as evidenced by the increasing number of countries that have joined it, particularly those with emerging nuclear programs, such as the United Arab Emirates. 
Many parties to the Convention told us that all countries that are considering embarking on a nuclear power program—or currently operating civilian nuclear power reactors—should be encouraged to join the Convention, including Iran. We are encouraged that the parties have taken steps to improve the Convention’s peer review process. However, the Convention does not require parties to include performance metrics in their national reports, which makes it difficult to gauge the Convention’s impact on improving nuclear safety. Without such metrics there is no systematic way to measure where and how progress has been made in improving safety in each country that operates civilian nuclear power reactors. In addition, more than half of the survey respondents reported that the lack of metrics hampers the Convention’s usefulness, and NRC has noted that it would be feasible to add performance metrics to the guidelines that implement the national report process called for by the Convention. International organizations already use such indicators to track nuclear safety improvements, and these indicators could perhaps be incorporated into the guidelines as voluntary practices that parties are encouraged to implement. Further, public awareness about parties’ progress toward meeting the terms of the Convention can play a key role in influencing compliance with the Convention’s nuclear safety obligations. However, to date the public has had limited access to parties’ national reports and written answers to questions about their nuclear power programs. More than half of the national reports prepared for the 2008 review meeting are not posted to IAEA’s public Web site, and even fewer parties make their answers to written questions received on their national reports available on IAEA’s public Web site. Putting this information on the Web site could increase public awareness of the nuclear safety issues facing countries and how they are addressing them. 
To further enhance the usefulness of the Convention in promoting the safety of civilian nuclear power programs worldwide, we recommend that the Secretary of State, in coordination with the Chairman of the Nuclear Regulatory Commission, work with other parties to the Convention to take the following three actions: Encourage parties to include performance metrics in national reports to better track safety in civilian nuclear power plants and help countries more systematically measure where and how they have made progress in improving safety. Expand efforts to increase the number of parties’ national reports made available to the public by posting them to IAEA’s public Web site. Promote greater public dissemination of parties’ written answers to questions about their nuclear power programs by posting this information to IAEA’s public Web site. We provided a draft of this report to NRC and State for comment. We also provided IAEA with a detailed summary of facts contained in the draft report. State and NRC provided written comments on the draft report, which are presented in appendixes IV and V, respectively. IAEA, State, and NRC also provided technical comments, which we incorporated as appropriate. NRC generally agreed with our report but did not specifically agree or disagree with the report’s recommendations, and State generally agreed with the recommendations to (1) encourage parties to the Convention to include performance metrics in their national reports to better track safety in civilian nuclear power plants, (2) increase the number of parties’ national reports made available to the public by posting them to IAEA’s public Web site, and (3) promote greater public dissemination of parties’ written answers to questions about their nuclear power programs by posting this information to IAEA’s public Web site. In its written comments, however, State provided some clarifications concerning the recommendations. 
First, State noted that it might be difficult to achieve metrics that would be meaningful across so many countries’ nuclear power programs and to agree on the specific metrics to be used. Second, State noted that initiatives to increase public access to information would run counter to strong concerns regarding confidentiality of information on civilian nuclear power plants held by many parties. In addition, State asserted that the report somewhat mischaracterizes the Convention by noting that the Convention does not require that unsafe reactors be shut down. State noted that the Convention was never meant to have that authority, which would be contrary to IAEA practice and policy. It is the position of IAEA and member states that each country operating nuclear power plants should have its own national regulatory agency that would have the authority to shut down plants. Regarding the first point, while it might be challenging to establish a common set of performance metrics, we believe there are already examples of standard metrics being used, such as those published by WANO. We believe that WANO’s metrics, for instance, could be used as a benchmark for parties to follow in measuring safety progress when developing their national reports. With regard to encouraging public dissemination of information about the Convention, we agree that the confidentiality of sensitive information discussed among the parties during the peer review process should be maintained. However, we also believe that increasing public awareness of the Convention’s proceedings—even on an incremental basis—through the posting of national reports to IAEA’s public Web site is a worthwhile goal and should be encouraged to the extent practicable. Finally, with respect to the issue of unsafe reactors, we have not mischaracterized the Convention. 
Rather, we pointed out in the report—as we have previously reported—that the Convention does not require the closing of any unsafe nuclear reactors. We also noted in this report that nuclear safety is a national responsibility and have not suggested or implied that the Convention is flawed because it does not require unsafe reactors to be closed. The fact remains, however, that Russia, which has ratified the Convention, continues to operate numerous nuclear power plants that pose a safety risk according to Western safety experts. However, based on State’s comments, we have clarified the text regarding this issue. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of State, the Chairman of the Nuclear Regulatory Commission, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Parties to the Convention on Nuclear Safety By the terms of the Convention, it will enter into force for Saudi Arabia 90 days after the date of deposit of the instrument of accession. Table 3 reflects the cumulative amount of nuclear reactor safety assistance funds provided by the Department of Energy (DOE) from the inception of these programs. Table 4 reflects the cumulative amount of nuclear reactor safety assistance funds provided by the Nuclear Regulatory Commission (NRC) from the inception of these programs. 
Table 5 reflects nuclear safety expenditures from the European Union’s Technical Assistance to the Commonwealth of Independent States program. The objectives of our review were to evaluate the extent to which the Convention on Nuclear Safety is achieving its primary goal: promoting the safe operation of civilian nuclear power reactors worldwide. Specifically, we assessed (1) parties’ views on the perceived benefits and limitations of the Convention; (2) efforts to improve the implementation of the Convention; and (3) how International Atomic Energy Agency (IAEA) programs complement the Convention’s safety goals and objectives. In addition, we are providing information in appendix II about funding provided by the United States and the EU to promote international nuclear safety since the early 1990s. To assess parties’ views of the perceived benefits and limitations of the Convention and efforts to improve implementation, we (1) interviewed representatives of 17 nuclear and nonnuclear parties to the Convention as well as officials from NRC and State responsible for representing the United States at the Convention; (2) analyzed various Convention-related documents from NRC, State, IAEA, and EU; and (3) conducted a Web-based survey of 64 parties to the Convention. To encourage honest and open responses to our survey, we pledged confidentiality to member countries and indicated that we would report only aggregate information or examples that would not identify a particular party. The survey included questions about the usefulness of the Convention, the effectiveness of Convention activities, and the role of IAEA in the Convention. To develop the survey questions, we analyzed the text of the Convention itself, as well as related rules and procedures. We also interviewed parties to the Convention and other experts to identify issues related to the Convention. 
Finally, we reviewed previous GAO reports to identify past issues and concerns related to the Convention and developed survey questions to gauge whether these issues were still relevant. The survey was pretested to ensure that (1) the questions were clear and unambiguous, especially to nonnative English-speaking respondents; (2) the terms we used were precise; (3) the survey did not place an undue burden on the officials completing it; and (4) the survey was independent and unbiased. In addition, the survey was reviewed by an independent, internal survey expert and by NRC. The survey was conducted using self-administered electronic questionnaires posted on the World Wide Web. We sent e-mail notifications to 64 parties to the Convention to alert them that we were conducting the survey and would be sending them log-in information in a separate e-mail. We also e-mailed each potential respondent a unique password and username to ensure that only members of the target population could participate in the survey. To encourage respondents to complete the survey, we sent an e-mail reminder to each nonrespondent about 2 weeks after our initial e-mail message. We also sent an additional e-mail reminder that extended the deadline to complete the survey. In addition to these e-mails, we also conducted extensive telephone and personalized e-mail follow-up to encourage those parties who contacted us with questions about the survey and to encourage the nonrespondents from the 17 parties whose representatives we interviewed to complete the survey. The survey data were collected from October 2009 through December 2009. Half (32) of the 64 parties to the Convention responded to the survey. 
To assess the potential for nonresponse bias in our survey results, we compared selected characteristics of nonresponding countries, such as (1) length of time as a party to the Convention, (2) nuclear power status and number of nuclear power plants, (3) region, (4) former Soviet bloc alignment, and (5) EU membership, to those of the responding parties. The distribution of these characteristics among responding and nonresponding parties was well-balanced. For example, 3 of the 32 respondents have been parties to the Convention for 2 years or less, 2 respondents for 3 to 9 years, and 27 respondents for 10 or more years. We also received responses from 13 nonnuclear and 19 nuclear countries, and from 17 EU-member and 15 nonmember countries. To eliminate data-processing errors, we independently verified the computer program that generated the survey results. This report does not contain all the results from the survey; the survey and a more complete tabulation of the results are provided in an electronic supplement to this report (this supplement can be viewed online at GAO-10-550SP). To assess how IAEA programs complement the Convention’s safety goals and objectives, we analyzed budget and other relevant documents from the Convention, such as meeting minutes and rules of procedure. We also interviewed IAEA officials; U.S. officials at the U.S. Missions in Vienna and Brussels; and the representatives of 17 parties to the Convention in Vienna, Brussels, Moscow, and Washington, D.C. To determine the amount of money the United States has spent promoting nuclear safety from the early 1990s through September 30, 2009, we obtained expenditure information from DOE and NRC. 
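The nonresponse-bias check described above amounts to comparing the distribution of each characteristic between responding and nonresponding parties. A minimal sketch of that comparison, using hypothetical parties rather than the actual survey data:

```python
from collections import Counter

def response_balance(parties, trait):
    """Compare the distribution of one characteristic between survey
    respondents and nonrespondents (an illustrative nonresponse-bias check)."""
    groups = {True: Counter(), False: Counter()}
    for p in parties:
        groups[p["responded"]][p[trait]] += 1
    shares = {}
    for responded, counts in groups.items():
        total = sum(counts.values())
        shares[responded] = {k: v / total for k, v in counts.items()}
    return shares

# Hypothetical parties, not GAO's actual survey population.
parties = [
    {"responded": True, "nuclear": True},
    {"responded": True, "nuclear": False},
    {"responded": False, "nuclear": True},
    {"responded": False, "nuclear": False},
]
shares = response_balance(parties, "nuclear")
# Similar proportions in both groups suggest the respondents are balanced
# on that characteristic, as the report found for its five characteristics.
```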
To assess the reliability of the information provided, we interviewed knowledgeable officials from each agency to understand (1) how they had developed the estimates and (2) what supporting documentation had been used to develop them; we determined the information provided was sufficiently reliable for our purposes. To determine the amount of money the EU has spent promoting nuclear safety from 1991 through 2006, and the amount it has budgeted to spend from 2007 to 2013, we obtained budget information from EU officials. However, the reliability of these EU estimates is undetermined because we did not receive responses to our data reliability questions. Given these limitations, we characterize these costs as estimates, and we use them only as background. Because the EU budget information was provided in euros, we converted the original values to dollars. In all instances, when converting euros to dollars, we used nominal and purchasing power parity average annual exchange rates from the Organization for Economic Cooperation and Development. When converting euro values for future projections into dollars, we used the latest available annual exchange rate. In addition, to determine the amount of money IAEA has budgeted for nuclear safety in 2010, we obtained information from the agency’s Programme and Budget for 2010-11. These IAEA budget figures—which we converted to dollars from euros—are also of undetermined reliability because we were unable to obtain sufficient detail about how IAEA developed the estimates or the data sources that supported them. To determine the cost to the United States to participate in the Convention, and IAEA’s costs to support the Convention for one 3-year cycle, we obtained expenditure information from NRC, State, and IAEA. 
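The conversion approach can be sketched as follows. The rate table here is hypothetical, standing in for the OECD average annual exchange-rate series the report actually used:

```python
# Hypothetical average annual exchange rates (dollars per euro); the
# actual OECD series used in the report is not reproduced here.
RATES = {2005: 1.24, 2006: 1.26}
LATEST_YEAR = max(RATES)

def euros_to_dollars(amount_eur, year):
    """Convert a euro amount using the average annual rate for that year.
    Years beyond the table (e.g., future budget projections) fall back to
    the latest available annual rate, as described above."""
    return amount_eur * RATES.get(year, RATES[LATEST_YEAR])
```

So a 2005 expenditure of 100 million euros would convert at the 2005 rate, while a 2010 projection would use the latest available (here, 2006) rate.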
To assess the reliability of this information, we also interviewed knowledgeable officials from each agency to understand (1) how they had developed the estimates and (2) what supporting documentation had been used to develop them. We determined the information provided by NRC was sufficiently reliable for our purposes. However, the reliability of the State and IAEA information is undetermined. The reliability of State estimates is unknown because staff typically combined work and travel related to the Convention with other work duties, so it is not possible to accurately determine the amount of money spent exclusively on Convention participation. IAEA estimates—which we converted to dollars from euros—are of undetermined reliability because IAEA does not formally track the costs of running the review meetings. We conducted this performance audit from February 2009 to April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Glen Levis, Assistant Director; Dr. Timothy Persons, Chief Scientist; Antoinette Capaccio; Frederick Childers; Nancy Crothers; Bridget Grimes; Kirsten Lauber; Rebecca Shea; and Kevin Tarmann made key contributions to this report.
Currently, 437 civilian nuclear power reactors are operating in 29 countries, and 56 more are under construction. After the Chernobyl accident, representatives of over 50 nations, including the United States, participated in the development of the Convention on Nuclear Safety, a treaty that seeks to promote the safety of civilian nuclear power reactors. The Convention has been in force since 1996. GAO was asked to assess (1) parties' views on the benefits and limitations of the Convention, (2) efforts to improve implementation of the Convention, and (3) how International Atomic Energy Agency (IAEA) programs complement the Convention's safety goals. GAO surveyed the 64 parties to the Convention for which it was in force at the time of GAO's review and analyzed the responses of the 32 that completed it, analyzed relevant documents, and interviewed U.S. and foreign officials. The Convention on Nuclear Safety plays a useful role in strengthening the safety of civilian nuclear power reactors worldwide, according to most parties to the Convention that responded to GAO's survey and representatives of parties GAO interviewed. In particular, parties indicated that the Convention's most useful contributions include its obligations to (1) establish effective legislative and regulatory frameworks and strong, independent nuclear regulatory bodies and (2) prepare a national report every 3 years that describes the measures the country has taken to achieve the Convention's nuclear safety goals. The countries present their national reports at review meetings, address questions that may arise about the reports, and assess and ask questions about the reports of other parties. This is known as the peer review process. 
Some concerns were raised about limited public access to Convention proceedings, some countries' lack of resources to fully participate in the review meetings, and the absence of performance metrics in the national reports to gauge progress toward meeting safety goals and objectives. Half of the parties responding to GAO's survey stated that the lack of performance metrics limited the usefulness of the Convention. Neither the Department of State nor the Nuclear Regulatory Commission (NRC) has formally proposed the adoption of performance metrics. However, NRC officials told GAO that performance metrics could be useful. In addition, the number of parties posting their national reports to IAEA's public Web site has declined since 2005. NRC and Department of State officials told GAO that the United States has always made its national report available on the Internet. However, the U.S. approach has been to lead by example rather than to take an active role in encouraging other parties to post their reports. Further, universal participation would advance achievement of the Convention's goals. Several representatives from countries that are parties to the Convention told GAO that Iran should ratify the Convention. In their view, without Iran's participation, the international community has limited or no insight into, or access to, Iran's civilian nuclear power program. Russia, which is helping Iran build the nuclear reactor at Bushehr, may condition continued assistance on Iran becoming a party to the Convention, according to Russian officials. The parties have taken some actions to improve the Convention's implementation, and more proposals are being considered. Steps have been taken to make the process for asking questions during peer review meetings more open and to increase the amount of time available for preparing for the review meetings. 
IAEA nuclear safety programs, which predate the Convention, complement the Convention's safety goals through the Technical Cooperation program, safety standards, and peer review missions. The Technical Cooperation program supports, among other things, the development of nuclear power. IAEA has established nuclear safety standards and also promotes nuclear safety through peer review missions that evaluate the operations of a member state's nuclear regulatory system and nuclear power plant operational safety.
The U.S. private pension system is voluntary; employers decide whether to establish a retirement plan and determine the design, terms, and features of the plan or plans they choose to sponsor. The federal government encourages employers to sponsor and maintain private pension plans for their employees and provides tax incentives offered under the Internal Revenue Code to those who do. Although there is a wide range of specific plan designs that are permissible under current law, private sector pension plans are classified either as defined benefit or defined contribution plans. Defined benefit plans promise to provide, generally, a level of monthly retirement income that is based on salary, years of service, and age at retirement. The benefits from defined contribution plans are based on the contributions to and investment returns on individual accounts. Most private sector pension plans are defined contribution plans, and this has been true for a number of years. Since the late 1980s, the number of defined benefit plans has decreased, and most new pension plans have been defined contribution plans. Many employers, particularly those with more than 1,000 employees, sponsor both defined benefit and defined contribution plans. More workers are covered by defined contribution plans than defined benefit plans, and the assets held by defined contribution plans now exceed those held by defined benefit plans. According to DOL, employers sponsored over 673,000 defined contribution plans as of 1998 compared with about 56,400 defined benefit plans. Defined contribution plans had about 58 million participants while defined benefit plans had about 42 million participants. Defined contribution plans are central to the debate about employee stock ownership through employer-sponsored plans. Defined contribution plans include thrift savings plans, profit-sharing plans, and ESOPs. 
The most dominant and fastest growing defined contribution plans are 401(k) type plans, which are plans that allow employees to choose to contribute a portion of their pre-tax compensation to the plan under section 401(k) of the Internal Revenue Code. Most 401(k) plans are participant-directed, meaning that participants make investment decisions about their own retirement plan contributions within a set of investment choices selected by the plan sponsor. Employees are usually able to choose from a menu of diversified fund options when investing their own contributions. Over the last 20 years, employers have gradually expanded the investment choices of participants such that most plans are offering over 10 investment choices for participants, including investing in employer stock. Employees generally have less flexibility over the investments of the employer contributions to these plans, which frequently take the form of company stock. Many employers combine defined contribution plans, pairing a 401(k) feature with ESOPs or with profit-sharing/thrift savings plans. High concentrations of employer securities are likely to be found when ESOPs and 401(k) type plans are linked or when 401(k) plans and profit-sharing plans are linked. This is especially true when plans are combined with ESOPs, which by definition seek to provide for employee ownership. Moreover, under current law, ESOPs may require participants not to divest their employer stock holdings until they reach age 55 or complete 10 years of service, essentially restricting participants’ rights to diversify employer stock holdings. ERISA has a rule that places a 10 percent limitation on acquiring and holding employer securities and employer real property for defined benefit plans. 
The 10 percent limitation states that a plan may not acquire any qualified employer securities or real property if, immediately after the acquisition, the aggregate fair market value of such assets exceeds 10 percent of the fair market value of the plan’s total assets. Employer securities and real property that appreciate in value after acquisition to 10 percent or more of total plan assets do not have to be sold. Defined contribution plans other than 401(k) type plans that are not ESOPs are generally exempt from the 10 percent limitation. Under ERISA, the Internal Revenue Service (IRS) and DOL’s Pension and Welfare Benefits Administration (PWBA) are primarily responsible for enforcing laws related to private pension plans. PWBA enforces ERISA’s reporting and disclosure provisions and fiduciary standards, which concern how plans should operate in the best interest of participants. The IRS enforces requirements concerning how employees become eligible to participate in benefit plans and earn rights to benefits. The IRS also enforces funding requirements designed to ensure that plans subject to such requirements have sufficient assets to pay promised benefits. In addition to the types of plans employers provide, some employer-sponsored plans have complex designs, such as floor-offset arrangements. Such arrangements consist of separate, but associated defined benefit and defined contribution plans. The benefits accrued under one plan offset the benefit payable from the other. In 1987, Congress limited the use of such arrangements that are significantly invested in employer securities. However, plans in existence when the provision was enacted were grandfathered. Because plan participants are investing in employer securities, securities law investor protection and disclosure requirements are also important. Congress enacted the Securities Act of 1933 and the Securities Exchange Act of 1934 in response to fraud in the securities markets and because of a perceived lack of public information in the stock markets. 
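The 10 percent acquisition test described above works out to a simple comparison at the time of purchase. A simplified sketch (it ignores the statutory exemptions noted above, and, consistent with the rule, post-acquisition appreciation past 10 percent does not force a sale):

```python
def may_acquire(total_assets, employer_holdings, purchase):
    """ERISA's 10 percent acquisition test, simplified: a defined benefit
    plan may not acquire employer securities or real property if,
    immediately after the acquisition, such holdings would exceed 10
    percent of the fair market value of total plan assets.
    All values are fair market values in dollars."""
    total_after = total_assets + purchase
    employer_after = employer_holdings + purchase
    return employer_after <= 0.10 * total_after

# A plan with $100M in assets and $5M in employer securities could buy
# $2M more ($7M against a $10.2M ceiling), but a plan already holding
# $9M could not buy $5M more ($14M against a $10.5M ceiling).
```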
The 1940 Investment Company Act, combined with the 1933 act, is the basis for SEC regulation of investment companies. Companies that meet the definition of an investment company must register under the Investment Company Act of 1940 and offer their shares under the Securities Act of 1933. These laws seek to ensure vigorous market competition by mandating full and fair disclosure and prohibiting fraud. Under these acts, a primary mission of the SEC is to protect investors and maintain the integrity of the securities market through extensive disclosure, enforcement, and education, but the securities laws also presume individual responsibility for investment decisions. About 550 of the Fortune 1,000 firms in 1998 held employer securities in their defined contribution or defined benefit plans. Such holdings totaled over $213 billion and represented 21 percent of the known assets. However, when all assets are included, including those that cannot be specifically identified as employer securities, employer securities represented 12 percent of total assets. DOL’s analysis showed that for defined contribution plans in 1998, employer securities represented about 16 percent of total plan assets and less than 1 percent for defined benefit plans. Our analysis found that the employer securities holdings were concentrated in different industries, with the bulk of the holdings held by manufacturers, which included technology and computer companies. For plans that reported holding employer securities, most of the employer securities were concentrated in ESOPs, including ESOPs combined with other defined contribution plans. A significant portion of employer securities were also held in the companies’ 401(k) type plans. The largest dollar amounts of employer securities holdings were in companies whose retirement plans combined their 401(k) type plan with ESOPs. 
Because some companies reported holding their plan assets in master trust agreements, the amount of employer securities holdings in these firms’ employer plans is likely to be higher than we can determine based on 1998 Form 5500 data. About $213 billion of the plan assets held in the employer-sponsored plans of the Fortune 1,000 was held in employer securities. Almost all of the $213 billion of assets in employer securities were held in the Fortune 1,000’s defined contribution plans. As shown in figure 1, less than 1 percent of defined benefit plan holdings were in employer securities and 24 percent of defined contribution holdings were in employer securities. The Fortune 1,000 sponsored roughly 3,500 defined contribution or defined benefit plans. Fifty-six percent, or about 2,000 of those plans, were defined contribution plans and 44 percent, or more than 1,500 plans, were defined benefit plans. More than 37 million employees were covered by these plans, which was nearly 40 percent of the total participants in all company plans in 1998. Twenty million employees participated in one or more defined contribution plans sponsored by the Fortune 1,000, and over 17 million employees were covered by defined benefit plans. Manufacturers had the highest amount of plan assets in employer securities of the Fortune 1,000. These companies included computer chip companies and technology firms, as well as traditional manufacturing companies, such as tool production and hardware firms. The sector held about $976 billion in plan assets in 1998. As shown in table 1, manufacturing companies held about 45 percent of the employer securities holdings of the Fortune 1,000 and covered about 41 percent of plan participants of the Fortune 1,000. Although manufacturers held the highest amount of employer securities of the 12 sectors, such holdings represented less than 10 percent of the sector’s total assets. 
More than 90 percent of the manufacturing sector’s assets were held in assets other than employer securities, which provided for some diversification for the industry. The retail sector, which includes car, food, and clothing sales companies, had the highest concentration of industry assets in employer securities, with about 32 percent of the industry’s plan assets in employer securities. Companies in the industries of mining, construction, and agriculture had the lowest amounts of employer securities and also covered the fewest plan participants. Not surprisingly, ESOPs had the highest percentages of plan assets in employer securities of plans that reported holding such assets. ESOPs, including ESOPs combined with other defined contribution plans, held over three-fifths of their known assets in employer securities, while 401(k) type plans held a little over a quarter of their known assets in employer securities. Given the requirements that plans must meet to be designated as an ESOP, it is not surprising that ESOPs and ESOPs with other plan features hold the highest percentages of employer securities holdings. For example, ESOPs must be primarily invested in qualifying employer securities in order for the plan to receive the legal designation of an ESOP. In addition, in order to ensure that a company’s employees continue to hold that minimum threshold of company stock, many ESOPs restrict employees’ ability to sell their company stock. About 220 firms in the Fortune 1,000 sponsored plans that were ESOPs or ESOPs combined with other defined contribution plans. Those plans held a total of $143 billion in employer securities. Fifty-eight percent of ESOP total plan assets were in employer securities. However, certain types of ESOPs reported higher concentrations than others. For example, stand-alone ESOPs—ESOPs that are not combined with other defined contribution plans—had over 98 percent of plan assets in employer securities. 
Eighty-four companies sponsored such ESOPs, covering a little over 1 million participants. About 475 companies had defined contribution plans with a 401(k) type feature, and such plans held the highest total dollar amount of employer securities, $172 billion. Given that twice as many companies sponsored a 401(k) type plan as those offering an ESOP, the high dollar amounts in the 401(k) plans are not unusual. 401(k) type plans also held significant percentages of plan assets in employer securities, although not as high as ESOPs. For example, about 324 companies reported sponsoring 401(k) plans that were combined with profit-sharing/thrift savings plans, which was by far the most prevalent type of 401(k) plan offered by the Fortune 1,000 and covered more than half of the participants in 401(k) plans. Twenty-six percent of those plans’ assets were held in employer securities, totaling about $44 billion. The type of defined contribution plan that had the greatest amount of employer securities was plans that combined a 401(k) type plan with an ESOP. About 96 companies sponsored such plans, covering about 2.5 million employees. These plans held about $93 billion of employer securities and about 44 percent of all employer securities, which was the highest amount of employer securities holdings in any of the plan types sponsored by the Fortune 1,000. Recent industry data suggest that companies are increasingly sponsoring plans that combine features of defined contribution plans. For example, plans that combine ESOPs with a 401(k) type plan are becoming more prevalent among large, publicly traded companies. Because these plans hold the most employer securities, many more workers are likely to have a significant amount of their retirement savings invested in the securities of their employers. Retirement savings, therefore, may increasingly become more dependent on employer stock ownership. 
Defined benefit plans have smaller percentages of employer securities than ESOPs or 401(k) type plans. Seventy-five companies of the Fortune 1,000 sponsored defined benefit plans holding employer securities. Such plans covered 2.3 million participants and held about $120 billion of plan assets. Employer securities accounted for about 5 percent, or over $5 billion, of the known assets of these defined benefit plans. Finally, little information is reported on complex plan designs such as floor-offset arrangements. The 1998 Form 5500 did not require employers to identify plans with floor-offset arrangements. Furthermore, agency and industry officials said there is little information on the number of employer-sponsored plans that have such features. Because we cannot isolate employer securities held in “master trust agreements,” our figures on employer securities holdings are likely to be understated. A master trust agreement is a trust in which assets of more than one plan sponsored by a single employer or by a group of employers are held under common control. As shown in figure 2, master trust assets held the highest percentage of pension plan assets. The amount of employer securities that plans held within master trust agreements cannot be determined from the 1998 Form 5500. For reporting purposes, assets of a master trust are considered to be held in one or more investment accounts that may consist of a pool of assets or a single asset. In addition, only the account total of the master trust account is required to be reported on the Form 5500. For example, 29 percent of the ESOPs sponsored by the Fortune 1,000 reported not holding employer securities. However, because ESOPs are required by law to hold employer securities, if such holdings are not reported under the ESOP account, they are likely to be in the master trust agreement accounts. 
Consequently, our reported dollar amounts of employer securities are likely to understate the amount of plan assets held in employer securities. However, DOL officials said that few Fortune 1,000 companies are likely to hold a significant percentage of employer securities in master trust agreements. Recognizing the difficulty of identifying plan assets held in master trusts, DOL revised the Form 5500 for the 1999 plan filing year. Beginning with the 1999 filing year, master trusts will file a Form 5500 report along with schedules itemizing the types of assets they hold. According to DOL officials, this will help ensure adequate reporting on the plan assets held in master trust investment accounts. In addition to employer securities holdings in master trust agreements, we also found basic filing errors in the data. While the examples we found may understate or overstate our concentrations, we were not able to determine the extent to which such filing errors occurred. For example, we found filing errors such as the misreporting of employer securities as corporate debt instruments or stock (other than employer’s own common stock). In one case, we identified an ESOP that was reported to hold no securities. A DOL official reviewed this plan and, by examining an accountant’s report that accompanied the Form 5500, discovered that the plan actually held employer securities and had made a mistake in filling out the Form 5500—a mistake that, according to DOL officials, occurs frequently. Furthermore, data reported on the Form 5500 combines all employer securities into a single line item. Employer securities held by pension plans may include employer stock, a marketable obligation such as a bond or note, or an interest in a publicly traded partnership. Thus, the line item for employer securities does not accurately reflect the amount of pension plan assets solely in employer stock. 
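Given these reporting limitations, any concentration figure computed from the 1998 Form 5500 is effectively a lower bound. A sketch of the basic calculation, using the aggregate figures in this report (the known-assets total is implied by the 21 percent figure reported earlier, not stated directly):

```python
def concentration(employer_securities, total_assets):
    """Share of plan assets held in employer securities, computed from
    Form 5500 line items. Because 1998 filings fold master trust holdings
    into a single account total, and because the employer-securities line
    combines stock, bonds, and partnership interests, this can understate
    the true share held in employer stock specifically."""
    return employer_securities / total_assets

# Roughly $213 billion in employer securities against approximately
# $1,014 billion in known Fortune 1,000 plan assets.
share = concentration(213, 1014)
```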
Investment in employer securities through employer-sponsored retirement plans can present significant risks for employees. If the employees’ retirement savings is largely in employer securities or other employer assets, employees risk losing not only their jobs should the company go out of business, but also a significant portion of their savings. However, despite the risks, high concentrations of employer stock in a company’s plan do not necessarily mean that employees will incur significant losses. Much depends on the decisions made by the company’s leadership and other factors such as market forces, which determine whether the company stays in business. Some companies help employees mitigate their risks by balancing plans where risks of loss are borne by employees with plans where employers bear such risk. In addition, some companies help employees limit their exposure to the risk of loss by allowing employees, if they so choose, to diversify their holdings. Concentrating their retirement savings in employer securities means that employees are not only concentrating their assets in a single security, but are investing in a security that is highly correlated to their work effort and earnings. Unlike investors, who have ownership in a company but do not work for the company, employees with high concentrations of holdings of employer securities in their retirement plans are subjecting two sources of income, their retirement income and their employment income, to similar risks. Such holdings directly expose the employee to the losses of the company they work for much more so than if they worked in another company. In addition, holding significant proportions of employer securities is directly at odds with modern financial theory, which says that diversifying a portfolio offers the benefits of reducing risks at very limited cost. Companies prefer to provide company contributions in employer stock for a number of reasons. 
Contributions in employer stock put more company shares in the hands of employees, who some officials believe are less likely to sell their shares if the company’s profits are less than expected or in the event of a threatened takeover. Companies also point out that contributing employer stock promotes a sense of employee ownership, linking the interest of employees with the company and other shareholders. In addition, employer stock contributions provide several tax benefits for companies. When employees choose to allocate a large portion of their total assets to their employer’s securities, they are assuming significant risk in order to achieve a particular expected rate of return. Studies have shown that employees feel a great deal of loyalty to their company. Because they work at the company and interact with the company’s managers, they believe they know the company and feel more comfortable investing in it. In addition, some employees enjoy being an owner-employee and some believe their employer’s stock will outperform the overall market over some particular time horizon. As a result, some employees consider investments in employer stock through their employer-sponsored plans a safe investment. However, employees who have significant portions of their retirement savings invested in employer stock may be exposing themselves to greater financial risks than necessary. Generally, financial theory indicates that, through diversification, an investor can achieve a similar expected rate of return with less risk than a portfolio concentrated in employer securities. The financial collapse of Enron and other companies, such as Color Tile and Southland, has highlighted how vulnerable participants are when they tie their retirement savings to their place of employment. For example, Enron employees lost their jobs and a significant amount of their retirement savings as the company became insolvent. 
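The diversification point can be illustrated with the standard two-asset portfolio variance formula; the volatilities and correlation below are hypothetical, not figures from the report:

```python
import math

def portfolio_stdev(weights, stdevs, corr):
    """Standard deviation of a two-asset portfolio:
    sqrt(w1^2 s1^2 + w2^2 s2^2 + 2 w1 w2 s1 s2 rho).
    Illustrates why splitting holdings across imperfectly correlated
    assets lowers risk for the same expected return."""
    w1, w2 = weights
    s1, s2 = stdevs
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * corr
    return math.sqrt(var)

# Hypothetical numbers: all savings in a single stock with 40 percent
# volatility, versus a half-and-half mix of two such stocks whose
# returns have a correlation of 0.3.
concentrated = portfolio_stdev((1.0, 0.0), (0.40, 0.40), 0.3)
mixed = portfolio_stdev((0.5, 0.5), (0.40, 0.40), 0.3)
```

With equal expected returns on both stocks, the mixed portfolio has the same expected return as the concentrated one but materially lower volatility, which is the sense in which diversification reduces risk "at very limited cost."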
The decline in Enron’s stock price and its subsequent failure substantially reduced the value of many of its employees’ retirement accounts. Enron’s stock price went from a high of $80 per share in January 2001 to less than $1 per share in January 2002. About 62 percent of the assets held in the company’s 401(k) consisted of shares of Enron stock. These concentrations were the result both of employee investment choice and employer matching contributions with employer stock. In all, about 20,000 employees lost money because their 401(k) accounts were heavily invested in Enron stock. Color Tile employees lost their jobs and their retirement savings when Color Tile filed for bankruptcy in January 1996. More than 83 percent of its $34 million in 401(k) plan assets were invested in Color Tile real property. During the bankruptcy, participant withdrawals or asset transfers in the 401(k) plan were prohibited until the property was appraised and sold. Southland Corporation employees incurred losses in their retirement savings. Southland’s pension plans included a 401(k) and profit-sharing plans. Fifty-eight percent of the assets in Southland’s 401(k) plan was used to buy 1,100 7-Eleven stores, which were then leased back to the company. When Southland filed for Chapter 11 protection in October 1990, the 401(k) plan reduced its holdings in 7-Eleven stores to 46 percent of the assets in Southland’s 401(k) plan. Unlike Enron and Color Tile, the Southland Corporation emerged from bankruptcy fairly quickly, with relatively small job loss to its employees. See appendix II for additional details on each company. Even without bankruptcy, employees are still subject to the dual risk of loss of job and retirement savings because corporate losses and stock price declines can result in companies significantly reducing their operations. 
For example, between December 31, 1999, and July 2001, Lucent Technologies’ stock price fell from $82 to $6 per share, and employees’ account balances fell because about 30 percent of the company’s 401(k) plan assets were in employer securities. For nonmanagement employees, about one-third of Lucent’s workforce, the employer 401(k) match was in the form of an ESOP contribution made in employer stock. Employer contributions to Lucent’s management 401(k) plan were made in the form of employer stock. In addition, more than 29,000 workers were laid off as a result of the company’s financial troubles, although the company remains in business. Companies can experience financial difficulties for various reasons. Although recent company failures have been attributed to company mismanagement, companies can also experience difficulties because of such problems as business cycles, market downturns, and declines in a sector of the economy. Depending on the circumstances of the company, the employer’s stock price can experience a precipitous drop or it can decline gradually. In either case, substantial holdings of employer securities in employer-sponsored plans will be affected by the company’s financial problems. Not every company whose employees hold high concentrations of employer securities will experience substantial losses in plan assets. Much depends on the corporate decisions made by the company, which determine whether the company stays in business and the extent to which the company is forced, if necessary, to reduce operations. In addition, much depends on the extent to which the employer’s stock is affected by general market cycles or market volatility. Proponents of employer stock investments through employer plans point to numerous companies that have high concentrations of employer securities in their employer-sponsored plans and whose participants have not suffered as a result of such holdings.
They state that high concentrations of employer securities are typically found in large companies and that such companies have demonstrated long-term financial success. They also state that company performance improves when employees understand the relationship between their behavior and the accompanying rewards that accrue to them when they own employer stock. Two companies whose plans we reviewed had high concentrations of employer stock holdings, and their employees had not suffered substantial losses in their retirement savings because of company failure or downsizing. Each company offered defined contribution plans in the form of profit-sharing, ESOP, and 401(k) plans. The 401(k) plans at both companies allowed participants to contribute a portion of their salaries on a pre-tax basis, and the companies offered a variety of investment fund choices to give plan participants flexibility in investing their 401(k) accounts. Overall, more than 57 percent of account balances at one company and up to 92 percent of the employees’ account balances at the other company were invested in employer stock. At one of the companies, 83 percent of the employees’ contributions to the 401(k) plan were invested in employer stock, and roughly 92 percent of the company’s contribution to employee accounts was invested in employer stock. Although each company’s stock price has experienced declines in the recent overall downturn in the stock market, such declines have not caused their employees to lose significant portions of their retirement savings. Company officials said that their company would continue to give its employees every opportunity to invest in employer stock. In addition, company officials said that despite the recent downturn in the market, plan participants have not significantly diversified out of the employer stock.
Some companies help employees mitigate their exposure to risk by balancing the types of plans where risks of loss are borne by employees with plans where employers bear such risk. When companies provide defined benefit plans, employees are likely to receive some level of retirement income even if they have incurred losses in their defined contribution plans. With a defined benefit plan, the employer, as plan sponsor, is responsible for funding the promised benefits, investing and managing the plan assets, and bearing the investment risk. If a defined benefit plan terminates with insufficient assets to pay promised benefits, the Pension Benefit Guaranty Corporation (PBGC) provides plan termination insurance to pay participants’ pension benefits up to a certain limit. For example, according to PBGC, Enron sponsored at least five defined benefit plans insured by PBGC. The largest of these plans covered about 20,000 participants. If one or more of Enron’s defined benefit plans is unable to pay promised benefits and is taken over by PBGC, vested participants and retirees will receive their promised benefits up to the limit guaranteed under ERISA. In addition, some companies help employees mitigate their exposure to the risk of loss by allowing employees, if they so choose, to diversify their holdings. Two companies whose plans we reviewed had few restrictions on their employees’ ability to diversify their holdings of employer securities. For example, one company allowed vested participants at any age to diversify out of employer stock in the company-contributed portion of their account. The other company allowed 100 percent diversification of employee 401(k) contributions, the company match, and the profit-sharing contributions at all times. Several other companies have publicly announced easing restrictions on when employees can diversify employer contributions in their accounts.
For example, one company announced in February 2002 that 401(k) plan participants could sell any of their individual account assets, including their employer match in employer securities, without restriction. Other companies have also lifted their restriction that required employees to hold their employer securities from company contributions until age 50. ERISA and the Securities Act of 1933 require DOL and SEC to ensure that appropriate disclosures are made to plan participants and investors regarding their investments. Under ERISA, companies with participant-directed individual account plans are to provide plan participants with certain information and disclosures beyond the general ERISA reporting requirements. The Securities Act of 1933 requires companies with defined contribution plans that offer employer stock to employees to register and disclose to SEC specific information about those plans. Under the current disclosure requirements of DOL, there is no requirement that companies disclose to plan participants the risks involved in investing in employer stock or the benefits of diversification. Industry representatives we spoke with said that companies provide employees with investment education and plan information and in some cases go beyond the minimum requirements. However, because there is no requirement to educate employees about the investment risks or the benefits of diversification, investment education can vary by company. Few employers make more specific individualized or tailored investment advice available to their plan participants, in part because of concerns about fiduciary liability. DOL has recently issued guidance about investment advice, which should help clarify when companies can use independent investment advisors to provide advice to participants in retirement plans. ERISA requires DOL to ensure that appropriate disclosures are made to plan participants regarding their ERISA-covered pension plans.
While companies automatically make certain information available to plan participants, there is other information that participants must request in writing. Certain plans, which are designed to meet specific ERISA provisions, must provide plan participants with disclosures beyond what is generally required by ERISA. Compliance with this regulation is optional, but provides employers with a defense to fiduciary liability claims related to investment choices made by employees in their participant-directed accounts. ERISA requires companies to automatically disclose to plan participants certain information pertaining to their pension plans. These disclosures are the summary plan description (SPD), summary of material modifications (SMM), and the summary annual report (SAR). The SPD tells participants what the plan provides and how it operates. Specifically, the SPD provides information on when an employee can begin to participate in the plan, how service and benefits are calculated, when benefits become vested, when and in what form benefits are paid, and how to file a claim for benefits. ERISA states that the SPD must be written in a manner “calculated to be understood by the average plan participant” and must be “sufficiently comprehensive to apprise the plan’s participants and beneficiaries of their rights and obligations under the plan.” In other words, the disclosed information should be understandable and all-inclusive so participants can have useful information that will aid them in effectively understanding their pension plans. New employees must receive a copy of the most recent SPD within 90 days after becoming covered by the plan. In addition to the summary plan description, plan participants are entitled to receive a summary of material modifications. The summary of material modifications discloses any material changes or modifications in the information required to be disclosed in the SPD.
Plan administrators must furnish participants with an SMM within 210 days after the close of the plan year in which the modification was made. Participants must also receive a summary annual report from their plan’s administrator each year. The summary annual report summarizes the plan’s financial status based on information that the plan administrator provides to DOL on its annual Form 5500. Generally, the SAR must be provided to participants no later than 9 months after the close of the plan year. Plan participants may also request additional information about their plans. If plan participants wish to learn more about their plan’s assets, they have the right to ask their plan administrator for a copy of the plan’s full annual report. In addition, a participant can request a copy of his or her individual benefit statement, which describes a participant’s total accrued and vested benefits. Plan participants can also request the documents and instructions under which the plan is established or operated. This includes the plan document, the collective bargaining agreement (if applicable), trust agreement, and other documents related to the plans. Under the 404(c) regulation, participants receive certain disclosures pertaining to the plan and its investment options. The regulation is a benefit to plan participants because it allows them to receive additional disclosure beyond what is generally required under ERISA. The purpose of these informational requirements is to “ensure that participants and beneficiaries have sufficient information to make informed investment decisions.” The regulation also benefits employers who comply with its requirements, because it exempts them from fiduciary liability related to the investment choices made by their employees in their participant-directed accounts.
The regulation specifically requires that the plan administrator automatically provide the plan participant with (1) an explanation that the plan is a 404(c) plan and that the fiduciary will be relieved of liability; (2) a description of investment alternatives; (3) the identification of any designated investment managers; (4) an explanation of circumstances under which the participant may give investment instructions or limitations; (5) a description of transaction fees and expenses; and (6) the name, address, and telephone number of the fiduciary to contact for further information regarding these disclosures. In addition, for a plan with employer stock, plan administrators are to provide all voting information and the procedures for ensuring the confidentiality of participant investment transactions, as well as a prospectus immediately before or after the initial investment. Plan participants can also request certain plan information. This includes (1) a description of the annual operating expenses of the plan’s investment alternatives, including any investment management fees; (2) copies of any prospectuses, financial statements and reports, and other information furnished to the plan relating to investment alternatives; (3) the list of assets comprising the portfolio of each investment option that holds plan assets; (4) information about the value of shares or units in investment alternatives available along with information concerning past and current investment performance of each alternative; and (5) information pertaining to the value of shares or units in investment alternatives held in the participant’s account. Employers choose whether to provide disclosures under the regulation. Those who comply with the regulation are afforded certain protections from their fiduciary liability. First, compliance exempts plan fiduciaries from responsibility for investment decisions of employees when employees exercise control over their investments. 
However, the regulation establishes conditions employers must meet in order to be exempted from fiduciary liability related to investment choices made by participants. Employers must provide employees with the opportunity to choose from a broad range of investment options; allow employees to transfer the assets in their accounts into and out of the various plan investment options with a frequency that is reasonable in light of the market volatility of those investment options; and offer investment options that permit employees to diversify their investments. If the plan meets the requirements of the regulation and a participant fails to diversify his or her account and invests all the account assets in his or her employer’s stock, the employer will be able to assert that the company is not responsible for any financial losses incurred by the participant because the company has complied with the regulation. Second, participants who manage the investments of their accounts are not considered to be fiduciaries. The employer is also not subject to potential fiduciary liability for the participant’s investment decisions. Plan sponsors are not relieved of all fiduciary responsibilities by complying with the regulation. For example, they remain responsible for the prudent selection of investment alternatives and for monitoring plan investments on an ongoing basis. Because defined contribution plans require that employees assume the investment risk, securities law protections applicable to investors are relevant to plan participants. Employees in participant-directed plans might be given the choice of investing in securities, including employer securities, as well as a variety of mutual funds. The securities laws require disclosure of information about investment objectives, performance, investment managers, fees, and expenses of mutual funds and information about the business objectives, financial status, and management of companies that are issuing securities.
However, distribution of these disclosure materials to plan participants making investments may depend on employer compliance with the requirements of ERISA’s 404(c) regulation. In addition, interests in certain pension or profit-sharing plans are securities subject to the registration and antifraud requirements of the Securities Act of 1933 (1933 act), which we discuss in further detail in appendix III. Pension or profit-sharing plans that have the investment characteristics of securities are required to register under the 1933 act. Interests of employees in plans are securities when the employees voluntarily participate in the plan and their individual contributions can be used to purchase employer stock. This generally includes 401(k) salary reduction plans and savings plans where participant contributions are used to purchase employer securities. If employer securities are offered and sold to employees pursuant to a pension plan, those securities must also be registered. The 1933 act requires registration of securities being offered for sale to the public. The registration statement, which SEC makes publicly available, must disclose basic business and financial information for the issuer with respect to the securities offering. SEC requires companies that offer securities to their employees under any employee benefit plan to register those securities on Form S-8. SEC generally makes the companies’ Form S-8 publicly available but does not routinely review these forms. The SPD may be used to satisfy the prospectus delivery requirement applicable to Form S-8. However, the SPD is not filed with SEC as part of the Form S-8. Although ERISA requires SPDs to be provided to participants, DOL no longer requires the SPD to be filed with the department. SEC generally limits its review of corporate filings to ensuring that the initial registration of the security and other reporting comply with its disclosure requirements.
SEC has no requirement in law or regulation to verify the accuracy or completeness of the information companies provide. SEC’s review of corporate filings may involve a full review, a full financial review, or monitoring of certain filings for specific disclosure requirements. In our work at SEC, we found that its ability to fulfill its mission has become increasingly strained, due in part to imbalances between SEC’s workload and staff resources. Like other aspects of SEC’s workload, the number of corporate filings has grown at an unprecedented rate. SEC’s 2001 goal was to complete a full financial review of an issuer’s annual report required by the Exchange Act in 1 of every 3 years—a review goal of about 30 to 35 percent of these annual reports per year. However, in 2001 SEC completed full or full financial reviews of only 16 percent of the annual reports filed, about half of its annual goal. In this post-Enron environment, SEC plans to reconsider how it will select filings for review and to revise its approach for allocating staff resources to conduct those reviews. SEC does not routinely review companies’ Forms S-8 for completeness or accuracy and has not routinely reviewed these filings for the last 20 years, according to SEC staff. SEC staff said that while they track the total number of Form S-8 filings each fiscal year, they do not separately track the number of filings for different types of plans, such as 401(k) plans or stock option plans. SEC staff can, however, take action against an issuer if it discovers that a Form S-8 does not comply with applicable law. For example, SEC has taken enforcement actions against companies that have abused the S-8 short form registration. In the late 1990s, some companies used Form S-8 filings inappropriately for raising capital and not for compensatory offerings for employee plans.
Recently, SEC has placed increased emphasis on clear, concise, and understandable language in prospectuses. SEC requires that, in drafting disclosure documents, registrants aim to write clearly and to provide for more effective communication. SEC implemented the plain English requirement for certain parts of the 1933 act prospectus. For example, with respect to mutual funds, SEC’s rules require that the prospectus contain information appropriate for an average or typical investor who may not be sophisticated in legal or financial matters. ERISA was enacted in 1974 within the context of defined benefit pension plans, where employers make plan investment decisions; consequently, ERISA does not require plan sponsors to make investment education or advice available to plan participants. Moreover, according to DOL officials, employers that sponsor pension plans are not required to provide educational materials on retirement saving and investing. Hence, employers are not required to provide information about the risks involved in investing in employer securities and the importance of diversification to a prudent investment strategy. Additionally, under ERISA, providing investment advice results in fiduciary responsibility for those providing the advice, while providing investment education does not. Industry officials we spoke with said that many companies provide employees with investment education and plan information. Plan participants are given a number of investment education materials, such as newsletters, quarterly reports on participant accumulations, and annual reports with benefit projections. Companies also provide information to employees about their investment plan options. Employees are also provided information explaining the value of diversification. Furthermore, according to these officials, diversification is a theme that they emphasize in their investment education programs.
Investment education varies by company in part because ERISA has no requirements about informing participants about investment risks or diversification. Industry officials we spoke with told us that many companies voluntarily provide some investment education to plan participants and that they do so because education is needed to improve employees’ abilities to manage their retirement savings. However, because there is no standard format for investment education, companies provide employees with the information that they believe is important to managing their retirement savings accounts, and this information varies by employer. DOL does not monitor the type of investment education provided to plan participants, and little is known about the accuracy and usefulness of the investment education programs and materials provided to employees. SEC provides broad investor education that applies to all investors, but it does not specifically target pension plan participants. Industry officials also said that providing investment education to employees does not necessarily mean that companies are providing information on the risks of holding employer securities. These officials said that telling plan participants that an investment may be risky or that an employee’s holdings are risky could be interpreted as providing investment advice. Consequently, companies provide general information about the benefits of diversification but little information about the risk of holding certain investments, such as employer stock. Some studies also indicate that the type and amount of investment education varies by company. For example, one study by a benefits consulting firm found that 24 percent of respondents reported that their companies offered investment information on an as-needed basis, and 11 percent reported that their companies offered no information at all.
The remaining respondents said their companies offered detailed information, either on an ongoing basis (33 percent) or at plan enrollment and annually thereafter (32 percent). Industry officials told us that many companies do not offer investment advice, mainly because of fiduciary concerns about the liability for such advice if it results in losses to the participant, even if the investment advisor is competent and there is no conflict of interest. Companies also have fiduciary concerns about their ability to select and monitor a competent investment advisor under ERISA’s prudence standard. Additionally, ERISA currently prohibits fiduciary investment advisors from engaging in transactions with clients’ plans where they have a conflict of interest, for example, when the advisors are providing other services such as plan administration. As a result, these investment advisors cannot provide specific investment advice to plan participants about their firm’s investment products without approval from DOL. Industry officials we spoke with said that more companies are providing plan participants informational sessions with investment advisors to help employees better understand their investments and the risk of not diversifying. They also said that changes are needed under ERISA to better shield employers from fiduciary liability for investment advisors’ recommendations to individual participants. In 1996, DOL issued guidance to employers and investment advisers on how to provide educational investment information and analysis to participants without triggering fiduciary liability. This guidance identifies and describes certain categories of investment information and education that employers may provide to plan participants.
These categories are (1) information about the plan, (2) general financial information, (3) information based on “asset allocation models,” and (4) “interactive investment materials.” According to DOL, these investment education categories merely represent examples of investment information and materials that, if furnished to participants, would not constitute the rendering of investment advice. DOL has recently issued guidance about investment advice, which should help clarify when companies can use independent investment advisors to provide advice to participants in retirement plans. In 2001, DOL issued Advisory Opinion 2001-09A. This advisory opinion was a response to an application for exemption filed with DOL on behalf of SunAmerica Retirement Markets, Inc. (SunAmerica), which sought exemption from the prohibited transaction restrictions. DOL determined that SunAmerica’s proposed method of issuing investment advice directly to plan participants would not violate the prohibited transaction provisions of ERISA. DOL’s ruling allows financial institutions to provide investment advice directly to retirement plan participants when the advice is based on the computer programs and methodology of a third-party, independent advisor, thereby eliminating conflicts of interest. DOL officials said that they hope the advisory opinion helps plan sponsors arrange the type of nonconflicted investment advice they are allowed to provide plan participants. The Enron collapse serves to illustrate what can happen under certain conditions when participants’ retirement savings are heavily invested in their employer’s securities.
When the employer’s securities constitute the majority of employees’ individual account balances and are the primary type of contribution the employer provides, employees are exposed to the possibility of losing more than their jobs if the company goes out of business or into serious financial decline—they are also exposed to the possibility of losing a major portion of their retirement savings. We presented other concerns about what can happen to employees’ retirement savings under certain conditions to the Congress in our testimony in February 2002. In addition to the issues of diversification and education, we suggested that further restrictions on floor-offset arrangements may be warranted. As our analysis shows, it is not unusual to find concentrations of employer securities in the plans of large firms, such as the Fortune 1,000, that cover a significant portion of employees. To the extent these defined contribution plans become the primary component of employees’ retirement savings, these plans are most subject to risk of loss, and employees and policymakers should be concerned about the risks employees face by holding large portions of their retirement savings in employer securities. This is especially important as fewer companies are offering defined benefit plans that could provide some level of guaranteed retirement income to employees even if they incur substantial losses in their defined contribution plans. Current ERISA disclosure requirements provide only minimum guidelines that companies must follow on the type of information they provide to plan participants. In addition, there is little government oversight of the information companies provide to plan participants. Consequently, the type and amount of information plan participants are receiving about their investments is not known.
Increasing the amount of disclosure provided to plan participants could help ensure that plan participants are at least getting some minimum level of information about investing, especially with regard to employer securities. In addition, providing plan participants with disclosures on the risks of holding employer securities and the benefits of diversification in mitigating losses may help employees make more informed decisions regarding the amount of employer securities they hold in their retirement plans. To address the lack of investment education and information provided to participants, the Congress should consider amending ERISA so that it specifically requires plan sponsors to provide participants in defined contribution plans with an investment education notice that includes information on the risks of certain investments, such as employer securities, and the benefits of diversification. We provided a draft of this report to the Department of Labor, the Department of the Treasury, and the Securities and Exchange Commission for review and comment. We received written comments from the Department of Labor that are reprinted in appendix IV. DOL, SEC, and the Department of the Treasury also provided technical comments on the draft. We incorporated each agency’s comments as appropriate. Included in the draft for DOL’s review was a recommendation to the Secretary of Labor to direct the Assistant Secretary, Pension and Welfare Benefits Administration, to require plan sponsors to provide participants in defined contribution plans with an investment education notice. DOL agreed with our conclusion that additional investment education is necessary but stated that the Secretary of Labor does not currently have the legal authority under ERISA to require an investment education notice. Consequently, we changed our recommendation to a matter for the Congress to consider: amending ERISA so that it requires plan sponsors to provide an education notice.
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. We are sending copies of this report to the Secretary of Labor; the Secretary of the Treasury; and the Chairman, Securities and Exchange Commission. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215, Richard Hillman at (202) 512-8678, George Scott at (202) 512-5932, or Debra Johnson at (202) 512-9603. Other major contributors include Joseph Applebaum, Tamara Cross, Rachel DeMarcus, Jason Holsclaw, Raun Lazier, Carolyn Litsinger, Gene Kuehneman, Alexandra Martin-Arseneau, Corinna Nicolaou, Vernette Shaw, Roger Thomas, and Stephanie Wasson. To determine the number and types of private pension plans invested in employer securities, we analyzed plan financial information filed annually (Form 5500s) with the Internal Revenue Service and the Pension and Welfare Benefits Administration (PWBA). The Form 5500 report must be submitted annually by the administrator or sponsor of any employee benefit plan subject to the Employee Retirement Income Security Act (ERISA), as well as by certain employers maintaining a fringe benefit plan. It contains various schedules with information on the financial condition and operation of the plan. PWBA provided us with a copy of the complete 1998 electronic Form 5500 database and a preliminary 1999 electronic Form 5500 database for our analysis. The 1998 database contained information from over 215,000 Form 5500 reports. We did not independently verify the accuracy of the Form 5500 databases. In addition, the data we analyzed were accurate only to the extent that employers exercised appropriate care in completing their annual Form 5500 reports.
We focused our analysis on the largest 1,000 corporations. To identify the Fortune 1,000 companies for our review, we used Fortune magazine's listing of the largest corporations in the United States, which ranks corporations by their revenue during the preceding year. After determining the 1,000 largest corporations, we analyzed data for the Fortune 1,000 companies (the corporations and their subsidiaries) for plan year 1998, the most recent year for which complete plan-specific Form 5500 data were available for our review. To review the Fortune 1,000's Form 5500s, we matched the Fortune 1,000 companies to their pension plans on the basis of their Employer Identification Numbers (EINs). An EIN, also known as a federal tax identification number, is a nine-digit number that the IRS assigns to organizations. We used several methods to identify the EINs associated with the Fortune 1,000. We started with a list of EINs for over 500 companies provided to us by the Pension Benefit Guaranty Corporation (PBGC). To identify the EINs for the remaining companies, we searched public filings, including 10-K statements filed with the SEC, using the search tools available through nexis.com. Where we could not find a company's EIN, and for companies whose EIN was not associated with a Form 5500, we conducted a text search of the electronic Form 5500 data to find plans sponsored by these companies. Additionally, we used 10-K filings for the Fortune 1,000 companies to identify major subsidiaries that might have their own pension plans. We conducted further text searches of the electronic Form 5500 data to identify pension plans for these subsidiaries. Our analysis includes information for subsidiaries to the extent we were able to identify them during our review.
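The EIN-based matching step described above can be sketched as follows. This is an illustrative sketch only: the company names, EINs, and field names are hypothetical and do not reflect the actual Form 5500 schema or data.

```python
# Hypothetical sketch of matching Fortune 1,000 companies to Form 5500
# filings by sponsor EIN. All names, EINs, and field names below are
# illustrative, not actual data.
fortune_eins = {
    "123456789": "Acme Corp",   # hypothetical company
    "987654321": "Globex Inc",  # hypothetical company
}

form5500_filings = [
    {"ein": "123456789", "plan_name": "Acme 401(k) Plan"},
    {"ein": "123456789", "plan_name": "Acme Pension Plan"},
    {"ein": "555555555", "plan_name": "Unrelated Co. Plan"},
]

# Keep only plans whose sponsor EIN belongs to a Fortune 1,000 company,
# tagging each matched plan with its parent company's name.
matched = [
    {**plan, "company": fortune_eins[plan["ein"]]}
    for plan in form5500_filings
    if plan["ein"] in fortune_eins
]

print([plan["plan_name"] for plan in matched])
# ['Acme 401(k) Plan', 'Acme Pension Plan']
```

Plans whose sponsor EIN is not on the Fortune 1,000 list simply drop out of the match, which is why the text searches described above were needed as a backstop for companies whose EINs could not be found.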
We eliminated from our analysis any Form 5500 returns that did not report end-of-year assets, and we also eliminated plans that did not report end-of-year participants. This resulted in a database containing the information from 3,480 Form 5500 returns filed by 996 of the Fortune 1,000 companies or their subsidiaries. Our totals for the number of plan participants include double counting because some individuals may participate in more than one pension plan sponsored by the same employer. Because master trust holdings accounted for 45 percent of the assets held by the Fortune 1,000 employer-sponsored plans, we tried to identify employer securities held outside of master trusts. To calculate the percentage of pension plan assets held as employer securities, we first subtracted master trust assets from total plan assets to arrive at "known assets." We then calculated the percentage of known assets composed of employer securities to determine the concentration of plan assets in employer securities. On 1998 Form 5500 filings, plans holding assets in master trust accounts reported only the total asset value of these holdings and did not itemize or otherwise identify the individual investments held by a master trust. As such, we were unable to determine what fraction of that 45 percent consisted of employer securities. However, we analyzed preliminary 1999 Form 5500 data for master trust accounts and found that some of the assets reported by these accounts were holdings of employer securities. To address the implications of investing in employer securities, we identified companies whose pension plans were heavily invested in their own companies' securities. We specifically looked for companies where employees had experienced substantial retirement losses, similar to Enron, and ones where employees had benefited.
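The "known assets" calculation described above can be expressed as a short function. The dollar figures in the example are hypothetical and are used only to illustrate the arithmetic.

```python
def employer_security_concentration(total_assets, master_trust_assets,
                                    employer_securities):
    """Percentage of "known assets" held as employer securities.

    Mirrors the calculation described above: master trust assets, whose
    composition was not itemized on 1998 Form 5500 filings, are first
    subtracted from total plan assets to arrive at known assets, and the
    concentration is computed against that base.
    """
    known_assets = total_assets - master_trust_assets
    if known_assets <= 0:
        return None  # all assets sit in unitemized master trusts
    return 100.0 * employer_securities / known_assets

# A hypothetical plan: $500M total assets, $200M held in a master trust,
# and $90M of identifiable employer securities outside the trust.
print(employer_security_concentration(500e6, 200e6, 90e6))  # 30.0
```

Note that because master trust holdings were not itemized, the resulting percentage understates or overstates the true concentration to the extent that the trusts themselves held employer securities, which is the limitation acknowledged above.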
Given the sensitivity and nature of our review, it was difficult to find companies that would speak with us and share their plans' investment experiences, whether good or bad. However, we were able to find officials at two companies who were willing to discuss their pension plans and experiences. To identify and describe the implications for companies where employees have experienced significant losses due to bankruptcy or declines in the market valuation of the company's stock, we obtained information about each company's history and pension plans through U.S. news sources, trade industry reports, business journals, and company Web sites. We researched fraud cases on the World Wide Web; reviewed legal briefs and opinions outlining the details of lawsuits filed against the companies; and reviewed bankruptcy filings and proceedings to describe the history of events that led each company to seek bankruptcy protection. To identify and describe situations where employees have not experienced significant losses, we interviewed officials at two companies whose private pension plans are heavily invested in company securities. We developed a set of structured interview questions to obtain information about the companies, specifically background information and information about each company's pension plans. We also reviewed and analyzed the companies' summary plan descriptions and prospectuses to determine how the plans were administered and to identify the requirements and restrictions of each plan. To report on the regulatory provisions for disclosures to participants owning employer stock through their employer-sponsored plans, we reviewed relevant laws and regulations and spoke with agency and industry officials. To understand the regulatory provisions for securities, we reviewed the Securities Act of 1933 and the Securities Exchange Act of 1934. Similarly, we reviewed ERISA and the 404(c) regulations to understand disclosure requirements for pension plans.
We also reviewed past reports on private pension plans, ERISA, Employee Stock Ownership Plans (ESOPs), and issues regarding investment education and advice. We also spoke with Department of Labor (DOL) pension and legal experts and officials from the Securities and Exchange Commission's Market Regulation Division, Investor Education Division, Corporate Finance Division, and Enforcement Division. To determine the types of disclosures companies were providing to plan participants, we spoke with officials from the American Benefits Council, the 401(k) Profit Sharing Council of America, the ESOP Association, the ERISA Industry Committee, the American Society of Pension Actuaries, the Investment Company Institute, and retirement plan administrators and financial service providers. To determine whether the SEC should reconsider its administrative determination not to explore application of the Securities Act and the Securities Exchange Act to defined contribution plans, we first researched what SEC's determination had been and then determined whether SEC planned to reconsider it. We researched the relevant legal history and SEC's position papers. We reviewed relevant securities laws, SEC regulations, and public SEC statements, as well as pertinent legal matters. We discussed SEC's position on the application of the Securities Act and the Securities Exchange Act to defined contribution plans with SEC's legal counsel and appropriate SEC staff. Enron was engaged in the business of providing natural gas, electricity, and communications to wholesale and retail customers. Only months before its bankruptcy filing, the company was regarded as one of the most innovative, fastest growing, and best managed businesses in the United States.
However, Enron’s problems did not arise in its core energy operations, but in other ventures, particularly “dot com” investments in Internet and communications businesses and in certain foreign subsidiaries. Rather than recognize these problems, the company assigned business losses to unconsolidated partnerships and other vehicles, which reportedly inflated its income. On December 2, 2001, the Enron Corporation filed for Chapter 11 bankruptcy protection. The decline in Enron’s stock price and its subsequent failure substantially reduced the value of many of its employees’ retirement accounts. Under Enron’s 401(k)-type plan, participants were allowed to contribute from 1 to 15 percent of their eligible base pay in any combination of pre-tax salary deferrals or after-tax contributions, subject to certain limitations. Participants were immediately fully vested in their voluntary contributions. Enron generally matched 50 percent of a participant’s pre-tax contributions up to a maximum of 6 percent of the employee’s base pay, with matching contributions invested solely in the Enron Corporation Stock Fund. Participants were allowed to reallocate their company matching contributions among other investment options when they reached the age of 50. On April 8, 2002, a class action suit was filed on behalf of the plan participants, representing 24,000 current and former Enron employees who participated in Enron’s plans. The lawsuit alleges that the Enron Corporation Savings Plan Administrative Committee and other persons responsible for safeguarding the assets of the employees’ plans are liable for breaching their fiduciary duties under ERISA. In addition, the Department of Labor (DOL) has opened an investigation to determine whether there were any ERISA violations in the operation of the company’s employee benefit plans. DOL also reached an agreement with Enron to appoint an independent fiduciary to assume control of the company’s retirement plans.
SEC had not taken any enforcement actions as of August 1, 2002. Color Tile’s financial problems began as a result of a 1993 business transaction that left the company undercapitalized, without the ability to service its debts and operate competitively. In 1995, the company defaulted on a $10.4 million interest payment, forcing it to seek relief under Chapter 11 of the bankruptcy code. In 1996, after 44 years in the floor-covering business and several failed attempts to remain competitive in a changing flooring market, Color Tile sought Chapter 11 protection. One day after filing for bankruptcy protection, the company closed 234 of its 621 company-owned stores nationwide. After several attempts to save the company, Color Tile closed its remaining stores a year later, affecting some 3,900 employees. Company executives blamed its financial troubles on slow flooring sales and competition from other centers. In 1996, a former Color Tile employee sued the company, alleging mishandling of plan assets, including investment of plan assets in Color Tile property. The employee won, and the settlement required the plan trustee and fiduciary carrier to pay about $4 million to Color Tile’s $34 million 401(k) plan. In 1993, DOL investigated Color Tile and found no violations. SEC did not open an investigation of Color Tile. Southland Corporation began to experience financial difficulties as a result of a failed 1987 leveraged buyout. The value of the company’s stock declined, and the company found itself with $4.9 billion of debt incurred as a result of the buyout. In addition, the company lost $1.3 billion and ran out of money to pay the interest on the debt, forcing it to sell 58 of its convenience stores to a Japanese retailer. Southland’s pension plans included a 401(k) plan and a profit-sharing plan.
Fifty-eight percent of the assets in Southland’s 401(k) plan were used to buy 1,100 7-Eleven stores, which were then leased back to the company. After its bankruptcy, Southland reduced its holdings in 7-Eleven stores to 46 percent of the 401(k) plan’s assets. In 1991, Southland’s Japanese partners acquired 70 percent of Southland’s common stock for $430 million. The cash infusion allowed the company to emerge from bankruptcy with its debt load reduced by 85 percent. Southland emerged from bankruptcy protection on March 5, 1991. Lucent Technologies, which spun off from AT&T in 1996, at one time held a dominant position in the telecommunications equipment market. During the first quarter of fiscal year 2000, the company’s revenues began faltering as a result of its inability to develop and deliver new products as the market required. In addition, Lucent developed problems with AT&T, its largest and most important customer. As a result, Lucent shares began falling in January 2000, when the company said its fourth-quarter profits would fall short. In subsequent quarters, the company kept cutting its forecasts, and the shares kept plunging. Between December 31, 1999, and July 2001, Lucent shares declined from $70 to $6. In fiscal year 2001, Lucent posted a $16 billion loss and anticipated a large-scale layoff. Employer contributions to Lucent’s management 401(k) plan were made in the form of employer stock. For nonmanagement employees, about one-third of Lucent’s workforce, the employer 401(k) match was in the form of an ESOP contribution made in employer stock. It is not clear to what extent participants were able to diversify their employer contributions. With some 30 percent of the company’s 401(k) plan invested in company stock, employee account balances declined when Lucent’s stock price fell. The collapse of Lucent’s stock sparked a class-action lawsuit by Lucent employees whose 401(k) accounts suffered losses.
The suit alleges that Lucent breached its fiduciary duty by allegedly failing to inform employees that investing in Lucent stock was imprudent. The lawsuit also alleges that Lucent executives knew the company’s business was deteriorating but continued to encourage participants and beneficiaries to make and maintain substantial investments in company stock. The case is currently pending in the courts. SEC had not taken any enforcement actions as of August 1, 2002. The federal securities laws regulate the securities markets, the companies issuing securities, and market participants. The securities laws can relate to employee benefit plans in several ways. The interests of employees in the plan itself can be securities, or the plan may invest in instruments that are securities, such as stocks, bonds, or interests in mutual funds. Finally, the plan may have investments in collective investment vehicles, such as interests in pooled investment funds, bank common and collective trust funds, or insurance company pooled separate accounts. In most cases, participation interests in pension and profit-sharing plans are not required to be registered under the Securities Act of 1933 (1933 act). Registration is not required unless participation in the plan is voluntary and employee contributions can be used to purchase employer securities. Thus, where a plan includes a 401(k) arrangement and employees can choose to invest in employer securities through voluntary salary reductions or deferrals, participation interests will be securities. Pension and profit-sharing plans that are required to register are permitted by SEC to use an abbreviated registration form and may use various documents, including a Summary Plan Description, as the prospectus deliverable to employees. The company securities offered to employees through such a voluntary and contributory employee benefit plan must be registered under the 1933 act, unless an exemption is available.
These offerings qualify for an abbreviated registration statement. Interests of plans in collective investment vehicles are also securities, but may be exempt from registration. The 1933 act requires the registration with the Securities and Exchange Commission (SEC) of all offers and sales of securities, unless an exemption from registration is available. The registration regime is based on the premise that investors are protected if all relevant features of the securities being offered are fully and fairly disclosed. Full disclosure is believed to provide investors with sufficient opportunity to evaluate the merits of an investment. A registration statement that meets the 1933 act’s disclosure requirements must be filed, unless one of the exemptions under section 3 or 4 of the 1933 act is available. The 1933 act also prohibits the use of fraud or misrepresentation in the offer or sale of a security, whether or not registration is required. Section 2(a)(1) of the 1933 act contains a broad definition of security, which includes any note, stock, treasury stock, bond, debenture, evidence of indebtedness, certificate of interest, or participation interest in an investment contract. The Securities Exchange Act of 1934 (Exchange Act) also imposes registration and reporting requirements upon issuers of certain securities. These requirements keep shareholders and markets informed about the issuer. Section 12(a) of the Exchange Act requires that all securities traded on a national exchange be registered with the SEC. The Exchange Act also requires an issuer to register if it has a class of equity securities held by more than 500 shareholders of record and more than $10 million in total assets. An issuer with a class of registered securities must file periodic reports, including quarterly and annual reports. With respect to the definition of security, the Supreme Court in SEC v. W.J. Howey Co. 
determined that “an investment contract for the purposes of the 1933 act means a contract, transaction or scheme whereby a person (1) invests his money (2) in a common enterprise and (3) is led to expect profits (4) solely from the efforts of a promoter or a third party.” In International Brotherhood of Teamsters v. Daniel, the Supreme Court found that an interest in a compulsory (all employees automatically participate), noncontributory (the employer makes all the contributions) defined benefit employee pension plan is not a security under the 1933 act’s definition. In determining that the interest in the plan did not meet the commonly understood definition of an investment contract, the Court focused on the factors set out in the Howey test. First, the Court found that an employee who participates in a noncontributory, compulsory pension plan makes no payment into the pension plan, and the employer’s payments into the plan do not relate to the individual benefit received by employees. Therefore, the investment portion of the Howey test is not satisfied in the case of a defined benefit plan. In addition, the Court found that because a major part of the retirement benefits were to be derived from the employer’s contributions, rather than from the efforts of the plan’s managers in investing the income, the plan did not have sufficient profit aspects to fall within the test for an investment contract in Howey. The Court also pointed out that the fact that ERISA comprehensively governs the use and terms of employee pension plans severely undercuts all arguments for extending the securities laws to noncontributory, compulsory pension plans. The Court explained that ERISA regulates the substantive terms of pension plans, setting standards for plan funding and limits on the eligibility requirements an employee must meet, as well as requirements for disclosure of specified information in a specified manner.
In 1941, the SEC first stated its view that employee interests in pension and profit-sharing plans generally are securities, but it did not require registration of interests in the plans unless the plan provided for purchase of the employer’s stock. In SEC’s view, the burden of preparing a registration statement in connection with a pension plan could result in many employers not sponsoring pension plans. However, a registration requirement is justified if employer stock can be purchased, because the employer has a direct financial interest in the solicitation of employees’ contributions. This conclusion was based on the view that where employer stock is among the investment options, “it is not unfair to make the employer assume the same burdens which corporations typically assume when they go to the public for financing.” According to the Supreme Court’s opinion in the Daniel case, after 1941, SEC made no further efforts to register plan securities other than voluntary, contributory plans where the employees’ contributions were invested in the employer’s securities. Subsequent to the Daniel decision, the SEC issued two major interpretive releases, the first of which set forth views on when a participation interest in a pension plan is an investment contract and thus a security. Release No. 6188, dated February 1, 1980, reiterated the SEC’s view that, while employee interests in pension plans generally are securities, employee interests should be registered only when the plan is both voluntary and contributory and may invest in employer stock an amount greater than that paid into the plan by the employer. The release defines a “voluntary” plan as one in which employees may elect whether or not to participate, and a “contributory” plan as one in which employees make direct payments, usually in the form of cash or payroll deductions.
This administrative practice is based on the SEC’s opinion that (1) registration serves no purpose where a plan is involuntary, since in that situation the participant is not permitted to make an investment decision, and (2) the costs of registration are a significant burden to an employer and should be imposed only where the employer has a direct financial interest in soliciting voluntary employee contributions. The 1980 release found that voluntary, contributory plans where an employee is permitted to invest in employer securities met the four parts of the Howey test defining an investment contract. First, the payment of cash or its equivalent by an employee satisfies the “investment” requirement. Second, the “common enterprise” requirement is met where the interests of employees in the plan are “separable” and possess “substantially the characteristics of a security.” In both defined contribution and defined benefit plans, there is a separate account maintained for each participant to the extent of each person’s contribution to the plan. Third, the “expectation of profits” requirement is met when the employee voluntarily contributes his or her own funds to the plan and can expect that the funds will generate profits through the efforts of the plan managers. In the Daniel case, the Court suggested that unless a defined benefit plan has a substantial dependence on earnings, as well as vesting requirements that are not excessively difficult to satisfy, there might be no expectation of profits. The 1980 release stated, however, that a voluntary, contributory defined benefit plan could meet the expectation of profits test because it may depend on earnings to pay promised benefits and because the vesting requirements under ERISA are much less strict than the requirement that was present in the Daniel case. 
Finally, the 1980 release stated that the “from the efforts of others” test was easily satisfied because the earnings generated by a plan would result from the efforts of the plan managers. SEC’s analysis concluded that the interests of employees in voluntary, contributory pension plans are securities within the meaning of the 1933 act. The staff also concluded that the interests are offered and sold to employees within the meaning of the 1933 act. Consequently, the interests are subject to registration requirements unless one of the exemptions from registration applies. Antifraud laws apply to all sales of securities. Section 3 of the 1933 act exempts various types of securities from the registration requirements, generally based on the nature of the issuer and the terms of the security. The statutory exemptions apply to the 1933 act’s registration requirements, but do not apply to prevent potential liability under the antifraud provisions. Section 3(a)(2) of the 1933 act exempts collective funding vehicles maintained by banks and insurance companies for employee benefit plans and the interests of employees in qualified plans, unless any employee funds can be used to purchase employer securities. In addition, if the plan does not restrict the plan’s overall investment in employer securities so that it cannot exceed the employer’s contribution, the exemption is not available, and the interests offered by the plan must be registered. Under the SEC’s analysis, registration will generally be required in connection with any plan that permits contributions from participants and permits all or any portion of these contributions to be applied to the purchase of employer stock. The SEC’s view that the 3(a)(2) exemption extends to pension plans is based on its reading of the legislative history of the provisions and its view that the section should be given a broad interpretation so as to exempt most plans. On January 15, 1981, the SEC issued Release No. 
33-6281, an interpretive release providing further guidance on the application of the 1933 act to employee benefit plans. In the 1981 release, the staff expanded on the definition of a voluntary, contributory plan, explaining that the determination of whether a plan is voluntary and contributory depends solely on whether participating employees can decide at some point whether or not to contribute their own funds to the plan. The release also discussed the amendments to the section 3(a)(2) exemptions made by the Small Business Investment Incentive Act of 1980. The 1980 amendments broadened the scope of the exemption by including certain insurance contracts and governmental plans within its coverage. In addition, the amendments make clear that any security arising out of a contract with an insurance company will be exempt under section 3(a)(2) in connection with a plan specified in the section. In the 1981 release, the staff also discussed cash or deferred arrangements qualifying under section 401(k) of the Internal Revenue Code. Arrangements considered in the 1981 release allowed employees to elect to receive immediate payment of the employer’s plan contribution or to defer receipt and have it invested in a plan where it will accumulate for later payment. The staff determined that these arrangements are not contributory on the part of employees because they did not involve out-of-pocket investments by employees of their own funds in employer stock. Instead, the plans are funded by employer contributions. However, subsequent to the 1981 release, the Treasury Department issued rules under section 401(k) that allowed plans to provide for pre-tax employee contributions through salary reduction. In a salary reduction plan, the employee elects to reduce his or her compensation and have the amount contributed to a plan. This type of salary reduction is considered to be an out-of-pocket contribution into the plan.
Because such a plan is voluntary and contributory, plan interests would be securities. Registration of 401(k) plan interests in a salary reduction plan would be required if employee contributions are permitted to be invested in employer stock. Other Securities Act exemptions may apply to offers and sales of employer securities. In 1988, SEC adopted Rule 701 to exempt from 1933 act registration employee plans of employers that are not subject to the Exchange Act’s periodic reporting requirements. Rule 701 is available to a number of types of employee benefit plans. During a 12-month period, an offering may be exempt for an amount up to the greatest of $1 million, 15 percent of the total assets of the issuer, or 15 percent of the outstanding amount of the class of securities being offered and sold in reliance on Rule 701. Securities acquired under a Rule 701 offering are treated as restricted securities and may not be resold unless the 1933 act’s registration requirements are complied with or unless another exemption applies. Private and limited offerings also are exempt whether or not the company is subject to Exchange Act reporting. Under section 3(b) of the 1933 act, SEC may adopt regulations exempting offerings of $5 million or less. Under section 3(a)(11), “intrastate” offerings are exempt from registration where all aspects of the offering are within the confines of one state and are purely local in nature. Section 4(2) exempts transactions by an issuer not involving any public offering. This exemption applies to offerings to sophisticated institutional and individual investors who do not need the protections of federal registration. In SEC v. Ralston Purina Co., the Supreme Court determined that an offering to employees was not necessarily exempt as not involving a public offering. Ralston Purina made its stock available to all employees regardless of their connection with the company or knowledge of the business.
Citing the design of the 1933 act to protect investors by promoting full disclosure of information necessary to informed investment decisions, the Court found that the employees were a class of persons that needed the protection offered by registration because they were not able to fend for themselves in connection with the transaction. An ESOP is a defined contribution plan that invests primarily in employer securities and usually distributes the securities upon the employee’s retirement. Under SEC’s analysis, an employee’s interest in a voluntary, contributory ESOP is a security. In Uselton v. Commercial Lovelace Motor Freight, the Tenth Circuit held that an interest in a contributory and voluntary employee stock ownership plan was a security and that ERISA did not provide sufficient protection to displace the application of the federal securities laws. However, an interest in a mandatory stock ownership plan completely funded by the employer was held not to be a security in Matassarin v. Lynch. In May 1992, the SEC’s Division of Investment Management issued a study entitled “Protecting Investors: A Half Century of Investment Company Regulation.” The study proposed that all pooled investment vehicles for participant-directed defined contribution plans be required to deliver prospectuses for the underlying investment vehicles to plan participants. The study reviewed the legislative history of the 1970 amendments to section 3(a)(2) of the 1933 act and found that the basis for the exemption was concerns expressed by both the banking and insurance industries that the lack of a clear exemption under the securities laws for pooled investment vehicles might expose banks and insurance companies to civil liabilities. Congress exempted these pooled investment vehicles, in part, because they were subject to oversight by bank and insurance regulators.
The interests issued by the pooled investment vehicles in question were still subject to the anti-fraud provisions of the 1933 act, notwithstanding the amendments. In addition, Congress assumed that the person making investment decisions for a plan (the sponsoring employer or a professional investment manager) was a sophisticated investor able to fend for itself with the application of only the 1933 act’s antifraud provisions. The study highlighted, however, that since the passage of the 1970 amendments, the character of employee benefit plans has shifted from defined benefit plans, in which the plan sponsor bears the investment risk, to participant-directed defined contribution plans, in which the plan participant bears the investment risk. Finding that the information received by plan participants was far less than the information received by investors who invest directly in securities issued by investment companies and other issuers, the Division of Investment Management expressed its view that disclosure to these plan participants should be improved. It recommended that the SEC send to Congress legislation that would: (i) remove the current exemption from registration in section 3(a)(2) for interests in pooled investment vehicles consisting of assets of participant-directed defined contribution plans; and (ii) require delivery of the prospectuses and other disclosure documents of the pooled investment vehicles (other than mutual funds) to all plan participants. Subsequent to the issuance of the study, the DOL issued voluntary rules under section 404(c) of ERISA that provide plan fiduciaries with a safe harbor from liability under certain conditions when plan participants exercise control over the assets in their individual accounts.
One of the rule’s specific guidelines allowing fiduciaries of participant-directed plans potentially to avoid fiduciary liability is that plan participants who invest in securities that are subject to the 1933 act receive at or about the time of a participant’s initial investment in the securities a copy of the issuer’s most recent prospectus. In general, the guidelines obligate the plan sponsor to provide or make available to plan participants sufficient information so that they may make informed investment decisions. While the disclosures required by the 404(c) rules generally make more information available to plan participants by encouraging plan sponsors to provide or make available more information about the underlying investment options offered by the plan, the view of the Division of Investment Management is that plan participants have a continuing need for information in order to evaluate their investments, and decide whether to maintain or reallocate those investments. Accordingly, the approach of the Division of Investment Management would go farther by requiring delivery to plan participants of a current mutual fund prospectus on a continuing basis as well as delivery of annual and semi-annual shareholder reports by mutual funds and other underlying investment vehicles. If Securities Act registration of employee’s interests in an employee benefit plan is required, then Form S-8 is generally the appropriate form for use. Form S-8 is also used for registering employer securities issued in connection with employee benefit plans. Form S-8 is available only if the employer is subject to the Exchange Act reporting requirements. Form S-8 utilizes an abbreviated disclosure format that reflects the SEC’s distinction between offerings made to employees primarily for compensatory and incentive purposes and offerings made by registrants for capital-raising purposes. 
The SEC has exercised its rule-making authority to reduce the costs and burdens incident to registration of employee benefit plan securities. The SEC substantially revised Form S-8 in 1990. The revisions included making the registration statements effective automatically upon filing. A prospectus is customarily part of a registration statement, and contains the basic business and financial information about the issuer with respect to a particular securities offering. Investors use the prospectus to appraise the merits of the offering and make educated investment decisions. However, Form S-8 is the only registration form that does not require the registrant to prepare and file with the SEC a separate document to satisfy the prospectus delivery requirements under the federal securities laws. Instead, Form S-8 requires only that certain specified current plan information be delivered to employees in a timely fashion. No particular legal format is specified. The information could be provided in one or more documents prepared in the ordinary course of employee communications. Registrants can deliver materials required to be prepared for plan participants by ERISA and could deliver the Summary Plan Description as a basic disclosure document. The issuer must also supply participants with a written statement that certain documents are incorporated by reference into the prospectus, and advise the participant of their availability on request. These documents include the Exchange Act filings containing issuer information and financial statements. At the same time, the SEC also permitted 1933 act registration of an indeterminate amount of plan interests; simplified the calculation of filing fees; and amended Form 11-K, the Exchange Act annual report for employee benefit plans, to require only plan financial statements. 
Section 12(g) of the Exchange Act requires that registration statements be filed by issuers that have both a class of equity securities having more than 500 shareholders of record and more than $10 million in total assets. Companies must register their stock and satisfy all reporting requirements of the Exchange Act if these criteria are met. For purposes of determining the number of record holders of a class of securities, an employee benefit plan holding employer securities is counted as only one record holder. If the employer’s securities must be registered under the Exchange Act, the employer will incur periodic reporting obligations, including annual and quarterly reports, as well as filings reporting certain specified material changes in the issuer’s condition or operations. If the interests of the plan participants are considered securities, the plan may be subject to registration under the Exchange Act. However, interests in qualified plans are exempt from registration under the Exchange Act because Rule 12h-1 exempts from registration all interests in employee stock bonus, stock purchase, pension, profit sharing, retirement, incentive, or similar plans that are not transferable by the employee. Employee plans that are owners of securities that are registered under the Exchange Act may be subject to different Exchange Act reporting requirements. A plan that becomes the beneficial owner of more than 5 percent of a class of equity securities registered under the Exchange Act must file a report with the SEC on Schedule 13G. When a plan acquires stock for the benefit of officers and directors of an employer, the officers and directors are required to follow the Section 16 reporting requirements. 
Transactions of these company insiders may be subject to the short-swing profit recovery rules if the insider switches into or out of an employer stock fund or takes a cash distribution from the fund in a “discretionary transaction” that occurs less than 6 months after any previous “opposite way” transaction. Section 10(b) of the Exchange Act prohibits the use of any manipulative or deceptive practices in connection with the purchase or sale of a security. Rule 10b-5 makes it unlawful for any person to make a material misstatement or omission in connection with the purchase or sale of a security. Section 10(b) and Rule 10b-5 will apply to material misrepresentations and omissions made to plan participants in connection with plan transactions that involve securities. Violations of Rule 10b-5 can be asserted by plan participants if the plan is making material misstatements or omissions in the materials the plan provides to participants in connection with a sale of company stock to plan participants. Rule 10b-5 can also apply to the purchase or sale of a security on the basis of material nonpublic information about that security in breach of a duty of trust or confidence. This could apply where an officer or director buys or sells shares through a plan and was aware of material nonpublic information when the transaction took place. Section 17(a) of the 1933 act prohibits fraud, material misstatements and omissions of fact in connection with the sale of securities. Section 17(a) applies whether the sale is registered or exempt from registration under the 1933 act. Neither Section 17(a) nor Exchange Act Rule 10b-5 imposes an affirmative duty to disclose, but either can impose liability for omissions that make statements materially misleading. Historically, SEC has taken the position that interests in employee benefit plans can be securities for purposes of the 1933 act requirements to register offers and sales of securities. 
However, SEC has taken the view that offers and sales of plan interests are not subject to registration unless the plan allows employee funds to be used to purchase employer stock. In 1979, the U.S. Supreme Court decided in International Brotherhood of Teamsters v. Daniel that interests in plans where employees had no choice concerning participation and where employees did not make contributions to the plan were not securities and did not have to be registered. In the wake of the Supreme Court’s decision, SEC issued two releases indicating that only voluntary, contributory plans where employee funds could be invested in employer stock would be required to file registration statements. SEC’s position is based, in part, on its interpretation of the registration exemptions contained in section 3(a)(2) of the 1933 act. In SEC’s view, in light of the Daniel opinion, the 3(a)(2) exemption applies to all qualified employee plans, except those that allow the use of employee funds to purchase employer stock. While SEC’s 1980 release indicated that it did not favor a broader registration requirement, this release was issued when the prevalent plan was a defined benefit plan. SEC has not reconsidered its position as expressed in this 1980 release and believes it is bound by the Supreme Court’s decision in Daniel.
The financial collapse of large firms and the effects on workers and retirees have raised questions about retirement funds being invested in employer securities and the laws governing such investments. Pensions are an important source of income for many retirees, and the federal government has encouraged employers to sponsor and maintain pension and savings plans for their employees. The continued growth in these plans and their vulnerabilities have caused Congress to focus on issues related to participants investing in employer securities through employer-sponsored retirement plans. GAO's analysis of the 1998 plan data for the Fortune 1,000 firms showed that 550 of those companies held employer securities in their defined benefit plans or defined contribution plans, covering 13 million participants. Investment in employer securities through employer-sponsored retirement plans can present significant risks for employees. If employees' retirement savings are largely in employer securities in these plans, employees risk losing not only their jobs should the company go out of business, but also a significant portion of their savings. Even if employers do not declare bankruptcy, employees are still subject to the dual risk of loss of job and loss of retirement savings because corporate losses and stock price declines can result in companies significantly reducing their operations. Under the Employee Retirement Income Security Act and the Securities Acts, the Department of Labor and the Securities and Exchange Commission (SEC) are responsible for ensuring that certain disclosures are made to plan participants regarding their investments. Although employees in plans where they control their investments receive disclosures under the act regarding those investments, such regulations do not require companies to disclose the importance of diversification or warn employees about the potential risks of owning employer securities. 
SEC requires companies with defined contribution plans that offer employees an opportunity to invest in employer stock to register and disclose to SEC specific information about those plans. In addition, in most cases the underlying securities of those plans must be registered with SEC. However, SEC does not routinely review these company plan filings because pension plans generally fall under other federal regulation.
We are all aware that certain key large-scale terrorist incidents at home and abroad since 1993 have dramatically raised the public profile of U.S. vulnerability to terrorist attack. The bombings of the World Trade Center in 1993 and of the federal building in Oklahoma City, Oklahoma, in 1995, along with terrorists’ use of a nerve agent in the Tokyo subway in 1995, have elevated concerns about terrorism in the United States—particularly terrorists’ use of chemical and biological weapons. Previously, the focus of U.S. policy and legislation had been more on international terrorism abroad and airline hijacking. The U.S. intelligence community, which includes the Central Intelligence Agency, the National Security Agency, the Federal Bureau of Investigation (FBI), and others, has issued classified National Intelligence Estimates and an update on the foreign-origin terrorist threat to the United States. In addition, the FBI gathers intelligence and assesses the threat posed by U.S. or domestic sources of terrorism. What is important to take away from these intelligence assessments is the very critical distinction made between what is conceivable or possible and what is likely in terms of the threat of terrorist attack. According to intelligence agencies, conventional explosives and firearms continue to be the weapons of choice for terrorists. Terrorists are less likely to use chemical and biological weapons than conventional explosives, although the likelihood that terrorists may use chemical and biological materials may increase over the next decade. Chemical and biological agents are less likely to be used than conventional explosives, at least partly because they are more difficult to weaponize and the results are unpredictable. According to the FBI, the threat of terrorists’ use of chemical and biological weapons is low, but some groups and individuals of concern are beginning to show interest in such weapons. 
Agency officials also have noted that terrorists’ use of nuclear weapons is the least likely scenario, although the consequences could be disastrous. The FBI will soon issue its report on domestic terrorist incidents and preventions for 1996. According to the FBI, in 1996, there were 3 terrorist incidents in the United States, as compared with 1 in 1995; zero in 1994; 12 in 1993; and 4 in 1992. The three incidents that occurred in 1996 involved pipe bombs, including the pipe bomb that exploded at the Atlanta Olympics. U.S. policy and strategy have evolved since the 1970s, along with the nature and perception of the terrorist threat. The basic principles of the policy continue, though, from the 1970s to today: make no concessions to terrorists, pressure state sponsors of terrorism, and apply the rule of law to terrorists as criminals. U.S. policy on terrorism first became formalized in 1986 with the Reagan administration’s issuance of National Security Decision Directive 207. This policy resulted from the findings of the 1985 Vice President’s Task Force on Terrorism, which highlighted the need for improved, centralized interagency coordination of the significant federal assets to respond to terrorist incidents. The directive reaffirmed lead agency responsibilities, with the State Department responsible for international terrorism policy, procedures, and programs, and the FBI, through the Department of Justice, responsible for dealing with domestic terrorist acts. 
Presidential Decision Directive (PDD) 39—issued in June 1995 following the bombing of the federal building in Oklahoma City—builds on the previous directive and contains three key elements of national strategy for combating terrorism: (1) reduce vulnerabilities to terrorist attacks and prevent and deter terrorist acts before they occur; (2) respond to terrorist acts that do occur—crisis management—and apprehend and punish terrorists; and (3) manage the consequences of terrorist acts, including restoring capabilities to protect public health and safety and essential government services and providing emergency relief. This directive also further elaborates on agencies’ roles and responsibilities and some specific measures to be taken regarding each element of the strategy. Now a new PDD on combating terrorism is being drafted that could further refine and advance the policy. This draft directive, which is classified, reflects a recognition of the need for centralized interagency leadership in combating terrorism. Among other things, the draft policy tries to resolve jurisdictional issues between agencies and places new emphasis on managing the consequences of a terrorist incident and on the roles and responsibilities of the various agencies involved. Based on the reports and work we have performed to date, we would like to make three observations. First, in certain critical areas, just as the Vice President’s Task Force on Terrorism noted in 1985, improvements are needed in interagency coordination and program focus. Since that time—and even since PDD-39 was issued in June 1995—the number of players involved in combating terrorism has increased substantially. In our September 1997 report, we noted that more than 40 federal agencies, bureaus, and offices were involved in combating terrorism. 
To illustrate the expansion of players since PDD-39, for example, Department of Agriculture representatives now attend counterterrorism crisis response exercise planning functions. Also, to implement the Nunn-Lugar-Domenici Domestic Preparedness Program, the U.S. Army’s Director of Military Support has created a new office for the new mission to train U.S. cities’ emergency response personnel to deal with terrorist incidents using chemical and biological WMD and plans to create another office to integrate another new player—the National Guard and Reserve—into the terrorism consequence management area. The National Guard and Reserve initially plan to establish 10 Rapid Assessment and Initial Detection (RAID) teams throughout the country. The U.S. Marine Corps has established the Chemical Biological Incident Response Force. Further, the Department of Energy has redesigned its long-standing Nuclear Emergency Search Team into various Joint Technical Operations Teams and other teams. At least one Department of Energy laboratory is offering consequence management services for chemical and biological as well as nuclear incidents. And the Public Health Service is in the process of establishing 25 Metropolitan Medical Strike Teams throughout the country in addition to 3 deployable “national asset” National Medical Response Teams and existing Disaster Medical Assistance Teams. There are many more examples of new players in the terrorism arena. Effectively coordinating all these various agencies’, teams’, and offices’ requirements, programs, activities, and funding requests is clearly important. We are currently examining interagency coordination issues as part of our work for this Subcommittee and Congressman Skelton in counterterrorism operations, exercises, and special events and in the Nunn-Lugar-Domenici Domestic Preparedness Program. 
In doing our work, we have observed some indications of potential overlap in federal capabilities to deal with WMD, and we plan to further assess this issue for you and Congressman Skelton. In a second, related observation, more money is being spent to combat terrorism without any assurance of whether it is focused on the right programs or in the right amounts. Our December 1997 report showed that seven key federal agencies spent more than an estimated $6.5 billion in fiscal year 1997 on federal efforts to combat terrorism, excluding classified programs and activities. Some key agencies’ spending on terrorism-related programs has increased dramatically. For example, between fiscal year 1995 and 1997, FBI terrorism-related funding and staff-level authorizations tripled, and Federal Aviation Administration spending to combat terrorism tripled. We also reported that key interagency management functions were not clearly required or performed. For example, neither the National Security Council nor the Office of Management and Budget (OMB) was required to regularly collect, aggregate, and review funding and spending data relative to combating terrorism on a crosscutting, governmentwide basis. Further, neither agency had established funding priorities for terrorism-related programs within or across agencies’ individual budgets or ensured that individual agencies’ stated requirements had been validated against threat and risk criteria before budget requests were submitted to the Congress. Because governmentwide priorities have not been established and funding requirements have not necessarily been validated based on an analytically sound assessment of the threat and risk of terrorist attack, there is no basis to have a reasonable assurance that funds are being spent on the right programs in the right amounts and that unnecessary program and funding duplication, overlap, misallocation, fragmentation, and gaps have not occurred. 
In part, as a result of our work, the National Defense Authorization Act for Fiscal Year 1998 (P.L. 105-85, Nov. 18, 1997) requires OMB to establish a reporting system for executive agencies on the budgeting and expenditure of funds for programs and activities to combat terrorism. OMB is also to collect the information and the President is to report the results to the Congress annually, including information on the programs and activities, priorities, and duplication of efforts in implementing the programs. OMB recently issued its first report to the Congress on enacted and requested terrorism-related funding for fiscal years 1998 and 1999, respectively. OMB reported that more than 17 agencies’ classified and unclassified programs were authorized $6.5 billion for fiscal year 1998, and $6.7 billion was requested for fiscal year 1999. OMB’s figures are lower than ours were for fiscal year 1997, but different definitions and interpretations of how to attribute terrorism-related spending in broader accounts could cause a difference of billions of dollars. What is important about the OMB effort is that it is a first step in the right direction toward improved management and coordination of this growing program area. But this crosscutting, or functional, view of U.S. investments in combating terrorism, by itself, does not tell the Congress or the executive branch whether or not the federal government is spending the right amounts in the right areas. Many challenges are ahead as we continue to see the need for (1) governmentwide priorities to be set; (2) agencies’ programs, activities, and requirements to be analyzed in relation to those priorities; and (3) resources to be allocated based on the established priorities and assessments of the threat and risk of terrorist attack. 
As an example of my last point, if an agency spends $20 million on a security system for terrorism purposes at a federal building without conducting a risk assessment, and the risk of an attack is extremely low, the agency may have misspent the $20 million, which could have been allocated to higher risk items. Additionally, we see opportunities in the future to apply Government Performance and Results Act of 1993 principles to the crosscutting programs and activities intended to combat terrorism. The act requires each executive branch agency to define its mission and desired outcomes, measure performance, and use performance information to ensure that programs meet intended goals. The act’s emphasis on results implies that federal programs contributing to the same or similar outcomes should be closely coordinated to ensure that goals are consistent and program efforts are mutually reinforcing. In response to a separate requirement from the fiscal year 1998 Appropriations conference report (House Report 105-405), the Department of Justice is drafting a 5-year interdepartmental counterterrorism and technology crime plan. The plan, due to be completed by December 31, 1998, is to identify critical technologies for targeted research and development efforts and outline strategies for a number of terrorism-related issues. In developing the plan, Justice is to consult with the Departments of Defense, State, and the Treasury; the FBI; the Central Intelligence Agency; and academic, private sector, and state and local law enforcement experts. While Justice’s efforts to develop an interagency counterterrorism and technology crime plan are commendable, this plan does not appear to have been integrated into the agencywide Government Performance and Results Act planning system. 
Justice’s 1999 annual performance plan contains a section on reducing espionage and terrorism, but it does not mention the 5-year plan or how Justice plans to coordinate its counterterrorism activities with other agencies and assess inputs, outputs, and outcomes. Justice has recognized that it needs to continue to focus on developing and improving crosscutting goals and indicators. Our third observation is that there are different sets of views and an apparent lack of consensus on the threat of terrorism—particularly WMD terrorism. In our opinion, some fundamental questions should be answered before the federal government builds and expands programs, plans, and strategies to deal with the threat of WMD terrorism: How easy or difficult is it for terrorists (rather than state actors) to successfully use chemical or biological WMDs in an attack causing mass casualties? And if it is easy to produce and disperse chemical and biological agents, why have there been no WMD terrorist attacks before or since the Tokyo subway incident? What chemical and biological agents does the government really need to be concerned about? We have not yet seen a thorough assessment or analysis of these questions. It seems to us that, without such an assessment or analysis and consensus in the policy-making community, it would be very difficult—maybe impossible—to properly shape programs and focus resources. Statements in testimony before the Congress and in the open press by intelligence and scientific community officials on the issue of making and delivering a terrorist WMD sometimes contrast sharply. On the one hand, some statements suggest that developing a WMD can be relatively easy. 
For example, in 1996, the Central Intelligence Agency Director testified that chemical and biological weapons can be produced with relative ease in simple laboratories, and in 1997, the Central Intelligence Agency Director said that “delivery and dispersal techniques also are effective and relatively easy to develop.” One article by former senior intelligence and defense officials noted that chemical and biological agents can be produced by graduate students or laboratory technicians and that general recipes are readily available on the internet. On the other hand, some statements suggest that there are considerable difficulties associated with successfully developing and delivering a WMD. For example, the Deputy Commander of the Army’s Medical Research and Materiel Command testified in 1998 about the difficulties of using WMDs, noting that “an effective, mass-casualty producing attack on our citizens would require either a fairly large, very technically competent, well-funded terrorist program or state sponsorship.” Moreover, in 1996, the Director of the Defense Intelligence Agency testified that the agency had no conclusive information that any of the terrorist organizations it monitors were developing chemical, biological, or radiological weapons and that there was no conclusive information that any state sponsor had the intention to provide these weapons to terrorists. In 1997, the Central Intelligence Agency Director testified that while advanced and exotic weapons are increasingly available, their employment is likely to remain minimal, as terrorist groups concentrate on peripheral technologies such as sophisticated conventional weapons. Mr. Chairman, that concludes our prepared statement. We would be happy to answer any questions at this time. 
GAO discussed its work and observations on federal efforts to combat terrorism, focusing on the: (1) foreign-origin and domestic terrorism threat in the United States; and (2) origins and principles of the U.S. policy and strategy to combat terrorism. GAO noted that: (1) conventional explosives and firearms continue to be the weapons of choice for terrorists; (2) terrorists are less likely to use chemical and biological weapons than conventional explosives, although the likelihood that they may use chemical and biological materials may increase over the next decade, according to intelligence agencies; (3) more than a decade ago, the Vice President's Task Force on Terrorism highlighted the need for improved, centralized interagency coordination; (4) GAO's work suggests that the government should continue to strive for improved interagency coordination today; (5) the need for effective interagency coordination--both at the federal level and among the federal, state, and local levels--is paramount; (6) the challenges of efficient and effective management and focus for program investments are growing as the terrorism issue draws more attention from Congress and as there are more players and more programs and activities to integrate and coordinate; (7) the United States is spending billions of dollars annually to combat terrorism without assurance that federal funds are focused on the right programs or in the right amounts; (8) as GAO has emphasized in two reports, a critical piece of the equation in decisions about establishing and expanding programs to combat terrorism is an analytically sound threat and risk assessment using valid inputs from the intelligence community and other agencies; (9) threat and risk assessments could help the government make decisions about: (a) how to target investments in combating terrorism and set priorities on the basis of risk; (b) unnecessary program duplication, overlap, and gaps; and (c) correctly sizing individual agencies' levels 
of effort; and (10) finally, there are different sets of views and an apparent lack of consensus on the threat of terrorism--particularly weapons of mass destruction terrorism.
Publicly traded companies are generally required by state law to hold annual meetings to conduct business that requires shareholder approval. U.S. public companies hold their annual meetings to consider key management and shareholder proposals that may have an effect on a company’s operations and value, such as executive compensation and director elections, or other more routine issues that may not affect value, such as changing a corporate name or approving an auditor. They also occasionally hold special meetings during the year to consider key issues such as proposed mergers and acquisitions. Shareholders are provided advance notice of annual and special shareholder meetings through a written proxy statement, which typically includes a proxy ballot (also called a proxy card) that allows shareholders to appoint a proxy to vote on the shareholder’s behalf if the shareholder decides not to attend the meeting. Proxy voting can be conducted online, by mail, or by telephone. Shareholders may instruct the proxy how to vote the shares or grant the proxy discretion to make the voting decision. Because of their large stockholdings, institutional investors (such as investment advisers, insurance companies, mutual funds, and pension plans) cast the majority of proxy votes. In general, proxy voting in shareholder meetings involves several key participants such as shareholders (including institutional investors), corporate issuers, proxy agents, and proxy advisory firms (see table 1). Institutional investors frequently hire proxy advisory firms to provide analysis and proxy voting recommendations and facilitate voting, record keeping, reporting, and disclosure requirements. For instance, the mechanics of tracking proxy cut-off times, managing and analyzing proxy materials, and casting votes can require significant resources. Many institutional investors use a proxy advisory firm to help perform some or all of these services. 
While proxy advisory firms perform services year-round, most of the services center on the proxy season. Some publicly traded companies also may use a proxy solicitor to identify, locate, and communicate with shareholders to secure votes on certain issues. Currently, the proxy advisory industry in the United States consists of five firms: Institutional Shareholder Services (ISS), Glass Lewis & Co. (Glass Lewis), Egan-Jones Proxy Services (Egan-Jones), Marco Consulting Group (Marco Consulting), and ProxyVote Plus. ISS, founded in 1985, provides research and analysis of proxy issues, custom policy implementation, vote recommendations, vote execution, governance data, and related products and services. ISS also provides advisory/consulting services, analytical tools, and other products and services to corporate issuers through ISS Corporate Solutions, Inc. (a wholly owned subsidiary). ISS is owned by Vestar Capital Partners, a private equity firm, and company management. As of September 2016, ISS had more than 900 employees in 18 offices in 12 countries, and covered approximately 39,000 meetings in 115 countries. ISS had about 1,600 institutional investor clients and executed more than 8.5 million ballots annually on behalf of those clients. Glass Lewis, established in 2003, provides proxy research and analysis, custom policy implementation, vote recommendation, vote execution, and reporting and regulatory disclosure services to institutional investors. Glass Lewis is an independent portfolio company of the Ontario Teachers’ Pension Plan Board and Alberta Investment Management Corporation. As of September 2016, Glass Lewis had more than 350 employees and offices in San Francisco, New York, Ireland, Australia, and Germany that provide services to more than 1,200 institutional investors that collectively manage more than $20 trillion. Egan-Jones Proxy Services was established in 2002 as a division of Egan-Jones Ratings Company. 
Egan-Jones provides proxy services, such as notification of meetings, research and recommendations on selected voting issues, voting guidelines, execution of votes, and vote disclosure. As of September 2016, Egan-Jones Ratings Company had approximately 450 clients of all types firm-wide, including funds, institutions, corporate issuers, and public entities. Of these, Egan-Jones’ proxy research or voting clients mostly consisted of mid- to large-sized mutual funds. Egan-Jones covers approximately 40,000 companies. Many of its largest institutional clients use Egan-Jones research to augment their own research. Egan-Jones is based in Haverford, Pennsylvania. Marco Consulting Group, an Illinois-based firm, was established in 1988 to provide investment analysis and advice, and proxy voting services to a large number of Taft-Hartley and public benefit plans. As of September 2016, Marco Consulting served 300 clients with assets of $145 billion. Marco Consulting uses ISS as the provider for its proxy voting platform and reporting. Marco Consulting also subscribes to research services from ISS. It has offices in Chicago, Boston, and Denver. ProxyVote Plus, also based in Illinois, is an employee-owned firm established in 2002 to provide proxy voting services to Taft-Hartley fund clients. ProxyVote Plus conducts internal research and analysis of voting issues and executes votes based on its guidelines. ProxyVote Plus reviews and analyzes proxy statements and other corporate filings, and reports annually to its clients on proxy votes cast on their behalf. As of September 2016, ProxyVote Plus had more than 200 clients throughout the United States and Canada. Of the five firms, ISS and Glass Lewis are the largest and most often used by institutional investors. To compete, proxy advisory firms must offer comprehensive coverage of corporate proxies and use sophisticated systems to provide research and proxy vote execution services. 
As we reported in 2007, ISS’s long-standing history—since 1985—of working with institutional investors, as well as its reputation for providing comprehensive proxy voting research and recommendations, makes it the dominant proxy advisory firm. We found that ISS’s dominance makes it difficult for competitors to attract clients and compete in the market. We also reported that institutional investors may be reluctant to subscribe to a potentially inexperienced or less-established proxy advisory firm that may not provide thorough coverage of all of their institutional holdings. According to market participants and other stakeholders with whom we spoke, these conditions continue to exist, and, among other things, the initial investment required to develop and implement the necessary technology is a significant expense for firms. Under the Securities Exchange Act of 1934 (Exchange Act), SEC regulates the proxy solicitation process for publicly traded equity securities. SEC also regulates the activities of proxy advisory firms that are registered with SEC as investment advisers under the Investment Advisers Act of 1940 (Advisers Act). Under SEC rules, when soliciting proxies, certain information must be disclosed in writing to shareholders in a document referred to as a proxy statement. These proxy statements must include important facts about the issues on which shareholders are asked to vote. A party soliciting proxies must file such proxy statement with SEC unless it is exempt under the proxy rules. Under the Advisers Act and related SEC rules, registered investment advisers are subject to a number of regulatory requirements that provide important protections to the firm’s clients. For example, an investment adviser must disclose information about its business practices and potential conflicts of interest to clients and prospective clients. 
Additionally, registered investment advisers are required to adopt and implement written policies and procedures reasonably designed to prevent violation of the Advisers Act. Finally, regardless of whether a proxy advisory firm is registered as an investment adviser, all firms that meet the statutory definition of investment adviser, and are unable to rely on an exclusion from the definition, are subject to the antifraud provisions of the Advisers Act. This act prohibits investment advisers from engaging in any act, practice, or course of business that is fraudulent, deceptive, or manipulative. Table 2 describes whether and how proxy advisory firms are registered with SEC. ISS, Marco Consulting, and ProxyVote Plus are registered as investment advisers and, according to their SEC registration filings, identified their work as pension consultants as the basis for registering as advisers. Egan-Jones Ratings Company (Egan-Jones’s parent company) is registered as a Nationally Recognized Statistical Rating Organization and must meet certain regulatory requirements related to its credit ratings activity, but these requirements do not apply to its proxy advisory services. Glass Lewis is not registered with SEC. SEC also has issued several rules and policy documents that provide guidance on proxy voting by investment advisers and investment companies. For example, SEC issued a final rule in February 2003 that addresses an investment adviser’s fiduciary responsibilities to clients when the adviser has the authority to vote their proxies, including adopting policies and procedures to ensure proxies are voted in the best interest of clients. The rule also requires that an adviser must (i) disclose to clients how they can obtain information from the adviser on how their securities were voted and (ii) describe the adviser’s proxy voting policies and procedures to clients, and upon request, provide clients with a copy of those policies and procedures. 
SEC issued another final rule in February 2003 that requires investment companies such as mutual funds to disclose how they vote proxies relating to portfolio securities they hold, and file with SEC and make available to shareholders information about specific proxy votes cast. In May 2004 and September 2004, SEC staff issued guidance that, among other things, clarified how an investment adviser could resolve conflicts of interest in voting clients’ proxies and ensure that proxy advisory firms could adequately analyze proxy issues and make recommendations in the best interests of the adviser’s clients. We focus on SEC oversight since 2007 later in this report. SEC monitors compliance with the federal securities laws and regulations through risk-based examinations of registered investment advisers. Based on examination findings, SEC may send letters to investment advisers, including proxy advisory firms registered as investment advisers, requesting that they correct identified deficiencies. SEC may take enforcement actions for more serious violations. Proxy voting issues and proxy advisory firms may not be examined on a regularly scheduled basis because SEC uses a risk-based approach to identify examination priorities each year. Among other things, SEC may consider the risk of an entity based on prior examination findings; significant changes in a registrant’s business activities or disclosures regarding regulatory or other action brought against them; and tips, complaints, or other referrals. SEC uses this approach to help allocate its limited resources to focus on those registrants that examination staff believe place the investing public or market integrity most at risk. International regulatory organizations, including the European Securities and Markets Authority and Canadian Securities Administrators, have taken actions to promote increased engagement among market participants and transparency into proxy advisory firms’ processes. 
In recent years, these organizations conducted reviews of the proxy advisory firm industry and concluded that regulatory intervention was not needed. Specifically, the European Securities and Markets Authority concluded that regulation was not justified because there was no evidence of a market failure in relation to how proxy advisory firms interact with institutional investors and corporate issuers. However, both entities proposed guidance and recommendations for the firms to enhance transparency, among other issues. In a 2013 report, European Securities and Markets Authority officials recommended the creation of an industry code of conduct. Subsequently, a group of proxy advisory firms, including ISS and Glass Lewis, published a set of best practice principles that included disclosing their (1) research methodology and, if applicable, general voting policies; and (2) policies for communication with corporate issuers, shareholder proponents, other stakeholders, media, and the public. In December 2015, European Securities and Markets Authority released a follow-up to its 2013 report responding to the establishment of best practice principles. This report concluded that the best practice principles had a positive impact on the market, mainly in terms of enhanced clarity for different stakeholders on how proxy advisory firms operate. The report also stated that while the majority of the industry is signatory to the principles, including ISS and Glass Lewis, broader sign-up to the principles would contribute to establishing the principles as the prevailing standard in the industry. ISS and Glass Lewis have posted statements of compliance on their websites that describe how they apply the principles in their work. In April 2015, the Canadian Securities Administrators adopted the National Policy 25-201 Guidance for Proxy Advisory Firms. 
The policy is intended to promote transparency in the process leading to vote recommendations and the development of proxy voting guidelines, and foster understanding among market participants about the activities of proxy advisory firms. The guidance is not intended to be prescriptive but rather encourage proxy advisory firms to consider the guidance in developing and implementing practices that are tailored to their structure and activities. The market for proxy advisory firms has grown, with higher demand stemming from factors including the rise of institutional investing and the effect of some new policies and requirements. Recent studies and the market participants and other stakeholders with whom we spoke agreed that proxy advisory firms influenced shareholder voting and corporate governance practices. But market participants and stakeholders had mixed views about the extent of this influence and some said that influence can vary based on the size of the institutional investor or the voting policies used. Studies we reviewed also did not agree on the extent of the influence or whether it was helpful or harmful. The market for proxy advisory firms has grown over the last 30 years as institutional investors have relied more on firms to provide research, analysis, and vote recommendations. According to academic and industry studies, the increased demand for proxy advisory services stems from several factors, including the growth in the proportion of shares owned by institutional investors, the number and complexity of voting issues, and shareholder activism and the effect of some new policies and requirements. Some of these issues are consistent with themes we identified in 2007. Institutional Ownership. The increased ownership share that institutional investors hold and the high volume of proxy votes they are responsible for casting has increased demand for proxy advisory firms. 
According to a recent Broadridge and PwC report, in 2016 institutional investors owned 70 percent of shares outstanding in U.S. public companies compared with retail investors (or individual investors) who owned 30 percent of shares outstanding. Institutional investors also have voted at much higher rates; for example, as of June 2016, 91 percent of institutional investors voted their shares compared with 28 percent of retail investors. Because many institutional investors use the services of proxy advisory firms, increased institutional ownership has resulted in a greater demand for these firms. Number and Complexity of Voting Issues. Some institutional investors may lack the resources to consider the many complex proxy issues that come before them for a vote and instead may opt to use the services of a proxy advisory firm, which adds to the demand for the firms. For example, the Dodd-Frank Wall Street Reform and Consumer Protection Act requires a shareholder advisory vote on executive compensation (“say-on-pay”). The act allows shareholders to vote their opinion on executive compensation plans every 1–3 years, thereby increasing the volume of shareholder votes on this issue. Institutional investors also have become more involved in a range of corporate governance and other issues such as board composition and diversity, executive severance agreements (including “golden parachutes”), strategy and growth, and sustainability and climate change that can require extensive analysis. Thus, the growing number and complexity of proxy voting issues has also contributed to the increased demand for proxy advisory firms. Shareholder Activism and Regulation. Proxy advisory firms also have become more prominent because of continued shareholder activism and the impacts of some regulations. 
For example, many institutional investors seek the services of proxy advisory firms to assist in their assessments of corporate governance practices and carry out the mechanics of proxy voting. As discussed earlier, in 2003, SEC adopted a final rule that required registered investment advisers, among other things, to adopt policies and procedures reasonably designed to ensure that they vote proxies in the best interests of clients. According to some industry stakeholders, based on certain interpretations of the rule and subsequent SEC staff guidance, some investment advisers determined that they could discharge their duty to vote their proxies and demonstrate that their vote was not a product of a conflict of interest if they voted based on the recommendations of a proxy advisory firm. As a result, institutional investors tended to outsource their research and voting decisions, which helped to increase the demand for proxy advisory services. However, in 2014, SEC staff issued a Staff Legal Bulletin that, among other things, included guidance on investment advisers’ responsibilities in voting client proxies and retaining proxy advisory firms, including notice that investment advisers are not required to vote every proxy, depending on the proxy voting arrangements between advisers and their clients. We discuss other aspects of this guidance later in the report. Recent studies, market participants, and other stakeholders agree that proxy advisory firms have influence on shareholder voting and corporate governance practices, but had mixed views about the extent of their influence. Our review of four recent studies that analyzed the extent to which proxy advisory firms’ recommendations influenced voting decisions and shifted some fraction of the votes shows that proxy advisory firms have influence on shareholder voting. 
For instance, using a sample of director elections, a 2009 study found that ISS recommendations have an impact on shareholder votes, and directors receiving a negative ISS recommendation receive 19 percent fewer votes. However, a 2010 study concluded that while both ISS and Glass Lewis appear to have a meaningful impact on shareholder voting, media reports often overstate the extent of ISS’s influence on voting. The study found that the impact of an ISS recommendation is reduced once director- and company-specific factors that are important to investors—failure to attend board meetings, financial performance, corporate misconduct, and a lack of responsiveness to shareholders—are taken into consideration. Unlike higher estimates, the analysis showed that an ISS recommendation shifted 6–10 percent of shareholder votes. Additionally, a 2013 study concluded that proxy advisory firm recommendations are the key determinant of voting outcomes in the context of mandatory “say-on-pay” votes. The study found that negative ISS and Glass Lewis recommendations are associated with 25 percent and 13 percent more votes against the compensation plan, respectively. The study also found that the relationship between proxy advisory firm recommendations and shareholder votes varies based on the rationale behind the recommendation and the institutional investor’s ownership structure. For example, the relationship between negative recommendations and shareholder votes is weaker for shareholders with larger holdings and, thus, presumably greater incentives to perform their own internal research. The study concluded that this suggests that at least some shareholders are not directly influenced by the recommendations and take into account the underlying basis for the recommendation and other relevant factors. A 2015 study also found that proxy advisory firms have an effect on voting outcomes related to say-on-pay proposals. 
Specifically, the study concluded that negative ISS recommendations reduce the percentage of votes in favor of say-on-pay proposals by about 25 percentage points. Similarly, our interviews with market participants and other stakeholders showed mixed views on the extent of influence proxy advisory firms have on voting. Most of the 13 institutional investors, 11 corporate issuers, 4 proxy solicitors, and 8 industry association representatives with whom we spoke stated that proxy advisory firms (more specifically, ISS and Glass Lewis—the two firms with the largest number of institutional investor clients) have influence on shareholder voting. However, some investors, solicitors, and investor association representatives said that proxy advisory firms had little influence and that such influence varied based on the size of the institutional investor or whether the institutional investor uses its own or the proxy advisory firm’s research and voting policies. Specifically, they told us that the level of influence that ISS and Glass Lewis have on voting and corporate governance is minimal because large institutional investors cast the majority of proxy votes and do not exclusively rely on the research and vote recommendations offered by proxy advisory firms to help decide how to vote proxies. We previously found in 2007 that large institutional investors, which cast the great majority of proxy votes made by all institutional investors, placed less emphasis on proxy advisory firms’ research and recommendations than smaller institutional investors, and tended to have their own in-house research staffs to conduct research that drove their proxy voting decisions. 
Some institutional investors and investor association representatives with whom we spoke also said that the firms’ influence has significantly declined in recent years, as some institutional investors—in particular, asset managers (such as investment advisers to mutual funds) and pension funds—have taken a greater interest in proxy voting and developed in-house expertise to address proxy vote-related issues. The institutional investors and investor association representatives also pointed to the growing trend among institutional investors of using their own voting policies as a basis for voting decisions instead of relying on the proxy advisory firms’ policies and vote recommendations. For example, officials from the four large institutional investors told us that they conduct their own research and analyses to make voting decisions and use the research of proxy advisory firms only to supplement their internal research and analyses. Officials from one proxy advisory firm also told us that while firms provide vote recommendations, it is the institutional investor that makes the actual vote decision, which is most often based on the institutional investor’s own voting policies. Moreover, they noted that as clients of the proxy advisory firm, institutional investors always retain the ability to change the vote that the proxy advisory firm casts on their behalf. According to large institutional investors and a few investor association representatives that we spoke to, some smaller institutional investors who do not have their own in-house research staffs to analyze the many proxy voting issues and companies in their portfolio will obtain such services from proxy advisory firms and rely more on the research and recommendations proposed by the firms. In these cases, the resulting vote recommendation could have more of an influence on the voting, because some of these smaller institutional investors have a tendency to adopt the firms’ recommendations and vote accordingly. 
One small institutional investor told us that it relies on the research and the vote recommendations of ISS and will consider the firm’s recommendations on certain actions before making voting decisions. Other studies that we reviewed showed that proxy advisory firms also have an influence on corporate governance practices. For example, a 2015 study found that to avoid a negative vote recommendation, companies changed their compensation programs before the formal shareholder vote in a manner consistent with the features known to be favored by proxy advisory firms. A 2013 study also found that more than half of companies involved in the study responded to a shareholder vote triggered by a negative recommendation from the proxy advisory firms by making changes to their compensation plan. In addition, a 2012 study found that more than two-thirds of U.S. companies say their executive compensation program is influenced by the policies and voting recommendations of the two largest proxy advisory firms—ISS and Glass Lewis. In particular, a majority of companies say they are likely to make changes to their compensation program to gain a favorable “say-on-pay” recommendation from these firms. Two corporate issuers also told us that proxy advisory firms have some influence on the development of their governance practices and they would generally accept the firms’ advice on corporate governance requirements. Officials from one proxy advisory firm with whom we spoke stated that they agree that proxy advisory firms have influence on corporate governance practices. The proxy advisory firm further indicated that its policy frameworks reflect its institutional investor clients’ preferences for better disclosure, strong shareholders’ rights, and adoption of best practices governance standards. 
They noted that such influence is good and ultimately they want to have a positive influence on their clients because they view that as part of their responsibility—to promote good governance. Similar to the views expressed by the officials of the proxy advisory firm, investor association representatives also suggested that consideration be given to the context in which influence is often viewed. They noted that most often, influence is viewed negatively. However, the representatives said that proxy advisory firms’ influence can be positive. That is, if the recommendations proxy advisory firms make help to promote good governance, then the firms’ influence on voting is beneficial to shareholders. Additionally, a 2009 study found that proxy advisory firm recommendations—at least for uncontested director elections—appeared to be based on factors that should matter to institutional investors, such as good governance, director attention, and performance. Proxy advisory firms develop their general voting policies and update them through an iterative process involving analysis of institutional investor and corporate issuer input, industry practices, and discussions with other stakeholders. These policies are similar to or in some cases stricter than other standards such as those from the New York Stock Exchange (NYSE) and the NASDAQ Stock Market (NASDAQ). Proxy advisory firms have taken steps to communicate with corporate issuers when developing voting recommendations and have allowed some to review proxy reports for accuracy before they are final. While some corporate issuers said they still do not understand the bases for some vote recommendations and would like to have a dialogue about the proxy reports, proxy advisory firms said that to maintain objectivity and satisfy research reporting timelines for clients they have to limit the breadth of such discussions. 
Proxy advisory firms’ voting policies outline their approaches for evaluating positions on, and rationales for, recommendations on corporate governance issues. For example, ISS and Glass Lewis officials said they develop three types of policies: general, specialized, and client-customized. General policies reflect the firm’s own positions and rationales on various corporate governance issues and are generally used in developing their vote recommendations. The policies may take into account national and international corporate governance codes and practices, as well as the views of institutional investors, corporate issuers, and other stakeholders. Specialized policies reflect the institutional investor clients’ perspective on specific governance issues such as sustainability, socially responsible investing, public funds, labor unions, or mission and faith-based investing. Client-customized policies are based on institutional investor clients’ unique corporate governance guidelines, and reflect each investor’s specific philosophies and approaches. For these clients, the proxy advisory firm prepares voting recommendations based on these policies. Because specialized and client-customized policies reflect the particular perspectives or needs of different institutional investors, voting recommendations developed under these policies in some cases may differ from recommendations formed under general policies. The following discussion focuses on general policies, which represent the general guidelines the firms use for their analyses in developing vote recommendations. 
According to the two largest proxy advisory firms—ISS and Glass Lewis—they develop their general voting policies and update them through an iterative process, which recently has included increased engagement with institutional investors, corporate issuers, and other stakeholders. ISS and Glass Lewis have taken steps to obtain input from and communicate with market participants about voting policies. Some corporate issuers we interviewed said that both ISS and Glass Lewis recently have made more of an effort to engage market participants in the general policy development process, unlike in the past, when their outreach was less frequent or formal. When we spoke to both proxy advisory firms, they also said that they made their processes more transparent than they were in the past. For example, they have begun to conduct engagement meetings, hold roundtables, and post guidelines to their websites. Specifically, Glass Lewis officials said they have created a corporate issuer resource website that offers links to its guidance documents, forms to request engagement meetings, and responses to frequently asked questions. ISS officials said they invite institutional investors, corporate issuers’ management and board directors, and other industry stakeholders to participate in its annual proxy voting policy survey. According to ISS, the survey is designed to provide input on key issues that are factored into the development of ISS’s general policy guidelines, including proposed policy updates as well as new policies. See figure 3 for examples of the types of communication mechanisms used. A few corporate issuers told us that although input is obtained from both corporate issuers and institutional investors, it does not necessarily make its way into the final general policy guidelines. One corporate issuer we interviewed said there has been a noticeable increase in outreach (a lack of outreach was evident in the past). 
But the corporate issuer also said there is a difference between proxy advisory firms soliciting input and using input to modify policies. Another corporate issuer, who said it was not solicited for feedback, said it seemed like policies were sometimes developed in a vacuum. However, Glass Lewis officials said that they have responded to issuer feedback; for example, Glass Lewis changed its approach for selecting issuer peer groups used in its pay-for-performance analysis. Also, Glass Lewis officials said that they work with an independent advisory council that provides guidance in the development and updating of its voting policies. Further, some have raised concerns about ISS’s policy survey and published results. For example, one market participant we interviewed said that a relatively small number of institutional investors drive ISS’s policy formation process in part because a small number of ISS investor clients participated in the survey. In a February 2013 working paper, the authors also noted that ISS’s policy survey relied on a small number of participants and provided little detail about the composition of the respondent pool. ISS officials said there has been consistency in the relative mix of institutional investors and corporate issuers responding to the survey, with more corporate issuers than institutional investors answering the survey questions. Based on our review of selected general voting policies of proxy advisory firms and other market standards on corporate governance, the firms’ policies were similar to or in some cases stricter than the other standards and covered a broader range of issues. 
We reviewed selected policies from the five proxy advisory firms, NYSE, NASDAQ, and a large institutional investor, and looked specifically at the issues of director independence, overboarding (the number of public company boards on which a director can serve before being considered overextended), independent chairman/chief executive officer (CEO), and proxy access, as illustrated in the following examples: Board independence. Proxy advisory firms and the exchanges (NYSE and NASDAQ) require some level of independence on corporate boards. Specifically, both exchange listing requirements and firm voting policies call for a majority of independent board directors on corporate boards. However, these bodies vary on the "look-back" period required for directors to be deemed independent from the company. The five proxy advisory firms and the one institutional investor policy that we reviewed require a 5-year look-back period, while the exchanges require 3 years. One proxy advisory firm's rationale for this difference was that 5 years allows enough time for management and board members to resolve any conflicts of interest. This firm also notes that it does not automatically apply the 5-year threshold and will consider the type of relationship the nominee has with the company. Overboarding. Proxy advisory firms and some institutional investors have policies on overboarding, but the exchanges do not. In 2016, both ISS and Glass Lewis updated their director overboarding policies to reflect concerns about directors overcommitting themselves. Specifically, a few institutional investors expressed the position that if directors served on too many boards, they would not have sufficient time to focus on the issues related to any one company.
The institutional investor policy we reviewed—which had a lower threshold than that of the proxy advisory firms—explained that generally it is unlikely that a director will be able to commit sufficient focus to a particular company when he or she serves on a large number of boards. Both ISS's and Glass Lewis's policies outline a phased transition to a lower board membership threshold for directors. For example, ISS policy states that it will recommend that shareholders vote against directors who sit on more than six boards but that, beginning in 2017, it plans to make negative recommendations for directors sitting on more than five. Glass Lewis's current policy similarly cites six boards and, beginning in 2017, recommends voting against a director who serves on more than five; in the interim, Glass Lewis plans to note a concern for these directors in its reports, thus providing a transition period before putting the full policy into effect. Further, a couple of the firms have changed their policies on the number of boards on which a CEO should serve. For example, in 2016, Egan-Jones changed its overboarding policy, limiting the number of outside boards a CEO may serve on to one. Glass Lewis plans to make a similar adjustment in 2017: its policy states that during the 2016 proxy season it plans to note as a concern CEOs serving on more than one outside board, and beginning in 2017 it will base its recommendations on this lower threshold. ISS policy recommends a vote against CEOs who sit on more than two outside boards. Independent chairman/CEO. The issue of an independent chairman/CEO is another example of an issue area covered by the proxy advisory firms' and large institutional investor's policies, but not addressed by the exchange listing requirements. Specifically, all five proxy advisory firms have independent chairman/CEO policies.
One firm said the development of this policy was guided by feedback from institutional investor clients. Similar to the five proxy advisory firms, the large institutional investor policy we reviewed generally supports the separation of chairman and CEO when a company does not have a lead independent director. The institutional investor policy states that support for independent leadership is important given the roles that the chairman plays, such as contributing to oversight of CEO succession planning and serving as an advisor to the CEO. Proxy access. The issue of proxy access is another area not covered by the exchange listing standards, but addressed by the proxy advisory firm and institutional investor policies. Specifically, all five proxy advisory firms have proxy access policies. According to market participants, the rise of shareholder activism has brought increased attention to the issue of proxy access. One market participant we interviewed said that proxy advisory firm policies have become more complex and nuanced, and the firms have enhanced their policies on proxy access as the issue has received more attention. Similarly, the institutional investor policy we reviewed supports proxy access, stating that long-term shareholders should have the opportunity to nominate directors. Market participants with whom we spoke generally viewed proxy advisory firms' policies on corporate governance as stricter than other industry standards but reflective of institutional investors' interests. Specifically, for select corporate governance issues, proxy advisory firm policies may call for higher standards of compliance than other industry standards, such as exchange listing requirements. Some market participants said that these stricter standards reflect the higher standards for which some investors look and that, in their view, help promote better governance practices.
They stated that exchange listing standards tend to serve only as a baseline for publicly traded companies. A few institutional investors pointed out that their policies require even higher standards of compliance than the proxy advisory firms have developed. For example, representatives of one institutional investor told us that their company's overboarding policy is stricter than both ISS's and Glass Lewis's policies. The officials added that the issue of overboarding is a case in which institutional investors were ahead of the marketplace and proxy advisory firms were just now "catching up." Proxy advisory firms' approaches for developing vote recommendations can be case-by-case or rules-based. Policy application may depend on factors such as the type of vote cast or the voting instructions provided by institutional investor clients. A more rules-based approach might be applied to some board of directors issues, such as board independence, which uses a time-period threshold to ensure that directors with a previous work history with a company have been separated long enough to be independent. However, such issues may still be subject to a case-by-case review. For example, when applying the look-back period for director independence, Glass Lewis's proxy policy states that it will not automatically recommend voting against former executives of a company who have consulting agreements with the company during the look-back period. In contrast, vote recommendations on mergers and acquisitions would always be applied on a case-by-case basis that considers the facts and circumstances of the companies involved. The proxy advisory firms state in their respective general policies that they consider the benefit that implementation of a proposal would have on shareholders of the company being evaluated.
For example, in proxy reports on a merger that we reviewed, both ISS and Glass Lewis evaluated the potential benefits of the merger to investors on both sides of the proposed transaction. Both ISS and Glass Lewis found that investors of one company would benefit and thus recommended in favor of the merger for investors of that company, but recommended against the merger for investors of the other company because it would not be to their benefit. In conducting evaluations such as these, ISS and Glass Lewis officials, as well as some corporate issuers we interviewed, also said that the firms consider new and company-specific information. For example, in 2015 reports on this merger, ISS made adjustments to its original reports to account for company-specific information that clarified two data points, adjusting the estimated fair value of one of the companies. The updates were included in the reports, and clients were notified through an alert or note—a process the proxy advisory firms use when they have updated or revised information in their reports. Proxy advisory firm officials also pointed out that while analysts have the discretion to engage with clients as well as with some corporate issuers during each proxy season, the firms only consider new or company-specific information that is publicly available, to help ensure their reports and recommendations are based on the same information available to clients and the broader investing public. Both Glass Lewis and ISS officials acknowledged that corporate issuers have expressed an interest in reviewing proxy reports for accuracy in advance of proxy meetings. In addition, international regulatory organizations, such as the European Securities and Markets Authority and the Canadian Securities Administrators, have promoted increased engagement and transparency between corporate issuers and proxy advisory firms.
Therefore, the proxy advisory firms have developed specific procedures that corporate issuers or their representatives may use to review or report errors related to the proxy reports prepared by the firms (see fig. 4). Specifically, Glass Lewis developed a new process in 2015 by which companies can receive a draft data-only version of a report for review before the firm completes its analysis. These data-only versions do not contain the firm’s recommendations. Companies interested in receiving a report must submit a request. Corporate issuers are given a 48-hour window to review the draft and provide corrections. ISS offers a similar opportunity to Standard and Poor’s 500 companies and to companies in comparable large capitalization indices in some countries outside the United States. However, unlike the data-only versions of the reports provided by Glass Lewis, these reports contain ISS’s analyses and vote recommendations. Other corporate issuers have the option of requesting a copy of the published report in advance of the company’s annual meeting. Standard and Poor’s 500 companies have the opportunity to review ISS’s draft reports and provide feedback within 1-2 business days. One stakeholder we interviewed said that this time window did not always allow corporate issuers enough time to review. However, Glass Lewis and ISS officials indicated that these time windows allow them to meet their report publishing deadlines. In addition to the draft review process, ISS officials said ISS has a Feedback Review Board that provides a mechanism for stakeholders to communicate with ISS throughout the year regarding the accuracy of data, research, and general fairness of policies. ISS and Glass Lewis documents state that the opportunity to review advance copies of each company’s specific report is only an opportunity to check data for factual errors and not a mechanism for conveying disagreement with ISS’s or Glass Lewis’s methodologies or analyses. 
Some corporate issuers stated that there are differences of opinion, conflicting points of view, and misinterpretations of the data. However, ISS documentation indicated that although the review process allows for a verification of data, ISS has to limit the breadth of the review because a broader review would add operational complexity and significant time to the research production process. Glass Lewis policy states that during proxy season the firm has to limit discussions of its policies or recommendations to help it remain objective. However, Glass Lewis officials said that the firm engages with issuers extensively outside of proxy season on issuer-specific matters, including specific recommendations, as well as on general policies. Both corporate issuers and institutional investors we interviewed said that the data errors they found in the proxy reports were mostly minor, but as we discuss below, some errors can lead to negative recommendations. Some issuers raised other concerns about how policies were applied during recommendation development, including that the approaches used did not always account for differences across corporate issuers. For example, ISS's and Glass Lewis's general compensation policies lay out a set of criteria they use in evaluating an executive compensation package. Corporate issuers we interviewed expressed concern that the firms applied these policies in a one-size-fits-all or rules-based manner. A few corporate issuers said they had to initiate outreach to the firms to explain the corporate issuers' unique circumstances before the recommendations were reversed. Corporate issuers with whom we spoke pointed to another example of one-size-fits-all application involving overboarding policies. As mentioned earlier, ISS and Glass Lewis general policies provide a threshold for the number of public company boards on which a director can serve before being considered overextended.
One small corporate issuer we interviewed said it was unsuccessful in trying to make a case for keeping a highly qualified director who contributed needed expertise but was deemed overboarded. Given the company's small size, representatives found it very important to have this individual on its board. Although a few corporate issuers with whom we spoke were frustrated that consideration was not given to special circumstances or to the effect the decision would have on the company, one proxy advisory firm's policy refers to institutional investor concerns about directors being overextended. As previously discussed, a 2013 study found limited evidence of a one-size-fits-all approach in the context of mandatory say-on-pay. The study found that proxy advisory firms take into consideration mitigating company-specific circumstances, the severity of the issue, the firm's rationale, and the overall quality of the compensation plan when applying policies during recommendation development. Furthermore, some corporate issuers and stakeholders would like further insight into how the proxy advisory firms arrive at their vote recommendations. A few stakeholders also told us they hire consultants with expertise on executive compensation and have developed models similar to those used by proxy advisory firms to help them better understand how the firms produce their results and recommendations. To further increase transparency into the proxy advisory firm vote recommendation process, stakeholders have proposed making the reports available to the public at some time after the annual meeting. Market participants and other stakeholders told us there are advantages and disadvantages to making proxy advisory firm reports public at an appropriate time.
For example, some market participants said a possible advantage to making the reports public is that it would allow for greater scrutiny and the ability to further evaluate the validity of proxy firm recommendations and whether the recommendations have a positive effect on shareholder value. But several stakeholders agreed that making them public would negatively affect proxy advisory firms' ability to be profitable. Proxy advisory firms did not support the idea of making their reports publicly available at no cost after the relevant shareholder meeting because it would undermine their business model. They noted that their clients use these reports throughout the year and not just as a basis for voting proxies. Since 2007, SEC oversight of proxy advisory firms and the services they provide has included information gathering on issues relating to the firms, issuance of guidance, and examinations of firms registered as investment advisers and of registered investment companies or investment advisers using proxy advisory services (see fig. 5). Concept release. Since our last report in 2007, SEC sought public comment on concerns that had been raised by stakeholders in the proxy advisory industry in its 2010 Concept Release on the U.S. Proxy System. According to SEC staff, the agency occasionally publishes concept releases to raise awareness and collect the public's views on certain securities issues so the agency can better evaluate the need for future rulemaking. The 2010 concept release discusses, among other things, concerns that had been raised by corporate issuers and industry participants about the level of accuracy and transparency in how proxy advisory firms formulate voting recommendations and about potential conflicts of interest. Concerns related to accuracy and transparency include that firms' voting recommendations may be based on inaccurate or incomplete data.
Additionally, the 2010 concept release reiterated what we reported in 2007: that a conflict of interest for a proxy advisory firm could arise if it provided both proxy voting recommendations to institutional investors and consulting services to companies on the same matter. And as we reported in 2007, the most commonly cited potential conflict of interest involved ISS; specifically, that ISS advises institutional investors on how to vote proxies and provides consulting services through its subsidiary, ISS Corporate Solutions, Inc., to companies seeking to improve their corporate governance. The concept release also discussed other types of potential conflicts of interest on which we reported in 2007, such as when owners or executives of the proxy advisory firm have a significant ownership interest in, or serve on the board of directors of, companies (corporate issuers) with matters being put to shareholder vote and on which the proxy advisory firm is offering vote recommendations. The concept release also requested public comments on a list of potential regulatory solutions for addressing conflicts of interest and accuracy and transparency issues. For example, SEC asked for comments about revising interpretive guidance or regulations to require more specific disclosure of the presence of a potential conflict and the extent of controls and procedures ensuring the accuracy of proxy research reports provided to institutional investor clients. SEC received about 300 comment letters on these and other issues discussed in the release. SEC staff stated these comment letters helped to inform subsequent work on proxy advisory firms (as discussed below).
Furthermore, SEC staff stated that they continue to routinely review issues raised in the concept release, and have met with several stakeholders and associations representing corporate issuers, investors, and proxy advisory firms to see if the issues are still prevalent and plan to continue these discussions with various stakeholders. Roundtable. In December 2013, SEC held a roundtable to discuss issues facing the proxy advisory industry. Participants included the SEC Chair as well as four SEC Commissioners and various officials and representatives from institutional investors, investment advisers, corporate issuers, academia, law firms, and proxy advisory firms. According to statements by the Chair, the roundtable continued the review of the use of proxy advisory services and related issues that were discussed in the 2010 concept release. The roundtable discussed the use of proxy advisory firms in general and also reviewed key topics of interest, including potential conflicts of interest for proxy advisory firms and users of their services, the transparency and accuracy of the recommendations the firms make, and what the nature and extent of institutional investor reliance on proxy advisor recommendations is and should be. The Chair stated she was particularly interested in the discussion of potential conflicts of interest. One Commissioner also drew attention to these issues in a number of speeches in 2013 and 2014. Guidance. SEC staff addressed some of the issues discussed above through guidance. After the concept release and the roundtable, SEC staff took steps to address issues in the proxy system in a 2014 Staff Legal Bulletin. SEC staff stated the bulletin summarized the staff’s views on laws and SEC regulations related to proxy advisory firms. For example, SEC staff provided guidance that spelled out various responsibilities for disclosure of conflicts of interest. 
The guidance made it clear that proxy advisory firms must provide notice of the presence of a significant relationship or a material interest. In addition, according to the Staff Legal Bulletin, such disclosure should enable the recipient to understand the nature and scope of the relationship or interest, including the steps taken, if any, to mitigate the conflict. The disclosure should also provide sufficient information to allow the recipient to make an assessment about the reliability or objectivity of the recommendation. Additionally, the bulletin clarified and restated responsibilities of investment advisers to demonstrate that proxy votes are cast in accordance with clients’ best interests and the adviser’s proxy voting procedures. Among other things, the guidance states that investment advisers who use proxy advisory firms should ascertain whether the proxy advisory firm has the capacity and competency to adequately analyze proxy issues. In doing so, the guidance states that investment advisers could consider, among other things, the robustness of the proxy advisory firm’s policies and procedures regarding its ability to ensure that proxy voting recommendations are based on current and accurate information and to identify and address any conflicts of interest. The Staff Legal Bulletin further states that investment advisers who use the services of proxy advisory firms could also consider the adequacy and quality of the firm’s staffing and personnel. Institutional investors with whom we spoke told us they perform due diligence on proxy advisory firms in various ways. A few institutional investors reported conducting various types of compliance reviews of firms, including site visits and analyst interviews. For example, one institutional investor has analysts dedicated to conducting ongoing due diligence on the data quality of the proxy advisory firm’s reports. 
This institutional investor validates the firm’s data and communicates any errors it identifies to the firm. The institutional investor said that the errors found in proxy reports generally were minor and that firms typically were able to update and correct their reports. Examinations. SEC staff also considered some of the issues discussed previously through examinations of proxy advisory firms registered as investment advisers and registered investment companies using proxy advisory firms. As discussed, proxy advisory firms that are registered investment advisers under the Advisers Act are subject to examination by SEC. According to SEC staff, proxy voting issues and proxy advisory firms may not be examined on a regularly scheduled basis because SEC uses a risk-based approach to identifying examination priorities each year. As noted previously, all entities, including proxy advisory firms, that meet the statutory definition of an investment adviser (where no exclusion from the definition is available), regardless of whether they are registered with SEC, are subject to the Advisers Act’s antifraud provisions. Legislation that has been proposed would require all proxy advisory firms to register as such, creating a new regulatory framework for the registration of proxy advisory firms. In January 2015, SEC staff announced examination priorities for 2015, which included select proxy advisory firms and how they make recommendations on proxy voting and how they disclose and mitigate potential conflicts of interest. The examination priorities for 2015 also included reviewing investment advisers’ compliance with their fiduciary duty in voting proxies on behalf of investors. SEC staff efforts on this priority were incorporated into an ongoing Never-Before-Examined Investment Company Initiative that launched in April 2015. This initiative involves focused, risk-based examinations in a number of higher-risk areas, including compliance programs. 
SEC staff announced that, as one of the areas to be reviewed within the compliance program, it would review investment companies' portfolio proxy voting policies and procedures. The examination focus would include the oversight of a proxy advisory firm retained by the investment company's investment adviser, if applicable. In determining examination priorities through a risk-based approach, SEC staff told us that the decision to examine this issue for this initiative was based on several factors, including the higher risk that these investment companies may have weaker internal controls, including procedures for overseeing proxy advisory services. As of August 2016, the initiative was ongoing. We reviewed 41 percent of the examinations completed as of August 2016 on SEC's 2015 priorities addressing proxy advisory firm issues and confirmed that SEC examined risk areas related to conflicts of interest, proxy voting policies and procedures, and oversight of proxy advisory services, among other issues. None of the examinations we reviewed resulted in serious violations leading to an enforcement action. SEC staff stated they may refer to the scope, process, or relevant legal resources used in the initiative for future examinations that review portfolio securities proxy voting, although as of August 2016 none were planned. As clarified in the Staff Legal Bulletin, the obligation to conduct due diligence over proxy advisory firms on a regular basis falls predominantly on the investment advisers using their services. Therefore, regardless of persisting perceptions of issues with proxy advisory firms as discussed above, it is the investment adviser's responsibility to vote the proxy in its clients' best interest.
SEC staff as well as officials from each proxy advisory firm provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Chair of SEC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Key contributors are listed in appendix II. This report discusses (1) the demand for proxy advisory services and the extent to which firms may influence proxy voting and corporate governance practices, (2) how proxy advisory firms develop and apply voting policies to make vote recommendations and efforts to increase transparency, and (3) the Securities and Exchange Commission's (SEC) oversight since 2007 related to proxy advisory firms and the services they provide. To address all objectives, we conducted a literature review to obtain background information and identify issues related to proxy advisory firms. We used Internet search techniques and keyword search terms to identify publicly available information about proxy advisory firms from 2008 through 2016, including the history of the industry, the number of firms in the United States, the types of proxy advisory services, and past or current issues facing the industry. From research databases such as ProQuest and LexisNexis, we obtained information from publicly available documents, such as journals, trade publications, periodicals, studies, white papers, and congressional testimony.
We also identified and conducted interviews with various officials and representatives with knowledge of the industry (SEC staff, 5 proxy advisory firms, 13 institutional investors, 11 corporate issuers, 4 proxy solicitation firms, 2 international agencies—the European Securities and Markets Authority and the Canadian Securities Administrators—and 8 industry and advocacy groups). The industry and advocacy groups were the Business Roundtable, the Chamber of Commerce's Center for Capital Markets Competitiveness, the Council of Institutional Investors, the Investment Company Institute, the Mutual Fund Directors Forum, the National Association of Corporate Directors, the National Investor Relations Institute, and the Society of Corporate Secretaries and Governance Professionals. We also interviewed other stakeholders from the Stanford Rock Center for Corporate Governance, the NASDAQ Stock Market, and the New York Stock Exchange. We conducted the interviews to gain an understanding of issues affecting the proxy advisory industry, to obtain a variety of perspectives, and to corroborate the information obtained from our other sources. The views of those interviewed are not representative of all institutional investors, corporate issuers, proxy solicitors, or industry and advocacy groups. Our criteria for selecting the interviewees consisted of several factors, such as participation in prior SEC events, including roundtables; recommendations from market participants and other stakeholders; participation in prior congressional hearings; appearance in our literature reviews and Internet searches; and mentions in bibliographies of relevant papers and studies. In selecting corporate issuers (public companies that develop, register, and sell securities to the investing public to finance their operations), we used information from the Standard and Poor's Smallcap 600, Midcap 400, and Large 500 indexes to randomly select a mix of small, midsize, and large corporate issuers.
In selecting institutional investors for our interviews, we obtained information from the Council of Institutional Investors and the Investment Company Institute to judgmentally select a mix of 13 institutional investors based on asset size and type (mutual fund companies and pension funds). We based the asset size of institutional investors on the total assets under management (AUM), or the total market value of all financial assets the institution manages for its clients or on its own behalf. To ensure a mix of large and small institutional investors, we ranked institutional investors by the total reported AUM and selected the seven institutions with the highest total AUM and the six institutions with the lowest total AUM. For purposes of this report, we defined "large" institutional investors as those with an AUM of $600 billion or more and "small" institutional investors as those with an AUM of $200 billion or less. Throughout this report, we use certain qualifiers when describing results from interview participants, such as "few," "some," and "most." We define "few" as a small number but less than "some" (two or three); "some" as more than a few relative to the total number possible (at least four); and "most" as nearly all or almost everyone relative to the total number possible (at least seven). To address the first objective, we reviewed and summarized literature and analyzed available information on users of proxy advisory firms and the demand for proxy advisory services, factors that may have contributed to demand, and the possible influence of firms on proxy voting and corporate governance practices. Specifically, to describe the demand for services, we identified the services provided by proxy advisory firms, the users of such services, and the rationale, if any, for institutional investors, in particular, to acquire proxy advisory services.
To the extent that relevant data or literature were available, we summarized information on any trends, linkages, or relationships identified in the literature. Additionally, to address the first objective, we conducted a literature search to identify relevant academic studies and working papers on the influence of proxy advisory firms. Our criteria for selection consisted of factors such as whether the studies and papers were based on original data analysis (including data that may have been gathered by others); published in a refereed medium; written or published in 2009–2016; and contained no serious methodological or other errors (as determined by our quality assessment and based on guidance for using external work in our engagements). We focused our analysis on published academic studies and academic working papers not yet published that involved quantitative analyses of proxy advisory firms' influence. We analyzed the content of these studies and papers for data or other information on the extent of the firms' influence. We reviewed whether the author concluded that the proxy advisory firms' research and recommendations moved at least some fraction of the votes or affected a company's governance decisions or practices. We also reviewed whether the author concluded that the firms' influence was positive or negative in the sense that it was potentially helpful or harmful to shareholders or investors. For the second objective, we identified and analyzed available information on how proxy advisory firms develop and apply voting policies to make vote recommendations. We analyzed information on the firms' voting policies and guidelines, such as their general, custom, and specialty policies. In some instances, we focused our review on Institutional Shareholder Services (ISS) and Glass Lewis and Co. (Glass Lewis) because they have the largest number of clients in the proxy advisory firm market in the United States.
We reviewed documentation issued by SEC and its staff, as well as by international regulators such as the European Securities and Markets Authority and the Canadian Securities Administrators, proposing principles and guidelines related to proxy advisory firm transparency. In addition, we reviewed proxy advisory firm policies and mechanisms and assessed the transparency of the firms’ voting policies, procedures, and processes, including by reviewing the firms’ websites to determine whether they disclosed information about their policies and processes. We also analyzed the views of market participants and other stakeholders on these transparency efforts. We also compared proxy advisory firms’ policies for selected voting issues with related corporate governance standards developed by other entities, such as stock exchanges and institutional investors. Specifically, we reviewed four different voting policies from the five proxy advisory firms and compared them with corporate governance standards developed by the New York Stock Exchange (NYSE), NASDAQ Stock Market, and one large institutional investor. We selected NYSE and NASDAQ because they have corporate governance requirements that corporate issuers must meet to be listed on the exchange, and some of these requirements are also addressed by proxy advisory firms. We also selected a large institutional investor that has developed its own voting policies on corporate governance issues to provide an example of how proxy advisory firm policies compare with the voting policies of institutional investors. We reviewed voting policies and corporate governance requirements for director independence, overboarding, independent chairman/chief executive officer, and proxy access issues. We selected these four topics based on what we learned from interviews with market participants and other stakeholders and from our literature review.
Although some of the proxy advisory firms have voting policies for different countries, we focused on the proxy voting policies for the United States. Lastly, for the second objective, we analyzed the policies firms use in developing vote recommendations and identified different proxy voting issues to illustrate the process. To select voting issues, we made a judgmental selection of voting events occurring after the issuance of the June 2014 SEC Staff Legal Bulletin on proxy voting and during the 2015 proxy season. We selected events that were either discussed in our interviews with market participants or other stakeholders or discussed publicly in the news media. The example events covered the areas of (1) board of directors’ issues, (2) mergers and acquisitions, and (3) executive compensation. We also reviewed available information on the steps taken to ensure that data used for developing vote recommendations are accurate and looked at the degree of communication between proxy advisory firms and corporate issuers before vote recommendations are finalized. Specifically, we reviewed ISS’s and Glass Lewis’s draft review processes and analyzed the views of market participants who have been involved with the processes. For the third objective, we reviewed and summarized SEC oversight activities since our last report in 2007 regarding proxy advisory firms and their clients. We reviewed the SEC 2010 Concept Release on the U.S. Proxy System related to proxy advisory firms and the comment letters that industry stakeholders submitted to SEC on the concept release. We reviewed the transcript of, and comments on, a roundtable SEC held about the proxy advisory industry in 2013. We also reviewed the guidance and clarification provided in the 2014 Staff Legal Bulletin on the obligations of proxy advisory firms and their clients who are registered as investment advisers.
To determine whether SEC addressed 2015 examination priorities related to proxy advisory firms registered as investment advisers and the services they provide to registered investment companies, we reviewed 41 percent of the examinations, completed as of August 2016, that related to SEC’s 2015 priorities addressing proxy advisory firm and proxy voting issues. We conducted this performance audit from August 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the above contact, Kay Kuhlman (Assistant Director), Michelle Bowsky (Analyst-in-Charge), Ria Bailey-Galvis, William Chatlos, Risto Laboski, Patricia Moye, Aku Pappoe, Barbara Roesmann, Jena Sinkfield, and Anne Stevens made key contributions to this report.
As institutional investment has grown over the last 30 years, institutional investors increasingly have relied on proxy advisory firms. The proxy advisory industry in the United States consists of five firms, two of which are the largest and most dominant proxy advisory firms. Some members of Congress, industry associations, and academics have raised issues about proxy advisory firms' influence on voting and corporate governance, the level of transparency in their methods, and the level of regulatory oversight. GAO was asked to review the current state of the proxy advisory industry. This report discusses (1) the influence proxy advisory firms may have on voting and corporate governance, (2) how firms develop and apply policies to make vote recommendations, and (3) SEC's oversight activities. GAO reviewed literature; analyzed the proxy advisory firms' policies and SEC policies and examinations; and interviewed the 5 proxy advisory firms, 13 institutional investors, 11 corporate issuers, SEC officials, and industry stakeholders. GAO randomly selected corporate issuers from Standard and Poor's indexes and judgmentally selected institutional investors (based on size and type of investor) from industry associations' information. GAO makes no recommendations in this report. GAO provided a draft to SEC for its review and received technical comments, which were incorporated as appropriate. Institutional investors, such as pension plans and mutual funds, hire proxy advisory firms to obtain research and vote recommendations on issues, such as executive compensation and proposed mergers, that are addressed at shareholder meetings of public corporations (corporate issuers). Market participants and other stakeholders with whom GAO spoke agreed that with the increased demand for their services, proxy advisory firms' influence on shareholder voting and corporate governance practices has increased.
But recent studies, market participants, and stakeholders had mixed views about the extent of the influence. For example, some said influence can vary based on institutional investor size (there is less influence on large institutional investors that often perform research in-house and have their own voting policies). Proxy advisory firms, specifically Institutional Shareholder Services and Glass Lewis & Company—the two largest firms—develop and update their general voting policies through an iterative process, involving analysis of regulatory requirements, industry practices, and discussions with market participants. Corporate issuers and institutional investors told GAO that unlike in the past, the firms have made more of an effort to engage market participants in the development and updating of voting policies, such as criteria for assessing the independence of board directors and executive compensation packages. According to the firms, they apply these general voting policies to publicly available company information to develop vote recommendations, which also are based on institutional investor voting instructions and criteria that firm analysts determine are applicable to the issue being voted on. Firms have taken steps to communicate with corporate issuers and allow review of data used to make vote recommendations before they are finalized. However, some corporate issuers told GAO that firms continue to apply policies in a one-size-fits-all manner, which can lead to recommendations not in the best interest of shareholders. Corporate issuers also stated that they often do not understand the rationale for some vote recommendations and would like to discuss them before they are finalized. Proxy advisory firms told GAO that to maintain objectivity and satisfy research reporting timelines for clients, they limit the breadth of such discussions. 
Securities and Exchange Commission (SEC) oversight of proxy advisory firms and the services they provide has included gathering information, issuing guidance, and examining proxy advisory firms and the use of the firms by investment companies, such as mutual funds. In 2010, SEC summarized concerns that market participants raised about conflicts of interest, accuracy, and transparency of proxy advisory firms and requested comments on potential regulatory solutions. SEC held a roundtable in December 2013 to discuss issues facing the proxy advisory industry and issued guidance in June 2014 on disclosure of conflicts of interest, among other things. According to SEC, it also has continued to address concerns surrounding proxy advisory firms through its examinations of investment advisers and investment companies that retain their services. SEC made these examinations a priority in 2015 and an area of focus in its ongoing initiative for registered investment companies that had not been examined by SEC.
Japan’s highly segmented banking industry is made up of separate groups of institutions engaged in short-term or long-term finance; trust activities; foreign exchange and trade financing; small business finance; and regional and agricultural finance. The segmentation of the banking industry reflects the extensive restructuring the Japanese economy underwent in the aftermath of World War II. At that time, as a key part of Japan’s efforts to promote rapid industrial recovery, the government instituted legal reforms that created pronounced specialization in banking that persists to some degree to the present day. Although the Japanese banking industry remains segmented and specialized, deregulation and liberalization since the 1970s have eliminated many functional distinctions among the different types of banks and the separate, specialized markets they formerly served. Bank regulation and supervision are the responsibility of the central government, although some financial institutions are under the jurisdiction of local governments. The current Japanese banking system had its inception during the late 19th century with the emergence of a commercial banking system that was dominated by a small number of banks associated with major industrial conglomerates. After World War II, the Japanese government’s efforts to rebuild the economy led to the dismantling of these prewar conglomerates and to restrictions on universal banking powers that were formerly allowed to banks. As part of the nation’s postwar economic and industrial recovery reforms, the Japanese government restricted banks from engaging in activities outside of banking, such as securities activities, and it limited their ownership of shares in other Japanese companies, particularly industrial companies. The result was a segmented banking structure, which even today retains some of its highly specialized character.
Japanese banks currently may accept deposits or installment savings, lend money, conduct exchange transactions, and engage in certain ancillary activities. Allowable ancillary activities include purchasing, lending, and selling securities; underwriting government bonds; and the safekeeping of securities and precious metals. In addition, through associated companies, banks can provide venture capital, consulting services, leasing, housing finance, and loans. Industry deregulation, initiated in the early 1980s and culminating in the 1992 Financial System Reform Law, now allows banks to compete in securities underwriting activities through subsidiaries, albeit with certain restrictions on those activities. In addition, there is to be a clear separation of banking and securities activities. Banks, however, are currently prohibited from participating in insurance activities and from setting up holding companies. The structure of the Japanese banking system is made up of five types of specialized financial institutions: commercial banks, which are referred to as ordinary banks; long-term financial institutions; financial institutions for small business; financial institutions for agriculture, forestry, and fisheries; and public financial institutions, which include a postal savings system that is a major source of funds for the Japanese government. (See app. I.) Ordinary banks in Japan include city banks, regional banks, and branches of foreign-owned banks. They offer a variety of products and services including deposit-taking, fund transfers, and short- to long-term loans, both domestically and abroad. Ordinary banks may also engage in certain government securities activities, including some securities underwriting, and the sale of corporate commercial paper (short-term unsecured funds) to institutional investors and financial institutions. Collectively, city banks are the largest private banks in Japan, whether measured by industry assets, loans, or deposits.
They are also among the largest banks in the world. In December 1995, the six largest banks in the world, ranked by assets, were Japanese city banks. Long-term financial institutions in Japan include long-term credit banks and trust banks. Historically, the Japanese government has established long-term financial institutions to provide long-term funds for agriculture and other industries. Until recently, these have been the only institutions permitted to raise long-term funds. Long-term credit banks may issue bank debentures with up to 5-year maturities, and trust banks may handle 5-year trust accounts. Since deregulation of the banking industry, however, ordinary banks are also making longer-term loans, and the historic differences between ordinary and long-term financial institutions have become less pronounced. The cooperative-based institutions known as financial institutions for small businesses serve the financial needs of their members, which include small- and medium-sized businesses and labor unions. Also included in this group are three central bodies serving the financial needs of their member cooperatives through such services as deposits and member loans, and a special corporation providing financial assistance for cooperative institutions. Institutions known as financial institutions for agriculture, forestry, and fisheries are made up of entities operating at three levels that serve local cooperatives. On the first level are cooperatives operating at the individual village, town, and city levels of government. These cooperatives in turn are members of a second level of prefectural-level credit federations serving clients within their prefectures. At the third level is the Norinchukin Bank, which in several respects works as the central bank for agriculture, forestry, and fisheries. Japan has 11 wholly owned government financial institutions, of which 2 are banks and 9 are public corporations.
These lending institutions, which are designed to supplement private-sector financing, are prohibited from competing with private banks. The institutions’ funds come from loans from the government’s Trust Fund Bureau, which in turn is largely financed by the government’s Postal Savings System. Although the Postal Savings System is not categorized as a bank, the magnitude of its financial resources gives it important financial significance in Japan. As of June 1995, the system held in excess of 200 trillion yen ($1.88 trillion) in deposits, making it the largest financial institution in the world. The extensive system operates out of 24,000 Japanese post offices throughout the country. As of early 1995, the 1,130 financial institutions conducting banking operations in Japan had approximately 1,148 trillion yen ($10.8 trillion) in industry assets, as shown in table 1.1. Ordinary banks alone accounted for over half, or 53 percent, of this total. Ten long-term financial institutions held the next largest share of banking assets, or 28 percent of the total. The most sizable share of banking assets controlled by cooperative-based institutions was held by financial institutions servicing primarily local communities, which had about 12 percent of total industry assets. As of February 1996, there were 90 foreign-owned bank branches in Japan, of which 16 were owned by U.S. firms, according to Japanese government officials. Appendix I provides greater detail on financial institutions in Japan. Historically, Japanese laws for bank regulation and supervision have been simple and limited in scope. The current Japanese bank regulatory and supervisory structure is based on the 1981 Banking Law, which revised earlier banking laws. The 1981 Banking Law designated the Ministry of Finance (MOF) as solely responsible for authorizing and regulating the banking industry in Japan, and it maintained MOF’s legal supervisory authority over banks. 
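The asset figures cited above pair yen amounts with dollar equivalents. As a back-of-the-envelope check, the sketch below derives the implied exchange rate of roughly 106 yen per dollar from the report's own pairing of 1,148 trillion yen with $10.8 trillion; the rate and the calculations are our inference for illustration, not figures stated in the report.

```python
# Illustrative check of the yen/dollar pairings cited above.
# The exchange rate is derived from the report's own figures
# (1,148 trillion yen = $10.8 trillion); it is our inference,
# not a rate stated in the report.
TRILLION = 1e12
yen_per_dollar = (1_148 * TRILLION) / (10.8 * TRILLION)  # roughly 106.3

def yen_to_dollars(yen):
    return yen / yen_per_dollar

# Postal Savings System deposits: 200 trillion yen should come out
# near the report's $1.88 trillion figure.
print(round(yen_to_dollars(200 * TRILLION) / TRILLION, 2))  # 1.88

# Ordinary banks' 53-percent share of the 1,148 trillion yen in
# industry assets, expressed in trillions of yen.
print(round(0.53 * 1_148))  # 608
```

The two printed values line up with the dollar figure and asset share reported in the text, which suggests the report's conversions use a single mid-1990s exchange rate throughout.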
While the Bank of Japan (BOJ) lacks the regulatory authority of MOF, it carries out its safety and soundness responsibilities under the authority granted by the 1942 Bank of Japan Law, which provides that BOJ maintain a safe and sound financial system. Changes in the bank regulatory structure have resulted from essentially two stages of evolution in the Japanese banking industry, according to historical literature. The first stage spans from the 1860s to the early 1970s and includes the origin of the banking system as well as regulatory changes following World War II. The second stage, which dates from the mid-1970s to the present, was triggered by the first oil crisis in 1973. The origins of the Japanese banking system can be traced to the Meiji Restoration period in 1869, when money-transfer companies with many of the functions of modern banks were established in major cities. Soon after, Japan’s first bank legislation (the National Bank Act) was enacted in 1872. This act created national banks, which were private banks issuing bank notes. The central bank of Japan, BOJ, was founded in 1882, although it was later reorganized under the Bank of Japan Law of 1942. The original 1882 law gave BOJ the sole right to issue bank notes, taking away this responsibility from national banks, most of which disappeared soon after. The Banking Act of 1890 converted the remaining national banks and other private banks into ordinary banks. In the same year, the Savings Bank Act of 1890 established savings banks, whose number then climbed steeply over the next decade to a peak of about 720 banks at the turn of the century. The 1915 amendment to the Savings Bank Act prohibited ordinary banks from engaging in similar savings activities until World War II, when the expansion of savings became a national policy goal. At that point, ordinary banks were allowed to conduct the same business activities as savings banks and, as a result, savings banks began to disappear.
In response to a nationwide financial panic in 1927, which heightened concerns about the stability of the Japanese banking industry, the government enacted the Banking Law of 1927. This law, which defined the structure and organization of Japan’s banking system for the following 54 years, established a banking system focused on short-term lending. Fundamental changes to BOJ’s governance structure enacted in the 1942 Bank of Japan Law also made BOJ a means for conducting monetary policy. Prior to 1942, BOJ was a stock corporation that was directly accountable to its stockholders. However, in light of wartime conditions, the 1942 law gave the government influence over the bank’s operations. The government became the majority stockholder of the bank, while voting rights were denied to all stockholders. The 1942 law, according to BOJ officials, provided BOJ a legal basis to foster and maintain a sound financial system. In particular, BOJ believes that the law provided a stronger statutory basis for conducting its safety and soundness examinations of client banks, which began in 1928. In addition to on-site examinations, BOJ relies on frequent contacts to ensure that financial institutions follow sound practices. BOJ carries out such examinations under the terms of contractual agreements made with all banks that have current accounts with BOJ. The roles and responsibilities of BOJ are currently under review by an ad hoc advisory committee to the Prime Minister with the aim of possibly making changes to recognize the changing economic and financial environment.
The national goal of protecting and strengthening Japanese securities companies, for example, led to the adoption of restrictions similar to those provided for in the Glass-Steagall Act, which separates the U.S. banking and securities industries. During the period from post-World War II to the 1970s, the “main bank system” (defined as a unique business relationship between banks and companies) played a key financial role in Japan’s economic expansion. Under the main bank system, banks and companies were closely tied to each other through practices such as cross shareholdings and exchanging senior management personnel (usually from main banks to companies). As a result, companies enjoyed stable funding regardless of their health, while main banks maintained solid market share by supplying loans to companies. However, in subsequent years, factors such as an increase in the funding needs of companies due to economic expansion, the diversification of company funding sources due to financial liberalization, and the development of risk management based on portfolio diversification have diluted the relationship between banks and companies under the main bank system. Japan was shaken from a period of stable economic growth by the first global oil crisis in 1973. The crisis, which initially disrupted the banking industry along with Japan’s other economic sectors, eventually prompted the evolution of a more flexible, open, and international system. In turn, these changes have helped Japan emerge as a major global financial presence in the years since. The shock to Japan’s economic growth brought about by skyrocketing oil prices led, within a relatively brief period, to a doubling of Japan’s public sector debt. To fund the public debt, the government issued an increasingly large volume of government bonds. In 1979, 6 years after the 1973 oil crisis, the government issued bonds worth a record total of 15.3 trillion yen ($70.5 billion). 
However, as the deficit increased, financial institutions became less willing to help the government absorb the debt at above-market prices. Businesses also became less dependent on bank credit and services, such as bank loans. As a result of these developments, banks began to seek out new markets outside the traditional financial marketplace. To expand their market share and increase their competitiveness, banks and securities companies became advocates of financial liberalization, and banks began to diversify their loans and funding. The resulting liberalization, which began in the late 1970s, has continued over the course of succeeding decades and has primarily affected three areas: interest rates, scope of business, and foreign exchange controls. The relaxation of restrictions on interest rates began in 1979, with the introduction of negotiable certificates of deposit, followed soon after by the emergence of money market certificates of deposit paying interest rates linked to money market accounts. By October 1994, interest rates had been liberalized on all deposits except for checking accounts. Over the same period, the relaxation of lending regulations had enabled banks to increasingly set their short-term prime rates relative to the official discount rate. The enactment of the 1980 Foreign Exchange Law eased the regulation of banks’ foreign exchange activities, except during times of crisis. The 1986 opening of the Tokyo offshore market further liberalized Japanese banks’ foreign exchange activities. The primary law governing bank licensing, regulation, and supervision in Japan today is the 1981 Banking Law. The complete revision of the past law, the 1927 Banking Law, was prompted by the economic and financial changes that took place in Japan after the first oil crisis.
The 1981 law was designed to maintain financial order and promote economic development by ensuring sound and appropriate bank management, depositor protection, and facilitation of financial transactions. The law designated MOF as the governmental body responsible for authorizing and regulating banks. The 1981 law reorganized the basic supervisory framework for Japanese banks without making major changes to MOF’s authority or responsibilities, but it provided more guidance on the conduct of banking business than did the 1927 Banking Law, which until then had delineated the basic requirements for Japanese banks. Specific areas covered by the 1981 law include general requirements, such as banking licenses and capital requirements; permissible banking business; required reports; MOF supervision; MOF enforcement and penalty provisions; merger and transfer or acquisition of business; termination of business; and licenses for foreign bank branches. The 1992 Financial System Reform Law was meant to be a comprehensive reform of Japan’s financial and securities transaction systems corresponding to domestic and international developments. The law, which was enacted to expand the scope of permissible business activities, eliminated many differences among financial institutions, allowing them to compete in one another’s sectors through subsidiaries, albeit with restrictions and firewalls. In particular, it allowed Japanese banks to conduct securities business through subsidiaries in which they have at least a 50-percent share. The law also provided MOF with the authority to establish standards to safeguard the soundness of banks and controls over transactions between banks and their subsidiaries.
During 1995 and early 1996, Japanese banks and the banking system were confronted with several events that encouraged authorities to enhance the ability of the regulatory and supervisory process to deal with industry problems. These events included (1) a high number of nonperforming loans, (2) near depletion of the deposit insurance fund, and (3) large losses suffered by a major Japanese bank—the Daiwa Bank—due to improper trading by an employee. The nonperforming loan problem originated in the economic boom of the late 1980s, when Japanese banks substantially increased their real-estate-related lending. After years of rapid appreciation, asset prices then depreciated rapidly. The value of nonperforming loans held by Japanese financial institutions as of March 1996, according to MOF, was 34.8 trillion yen ($326 billion), a condition considered unacceptable by the Japanese government. Nonperforming loans, which have caused several credit cooperatives and regional banks to fail, have also called into question the financial soundness of other financial institutions. As a result, Japanese officials recently undertook an analysis of the nonperforming loan problem, which has led to changes in the supervisory process. The Japanese government’s attention has also been directed toward devising supervisory responses to the problems of one particular type of institution, housing loan companies—called jusen—which have experienced heavy losses. Japan’s eight jusen, which were established in the 1970s by Japanese banks and other financial institutions such as insurance companies and securities firms, have been especially hard hit in recent years with the steep decline of the Japanese real estate market.
Although their original intended function was to supplement home mortgage lending, jusen became heavily involved in commercial real estate and housing development lending, which contributed to their losses when the Japanese real estate market declined sharply in early 1992. As of March 1996, nonrecoverable problem loans of jusen were estimated at 6.3 trillion yen ($59 billion). In Japan, there was widespread concern that the failure of one or more jusen could spark public panic and lead to a chain reaction of withdrawals from other financial institutions, since many financial institutions had provided financing to jusen companies. To avert such a crisis, the Japanese government designed a plan aimed at rebuilding public confidence and protecting depositors, including establishing a jusen account in the Deposit Insurance Corporation (DIC) with a governmental contribution of 680 billion yen ($6.4 billion). In the summer of 1995, Daiwa reported that a securities trader in its New York office had initiated improper trades over an 11-year period that had gone undetected. Reported losses totaled more than $1 billion. In October 1995, BOJ conducted a special on-site examination of Daiwa Bank, a major city bank, to ascertain the facts at its New York Branch, as well as to evaluate Daiwa’s overall risk management system. Also in October, banking regulators in the United States issued cease and desist orders against Daiwa requiring a virtual cessation of trading activities in the United States. In November 1995, MOF identified and took action intended to correct inappropriate management practices at Daiwa Bank. MOF also ordered Daiwa Bank to reduce its international operations, including the amount of loans outstanding, the amount of securities holdings, and market-related activities. MOF and BOJ also committed themselves to strengthening their oversight of overseas branches and offices of Japanese banks.
In the last 2 years alone, DIC has provided financial assistance totaling 643.3 billion yen ($6 billion) to assist in the resolution of troubled credit cooperatives and regional banks, which has come close to depleting the deposit insurance fund. At the time of our visit in September 1995, a DIC senior official told us that the insurance fund could be depleted if current resolution plans were implemented to handle the remaining failing financial institutions. In June 1996, the Diet—the Japanese Parliament—passed three financial bills to facilitate the resolution of failed or failing institutions and to increase deposit insurance premiums. Bank licensing and regulation are the responsibility of MOF. However, both MOF and BOJ have responsibilities for ensuring the safety and soundness of the banking system. The two agencies’ responsibilities do not typically extend to credit cooperatives, which are generally supervised at the local government level. MOF, the government’s central agency with jurisdiction over the banking industry, is responsible for bank licensing, regulatory compliance, guidance, and supervision. Originally created in 1869, MOF was first granted legal authority to supervise banks in 1890; that authority was defined again in the 1949 Ministry of Finance Establishment Law, which was enacted during a major government reorganization after World War II. The statute used by MOF to carry out its current responsibilities is the 1981 Banking Law. Bank supervision is just one of MOF’s broad responsibilities. Among other things, MOF is also responsible for overall administration of the government’s fiscal and related monetary functions, including budget formulation and execution, and tax assessment and collection. The formulation, execution, and coordination of the national budget allow MOF to play a pivotal role within the national government. This currently includes approving BOJ’s budget.
MOF is headed by the Minister of Finance, a cabinet member appointed by the Prime Minister. The ministry is 1 of 12 ministries reporting to the Prime Minister. MOF’s organizational structure consists of one secretariat and seven bureaus. The Banking Bureau is the main bureau responsible for regulatory guidance and supervision of banks, but it shares these responsibilities with MOF’s Secretariat and the International Finance Bureau. Generally speaking, domestic banking issues are under the auspices of the Banking Bureau, and international banking issues are under the International Finance Bureau. The Banking Bureau consists of five divisions and one department. Three divisions—the Commercial Banks Division, the Special Banks Division, and the Small Banks Division—share responsibilities for providing supervision and regulatory guidance to banks. As of September 1995, according to MOF, the Banking Bureau had a staff of 130. The Banking Bureau also works with MOF’s Securities Bureau in supervising bank securities activities. The Securities Bureau provides guidance and supervision to a broad range of participants in the securities market, including financial institutions engaged in securities business. As of September 1995, according to MOF, the Securities Bureau had a staff of 90. The International Finance Bureau oversees the foreign activities of Japanese financial institutions. It also handles international finance-related affairs, including those involving the international currency system, the yen’s internationalization, balance of payments, and foreign exchange control; and it coordinates activities with its foreign counterparts. As of September 1995, according to MOF, the International Finance Bureau had a staff of 114. Prior to 1992, bank inspections were conducted separately by the individual bureaus. Since then, the MOF Secretariat’s Financial Inspection Department has been responsible for conducting all inspections. 
As of September 1995, according to MOF, the Financial Inspection Department had a staff of 112, of which 80 to 90 were assigned to inspection teams. An additional 307 inspectors work in local branch offices, primarily inspecting shinkin banks. However, when needed, they conduct joint inspections of regional banks with inspectors of the Financial Inspection Department. In fiscal year 1996, there is to be an increase of 20 inspectors in the Financial Inspection Department and an increase of 46 inspectors in local branch offices, according to MOF officials. To strengthen oversight of the securities market, MOF established the Securities and Exchange Surveillance Commission (SESC) as a separate agency in July 1992. SESC is authorized to inspect securities companies, conduct surveillance of market transactions, investigate suspected criminal offenses, and propose policy changes to MOF. If illegal activities are discovered, SESC may recommend disciplinary actions to MOF. SESC has the authority to obtain a court warrant, and it can bring charges against a suspect through the Public Prosecutor’s Office if it believes a crime has been committed. SESC has a chairman and two commissioners, whom MOF appoints with the consent of the Diet. They have equal power and serve 3-year terms. SESC has an Executive Bureau consisting of 2 divisions and 11 regional offices, with a staff of 206 employees as of February 1996. BOJ first started examining banks in 1928, following financial crises caused by the recession after World War I and the Kanto Earthquake of 1923. All institutions having current accounts with BOJ are subject to its examinations in accordance with contractual agreements with BOJ. They include city banks, regional banks, trust banks, long-term credit banks, most shinkin banks, overseas branches and affiliates, branches of foreign-owned banks, and some securities companies. 
BOJ has two principal missions: (1) stabilizing the value of money and (2) fostering a safe and sound credit and finance system. To keep the currency stable, BOJ: influences the money supply and money markets; implements monetary policy and controls credit by setting the official discount rate, directly selling and buying securities and bills in the financial markets, and imposing the reserve deposit requirement; and intervenes—as the agent of the Finance Minister—in the foreign exchange market to stabilize the yen’s value against foreign currencies. To foster a safe and sound financial system, BOJ: facilitates payments and settlements by issuing bank notes and providing funds transfer services among bank accounts; monitors financial institutions and markets through regular contacts, on-site examinations, and the provision of advice; and acts as lender of last resort. Legally, BOJ is a special corporation in a unique category. While BOJ’s budget is currently approved by MOF, BOJ is considered to be neither a government entity nor a private institution within the structure of the Japanese financial system. Although it coordinates some activities with MOF, BOJ functions as an independent organization separate from MOF, according to MOF officials. In March 1996, BOJ, whose assets totaled 57.7 trillion yen ($541 billion), had responsibilities for 700 financial institutions, as shown in table 1.2. BOJ is headed by its Governor. The Governor is appointed by the Cabinet for a 5-year term and may be reappointed. Historically, BOJ governors have alternated between individuals with MOF or BOJ backgrounds. The Governor is the link between the bank’s executive board and BOJ’s Policy Board. The Policy Board, BOJ’s highest decisionmaking body, has sole authority over monetary policy, including decisions on the official discount rate. The Policy Board was established in 1949 by amendments to the 1942 Bank of Japan Law. 
The amendments were in response to a desire to modernize the Japanese monetary and economic system and to enhance BOJ’s independence. Board members include BOJ’s Governor; representatives from MOF and the Economic Planning Agency; and four individuals with experience in and knowledge of banking, commerce, manufacturing, or agriculture. Government representatives from MOF and the Economic Planning Agency are nonvoting members. The four “knowledgeable and experienced” members, who are appointed by the cabinet with approval from the Diet, serve renewable 4-year terms without restrictions. BOJ has 13 departments, a Secretariat of the Policy Board, the Governor’s office, and an Institute for Monetary and Economic Studies. In addition to its 33 branches and 12 local offices in Japan, BOJ has overseas offices in New York; Washington, D.C.; London; Paris; Frankfurt; and Hong Kong. Bank monitoring is handled by the Bank Supervision, Financial and Payment System, and Credit and Market Management departments, according to BOJ. Within BOJ, the Bank Supervision Department is primarily responsible for monitoring financial institutions. Headed by a director, it is divided into two divisions: the Bank Supervision Division and the Data Analysis Division. The former manages on-site examinations of banks and securities companies through four examination groups. The latter compiles and analyzes various statistics regarding financial institutions. As of October 1995, according to BOJ, the Bank Supervision Department had an examination staff of between 100 and 120. The role of BOJ’s Financial and Payment System Department is to maintain and foster a safe and sound credit system. It sets out basic macro-prudential policies, including working out the disposition of failed banks. BOJ’s Credit and Market Management Department oversees the activities of domestic and overseas financial institutions. 
It also monitors money and capital markets, administers BOJ’s money operations, and conducts off-site monitoring of financial institutions’ activities in such broad areas as day-to-day cash positions and long-term management strategy. Each of the 47 prefectural governments authorizes and supervises credit cooperatives in its own prefecture. However, credit cooperatives must obtain MOF’s authorization if their activities go beyond the prefecture’s geographical boundaries. Although MOF and BOJ do not have responsibility for supervising credit cooperatives, such institutions are required to be insured by the deposit insurance system. If a request is received from the prefectural governor, MOF may inspect a credit cooperative. The recent failure of several credit cooperatives has prompted the government to consider adopting measures to ensure close cooperation between national and local supervisory authorities. Measures under consideration are intended to provide local authorities with timely guidance, clarify conditions warranting MOF inspections, establish regular meetings, and provide for joint inspections by MOF staff and local authorities. As of April 1995, according to MOF, Japan’s prefectural governments had a supervisory and inspection staff of 338, of which 264 were inspectors. According to Japanese banking industry representatives, prefectural inspections are conducted by an insufficient number of inspectors, who must also carry out various other noninspection duties. At the request of Congressman Charles E. Schumer, we examined various aspects of the bank regulatory and supervisory structure of a number of countries. 
Specifically, our objectives were to describe how (1) Japanese bank regulation and supervision is organized; (2) Japan’s banking oversight structure functions, particularly with respect to bank licensing, regulation, and supervision; (3) banks are monitored by their supervisors; and (4) participants handle other financial system responsibilities. This report focuses more attention on describing the legal structure within which Japanese banking oversight has been conducted and less attention on the methods used to carry out that oversight. To address these objectives, we interviewed senior officials from MOF and BOJ, both in Japan and in the United States. They provided us with documents and information, including annual reports, tables of statistics, translations and analysis of selected banking legislation, organizational summaries and charts, reports on the Japanese banking structure, lists of reports banks must submit, and other documents to illustrate the current regulatory and supervisory environment. In addition to those interviews, we met with senior representatives of Japan’s DIC; the Federation of Bankers Associations of Japan (Zenginkyo); the Japanese Institute of Certified Public Accountants (JICPA); senior executives at six Japanese banks representing a cross-section of Japan’s specialized financial structure; senior executives from a public accounting firm; experts on the Japanese banking structure; and U.S. agencies with regulatory responsibilities over foreign banks: Department of the Treasury, the Federal Reserve, and the Office of the Comptroller of the Currency. Finally, we relied on translations of the 1981 Banking Law, the law that relates most directly to bank regulation and supervision in Japan, and the 1942 Bank of Japan Law, which gave Japan’s central bank its oversight authority. 
We also relied on translated summaries of three bills passed in June 1996 by the Diet, which significantly changed Japan’s regulatory process and the disposition of failed and failing institutions. This report does not include an evaluation of the efficiency or effectiveness of the Japanese bank regulatory structure. We conducted our review, which included one visit to Japan, from June 1995 through July 1996 in accordance with generally accepted government auditing standards. We gave senior officials and executives of MOF, BOJ, DIC, JICPA, the Federation of Bankers Associations of Japan, and the three city banks we visited a draft of this report for their comments. They provided comments that were incorporated in the report where appropriate. MOF, as supervisor and regulator, licenses banks, regulates most aspects of Japan’s banking operations, and monitors any developments in bank operations that may adversely affect the banking system, in accordance with the 1981 Banking Law. In its role as Japan’s central bank, BOJ is to ensure the safety and soundness of the financial system through its oversight of financial institutions. Changes to oversight are being proposed due to the mounting levels of nonperforming loans. These changes are intended to make the supervisory system more transparent and increase the accountability of individual banks. The 1981 Banking Law requires each bank to obtain a license from MOF. The law defines “banking” as a business that accepts deposits and makes loans or conducts exchange transactions. However, a license is also required of institutions that accept deposits or installment savings regardless of whether they lend money or discount bills at the same time. 
In reaching licensing decisions, according to the 1981 law, MOF is to consider the applicant’s: financial capability to conduct banking soundly and efficiently, and the potential income and expenses of its planned business operations; competence, experience, and credibility to conduct banking appropriately, fairly, and efficiently; and reasons for entering the banking business and anticipated effects on the existing financial system (e.g., supply and demand of funds, the operations of existing banks and other financial institutions, and the local economy). After applying the above criteria, MOF may impose conditions on a license to the extent it believes the public interest could be affected. Banks must obtain permission from MOF to establish a head office, branch, or subbranch and to relocate, change the status of, or close any such offices. However, Japanese banks are free of geographical restrictions on where their branches can be located. Foreign-owned banks wishing to establish a branch or agency in Japan are required to obtain a license from MOF. Separate licenses are required for each branch. Concurrently, according to BOJ officials, BOJ determines whether to allow the bank to open an account with BOJ. For the fiscal year ending March 31, 1995, MOF reviewed four license applications for foreign-owned bank branches and approved all four. These banks also established accounts with BOJ. In addition to complying with Japanese laws, branches of foreign-owned banks in Japan must conduct their banking business in accordance with the banking laws of their home country. They are to be supervised on a consolidated basis by the home country’s authorities, who have primary responsibility for the operation of the parent bank, according to MOF. Both BOJ and MOF examine or inspect branches of foreign-owned banks in Japan. A licensed bank is also required to be incorporated and properly capitalized. 
The 1981 Banking Law set a minimum capitalization level of at least 1 billion yen ($9.4 million) for banks established in Japan. This threshold has since been raised to 2 billion yen ($18.8 million). According to MOF officials, banks are subject to two target capital ratio standards. Domestic banks with no overseas establishments are subject to a minimum 4-percent capital adequacy ratio. Banks maintaining overseas branches or subsidiaries are subject to an international minimum risk-based capital standard of 8 percent agreed to by the Basle Committee. MOF does not apply these standards to branches of foreign-owned banks in Japan because they are to be supervised on a consolidated basis by their home countries. MOF has broad responsibilities for formulating and carrying out policies relating to banks. Banking, securities, and other laws establish MOF as the primary, if not sole, authority with responsibility for financial regulation in Japan. Under these laws, the agency has responsibility for regulating most aspects of Japan’s banking operations, including sources and uses of funds, terms on which banks can borrow and lend, activities in which they may engage, branching and merger activities, and investment decisions regarding other companies’ stockholdings. In Japan, legislative proposals are generally drafted by individual government ministries and are submitted through the cabinet to the Diet. Japanese laws typically give government ministries considerable latitude in their interpretation and implementation. Laws are supplemented by two types of governmental ordinances: cabinet and ministerial. Unlike laws, ordinances do not need to be passed by the Diet, so they are used for adjustments required in response to social changes. For example, a bank’s minimum capitalization requirement is set by cabinet ordinance and its business hours are set by ministerial ordinance. 
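The two target capital ratio standards described above—4 percent for purely domestic banks and the 8-percent Basle risk-based minimum for banks with overseas operations—can be illustrated with a short sketch. The function name and balance-sheet figures below are hypothetical, not drawn from MOF guidance.

```python
# Illustrative sketch of Japan's two target capital standards as
# described in the text: 4 percent for banks with no overseas
# establishments, 8 percent (Basle risk-based minimum) for banks
# maintaining overseas branches or subsidiaries.

def meets_capital_standard(capital, risk_weighted_assets, has_overseas_operations):
    """Return True if the bank's capital ratio meets its target standard."""
    minimum = 0.08 if has_overseas_operations else 0.04
    return capital / risk_weighted_assets >= minimum

# A hypothetical domestic-only bank: 5 billion yen of capital against
# 100 billion yen of risk-weighted assets is a 5-percent ratio,
# above the 4-percent domestic standard.
print(meets_capital_standard(5e9, 100e9, False))  # True

# The same balance sheet would fall short of the 8-percent
# international standard if the bank operated overseas branches.
print(meets_capital_standard(5e9, 100e9, True))   # False
```

The same dividing line explains why a bank opening its first overseas branch may need to raise capital even though nothing on its domestic balance sheet has changed.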
Bank activities are also subject to circulars and administrative notices issued by MOF. Circulars are used to explain laws and ordinances and to give guidance on their practical application. For example, MOF circulars established standards for judging institutional soundness, such as liquidity ratios. Similar circulars established uniform standards for bank accounting and reports. MOF typically sets policy by consensus—a process that involves the input of many parties, such as other governmental agencies, industry groups, academic groups, and BOJ. In Japan, offices, ministries, and government agencies may establish councils, or government advisory bodies, for the purpose of studying and discussing important issues or to provide administrative reviews. These councils are often responsible for initial discussions of major regulatory issues that eventually result in ministerial ordinances or legislation. The Financial System Research Council (FSRC), a senior-level consultative body to the Minister of Finance, is one such advisory body. The current FSRC, which is mandated by law, was originally created in 1956 to study the monetary system and make recommendations to the Minister. It now has 17 members, chosen for their broad range of experience and expertise in the financial, industrial, and academic communities, to provide a broad-based forum on policy issues and to conduct studies of the Japanese financial system. The Banking Bureau serves as FSRC’s Secretariat, and it provides FSRC with both information and resources, according to MOF officials. Over the years, FSRC has served as a forum for discussing and analyzing proposed changes in Japan’s banking legislation. Its findings provided a basis for the 1981 Banking Law. In 1992, changes it recommended for Japan’s compartmentalized financial system similarly became the basis for the Financial System Reform Law. 
More recently, in a report issued to the Minister of Finance in December 1995, FSRC proposed ways to restore Japan’s nearly depleted deposit insurance fund, promptly dispose of nonperforming loans, ensure sound management of financial institutions, and dispose of failing financial institutions. In addition to working with FSRC, banks also influence changes in policy through their bankers’ associations. Japanese banks nationwide are organized into regional associations whose primary function is to operate a clearinghouse to clear checks and bills for participating institutions. For example, the Tokyo Bankers Association operates the Zengin data telecommunication system, which is a domestic funds transfer system operated on a national scale. The associations also play an important role in communicating the industry’s views to governmental agencies. Another group with a key role in communicating the banking industry’s views to the government is the Federation of Bankers Associations of Japan, or Zenginkyo. The Zenginkyo is a consortium of regional bankers associations that acts as a representative for banks throughout Japan. Because the Zenginkyo represents a broad constituency, it attempts to reflect the views of its broad membership, not the particular interests of individual subgroups. This broad constituency has led such subgroups as city banks to turn to other types of affiliations to further their specific interests. In its role as Japan’s bankers’ bank, BOJ maintains current accounts for its client institutions. Funds held in current accounts are used to clear accounts, make remittances among districts, and settle other transactions among financial institutions. BOJ also discounts bills, a form of credit provision. In addition, it buys from or sells to current account holders various bills and bonds, including long-term government bonds, and government short-term bills. 
In ensuring the safety and soundness of the financial system, both MOF and BOJ monitor banks through on-site monitoring, reviews of financial reports, and frequent contacts. When corrective action is necessary, MOF and BOJ typically rely on guidance or advice, a form of moral suasion, as their main means of enforcement. MOF provides supervisory direction and guidance by issuing administrative guidelines and notifications, which function as important components of Japan’s banking regulatory system. MOF designates its on-site monitoring of banks as inspections based on its statutory authority, while BOJ calls its on-site monitoring examinations and conducts them under its contractual agreements with client banks. Despite the different terminology, actual on-site monitoring activities are somewhat similar, although their monitoring objectives are different. In conducting its supervisory responsibilities, MOF is required to ensure that each bank subject to MOF’s supervision operates within limits set by both Japanese banking legislation and the bank’s own internal policies, and to monitor any developments in bank operations that may have an adverse effect on the integrity of the bank involved or the banking system as a whole. BOJ also requires banks to undergo periodic on-site examinations and to submit necessary information upon request, which allows BOJ to obtain an understanding of each bank’s operations and thus fulfill its responsibility to maintain and foster the safety and soundness of the financial system. BOJ carries out its bank oversight primarily through its Bank Supervision Department. Both MOF and BOJ may inspect or examine banks at any time and with any frequency, although each typically examines the average bank once every 2 to 3 years. Officials from the agencies told us that they coordinate their on-site monitoring with each other so that banks are generally examined annually by either MOF or BOJ. 
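The coordination described above—each agency visiting roughly every 2 years, staggered so that a bank sees one of them each year—can be sketched as a simple schedule. The years and the exact 2-year cycle below are illustrative assumptions, not an actual MOF or BOJ schedule.

```python
# Illustrative sketch of MOF/BOJ alternation: if each agency monitors
# a bank on-site every 2 years but their cycles are staggered by
# 1 year, the bank receives some on-site monitoring every year.
# The starting year and cycle length are hypothetical.

def monitored_years(first_mof_year, n_years, cycle=2):
    """Years in which the bank is visited by MOF or BOJ (staggered cycles)."""
    mof = set(range(first_mof_year, first_mof_year + n_years, cycle))
    boj = set(range(first_mof_year + 1, first_mof_year + n_years, cycle))
    return sorted(mof | boj)

# Over a hypothetical 6-year window, every year is covered.
print(monitored_years(1990, 6))  # [1990, 1991, 1992, 1993, 1994, 1995]
```

The point of the sketch is only that two independent 2-to-3-year cycles, when staggered, yield roughly annual coverage without either agency increasing its own inspection frequency.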
For the fiscal year ending March 31, 1995, according to MOF, a total of 485 Japanese banks were inspected or examined by either MOF or BOJ, as shown in table 2.1. Actions that MOF and BOJ take against banks subject to their jurisdiction are typically through guidance or advice. MOF relies on administrative guidance to influence actions taken by banks it supervises. BOJ lacks such authority because it is not a regulatory authority. Instead, BOJ provides advice, which, according to government and bank officials, banks generally follow. Under the 1981 Banking Law, MOF can suspend a bank’s business activities or revoke its license if the bank violates a law, its articles of incorporation, or MOF’s enforcement actions, or if its activities undermine the public interest. The Banking Law also provides penalties for law violations, including fines and imprisonment. For example, individuals can be liable for imprisonment or fines of up to 3 million yen ($28,140) if they conduct banking activities without obtaining a license from MOF. Individuals are also liable for fines of up to 500,000 yen ($4,690) if they do not meet reporting requirements, or if they refuse, obstruct, or circumvent an examination. However, according to MOF officials, there have been no cases in which MOF has used fines or imprisonment. According to bank officials, MOF has the authority to correct the operations of a troubled financial institution. MOF can also remove bank managers from their positions and order the restructuring of a bank’s management or suspension of its business if violations of laws and regulations are found. Although MOF has such legal enforcement authority, until recently it has not been used. Instead, banking industry representatives said, MOF prefers to rely on administrative guidance as its primary means of enforcement. MOF also provides supervisory direction and guidance through the frequent contacts it has with banks. 
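The dollar equivalents quoted for the fines above appear to use a single conversion rate: dividing the yen amounts by their dollar figures implies roughly 106.61 yen per dollar. The rate in the following sketch is inferred from the text, not stated in it, and is used only to show that the quoted figures are mutually consistent.

```python
# Illustrative sketch: the report's dollar figures for the Banking Law
# fines (3 million yen = $28,140; 500,000 yen = $4,690) imply an
# exchange rate of about 106.61 yen per dollar. This rate is inferred
# from the quoted figures, not an official rate.

YEN_PER_DOLLAR = 106.61  # inferred, hypothetical constant

def yen_to_dollars(yen):
    """Convert a yen amount to a rounded dollar figure at the inferred rate."""
    return round(yen / YEN_PER_DOLLAR)

print(yen_to_dollars(3_000_000))  # 28140
print(yen_to_dollars(500_000))    # 4690
```

Applying the same inferred rate to the larger figures in the chapter (e.g., 680 billion yen as roughly $6.4 billion) reproduces them to within rounding, suggesting one rate was used throughout.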
MOF’s administrative guidance basically involves the agency interpreting existing laws and regulations and providing these interpretations to banks. This guidance can take the form of oral guidance or written circulars or notices. MOF sees this flexibility as one advantage of administrative guidance. MOF officials also described such forms of guidance as preferable to initiating legal proceedings in the Japanese court system, which can be a lengthy process. Although administrative guidance is not legally enforceable, government officials and bankers said that banks are expected to act on it and they typically do. According to MOF, banks are allowed to interpret administrative guidelines within reason. When conflicts arise, differences are resolved through discussions between bank officials and MOF. Circulars or notices may also be used to clarify or explain terms and concepts. The Japanese government recently adopted new legislation to ensure that governmental administrative actions are more transparent. In October 1994, the Administrative Procedures Law was passed to establish standards for fairness and openness in the administrative process. The law, among other things, requires a clear explanation of administrative decisions, and it requires that guidelines for regulated institutions, which include banks, be standardized. It also confirms that compliance with administrative guidance is strictly voluntary. Under new provisions of this law, MOF is to issue administrative guidance in writing, if required by the affected party. According to MOF, the number of cases in which it gave what it termed concrete guidance on business-improvement measures to major banks totaled 178 in 1992, 129 in 1993, and 127 in 1994. In order to fulfill its mission of maintaining price stability and fostering a safe and sound financial system, BOJ said it extends safety and soundness advice, if necessary, to solve the prudential problems of each examined bank. 
In this regard, BOJ’s advice is different from law-based action taken by governmental agencies, such as MOF. BOJ’s authority comes from its contractual agreements with client banks. Advice to banks may cover such areas as operational safety and soundness and risk concentration. The high number of nonperforming loans and the near depletion of the deposit insurance fund in 1995 led Japanese officials to conclude that changes were needed in the supervisory process. In September 1995, after almost 3 months of deliberation, a committee of FSRC proposed a number of supervisory changes. Interim and final reports by FSRC observed that supervisory authorities should have responded to the loan problem by constructing a financial system in which market mechanisms and the principle of self-responsibility of both banks and depositors would come fully into play. Specifically, the reports proposed strengthening supervisory oversight and suggested that supervisory authorities: take action in a timely manner, inspect financial institutions more frequently, increase the number and quality of inspection and monitoring staff, introduce tools to promote the prompt correction of financial institutions’ mismanagement, promote more disclosure of nonperforming loans, and implement a prompt disposal procedure for failing or failed financial institutions. In addition, in late December 1995, MOF announced plans intended to reform Japan’s bank supervisory system. MOF was to (1) issue new guidance for banks regarding internal controls and risk management, (2) increase its staff of bank inspectors from 420 to 490, (3) make greater use of external audits and encourage their use in overseas branches, and (4) promote a closer exchange of information with other supervisory authorities abroad. 
Under the new supervisory system, banks will be encouraged to improve their own internal control and risk management systems in accordance with new MOF guidelines, make greater use of certified public accountants (CPA) to conduct external audits, provide timely notification of wrongdoing, and ensure their business operations comply with requirements through check and balance functions. Collectively, these measures are intended to make the supervisory system more transparent and increase the accountability of individual banks. Some supervisory reform measures are already under way, including the passage of three reform bills in June 1996. In addition to increasing the frequency and scope of inspections of overseas branches and subsidiaries in Asia, MOF has issued an inspection checklist on overseas offices. BOJ has initiated special examinations of the New York branches of some major Japanese banks. In addition, BOJ is expanding the scope of its examinations of overseas branches to (1) enhance examinations, (2) upgrade examination skills, and (3) increase cooperation with other central banks. MOF and BOJ obtain information needed to fulfill their safety and soundness responsibilities primarily through on-site and off-site monitoring. MOF and BOJ also rely on required and ad hoc reports from banks, frequent meetings with banks, and their own research and analysis. The two agencies cooperate with each other, as necessary, in order to achieve their distinct missions. Neither agency has typically used audit information developed by external, statutory, or internal auditors. Although the scope of MOF’s and BOJ’s on-site bank monitoring, which MOF calls inspections and BOJ calls examinations, is similar, the two agencies’ actual on-site monitoring is carried out largely independently of one another. Recently, due to financial liberalization, both MOF and BOJ have placed greater emphasis on their on-site monitoring of risk management. 
Although there is no legal requirement governing the frequency of bank examinations, MOF and BOJ coordinate their monitoring efforts to ensure that most banks are monitored annually. This coordination allows MOF and BOJ to alternate their on-site monitoring of the approximately 700 banks subject to their inspections or examinations. MOF conducts an on-site inspection of the banks it supervises about once every 2 to 3 years. The average duration of inspections and the size of inspection teams vary due to several factors, including the size of the bank and its operational record. Inspections of city banks take about 6 weeks and involve about 10 inspectors. For regional banks, inspections last 4 to 5 weeks and typically require five inspectors. Inspections of shinkin banks, which are conducted by one of MOF’s regional bureaus, take about 2 weeks and involve four to five inspectors. Inspection teams are led by a chief inspector, and team members are responsible for different components of the inspection. MOF conducts two types of on-site inspections—comprehensive inspections and inspections focusing on specific aspects of a bank’s operation, such as credit-risk or market-risk management. Comprehensive inspections, which are the most common type of inspection, are unannounced. Nonetheless, banking officials said the timing of past visits tends to indicate when they are likely to receive their next inspection. Comprehensive inspections assess all major elements of a bank’s activities, including regulatory compliance, assets and liabilities, profits and losses, general business operations, and such physical items as cash on hand. As part of the inspection process, MOF inspectors check a bank’s overall risk management policies and procedures. They also check the bank’s compliance with regulations related to financial soundness, such as those dealing with its minimum capital ratio requirements and large loan exposures to a single party. 
MOF inspectors also review the bank’s compliance with other regulations, for instance, those specific to the risk management of a particular business activity or product. Prior to conducting a comprehensive inspection, MOF inspectors review bank documents to help focus their on-site efforts. Following this review, they initiate the inspection, beginning with physical items, at one branch or simultaneously at several branches. Comprehensive inspections usually include verification of records; inspectors typically examine cash, securities, notes, legal documents, deposits, and loans. At any time during the inspection, MOF inspectors can request additional information. Inspectors classify assets according to their likelihood of repayment. Such classifications, combined with an analysis of the bank’s capital, indicate how deposited money is used and the extent of credit risk present, according to MOF. Assets are classified into four categories: (1) unmarked—when the loan is considered sound, (2) substandard—when the loan carries above average risk, (3) doubtful—when full payment is considered doubtful and some loss is expected, and (4) loss—when the loan is considered unrecoverable. During on-site inspections, inspectors select loans to ensure a coverage ratio of at least 50 percent of a bank’s entire loan portfolio, according to MOF. Standards call for the selection of loans with large exposures over a certain amount, loans overdue beyond a certain period, and loans to companies having financial problems at the time of inspection. MOF inspectors also conduct financial analyses and interview bank management and key personnel to better understand bank policies and other matters. Since 1987, MOF has used a rating system similar to the U.S. CAMEL rating system, which bases ratings on five factors: capital, assets, management, earnings, and liquidity. 
In June 1996, MOF issued guidelines for banks’ risk management of market-related risks, which are based on guidelines established by the Basle Committee. BOJ examiners conduct on-site examinations of banks subject to the agency’s examinations about every 2 to 3 years, although the frequency of examinations can vary depending on bank size, business conditions, and MOF’s inspection schedule. BOJ examiners provide approximately 2 months advance notice of an on-site examination and typically request documents and other information in advance of their visit. Requested information commonly includes, for example, loan and deposit balances for each branch, data on client bankruptcies, and internal investment policies. BOJ examiners request additional information from banks with international operations on such matters as the condition of foreign real estate loans and earnings from their international banking activities. Since BOJ obtains bank information in advance of visits, its on-site examinations generally require less time than do MOF inspections, according to bank officials. Examinations of city banks typically take 3 to 4 weeks for one or two senior examiners, whom BOJ calls chief supervisors, and ten examiners. In comparison, regional bank examinations take 2 to 3 weeks for one or two chief supervisors and four or five examiners. For shinkin banks, comparable examinations take 1 to 2 weeks for a chief supervisor and three examiners. As part of the examination process, BOJ examiners place their main emphasis on checking a bank’s overall risk management, including policies and procedures. Examiners use a checklist for risk management developed in 1987 and later completely revised in 1996 to reflect the changing financial environment. They also check the bank’s compliance with MOF regulations related to financial soundness, such as those dealing with its minimum capital ratio requirements and large loan exposures to a single party. 
As for other regulations, such as those on business area or product, BOJ reviews them from a risk management viewpoint rather than from a compliance perspective. BOJ’s examination process has two key components: (1) a preexamination analysis and (2) fieldwork. The initial preexamination analysis is used to identify a bank’s primary activities and to focus on potential problem areas. As part of this analysis, examiners look at bank operations from a risk management perspective, including lending, funding, internal controls, and international activities. During the fieldwork component, which consists of the actual on-site examination, examiners meet with the bank’s senior executives to review policies and discuss problems. They assess asset quality by (1) evaluating individual loans, (2) holding discussions with loan officers, and (3) reviewing the credit files of borrowers and other related documents. BOJ officials told us that examiners typically evaluate about one-half of a bank’s total loans. BOJ selects loans for review from three categories: (1) insider loans, (2) marked loans, and (3) large loans. During the examination, loans are classified as to their quality using procedures similar to those used for MOF’s classification. As part of their fieldwork at a typical bank’s head office and selected branches, BOJ examiners review the bank’s daily operations for reliability. They review cash on hand, accounting books, and other financial documents. In addition, they assess the bank’s management of risk related to credit, interest rates, and foreign exchange. At the completion of this process, BOJ chief supervisors give an overall evaluation to the bank management regarding the bank’s condition, as well as provide recommendations to improve risk management. In addition to regular full-scope examinations, BOJ periodically conducts special examinations of particular aspects of bank operations. 
A recent example is the special examination of Daiwa Bank, which primarily involved investigating the case and ascertaining risk-management deficiencies in the bank’s New York branch trading operations. Another recent BOJ special examination focused on the use of operational controls and the management of market risk by the New York branches of leading Japanese banks. BOJ also conducts on-site examinations of securities firms that have current accounts with it. During these examinations, examiners check such indicators of overall financial conditions as the firm’s risk-management policies and procedures, asset quality, and earnings. Such examinations, which take 2 or 3 weeks, are usually conducted every 2 to 3 years by one or two chief examiners and four to six examiners. Securities subsidiaries of banks are often examined at the same time the parent bank is examined. When MOF and BOJ complete their inspection or examination, they meet with senior bank officials to discuss their findings and recommendations for improvement. Both agencies regard these individual discussions with bank management at the completion of their work as an essential method for communicating inspection or examination concerns. Typically, MOF’s chief inspector meets with the bank’s management to discuss findings at the conclusion of an inspection. This meeting is an opportunity both for the chief inspector to express his opinions informally and for the bank’s management to provide comments. Following this, an official inspection report is prepared and reviewed by senior MOF officials. MOF then issues an official conclusion in the form of a letter or an administrative order, which is given to the bank along with a copy of the inspection report. The conclusion, when appropriate, identifies areas needing improvement and provides guidance for the bank. MOF sometimes requests an improvement plan and periodic reports if the situation warrants such follow-up actions. 
At BOJ, periodic, interim, and closing meetings are attended by both examiners and senior bank management. According to BOJ officials, interim meetings are held to minimize later misunderstandings. At the closing meeting, BOJ examiners discuss examination results to highlight identified problems and to provide recommendations and supervisory guidance. A written report is subsequently shared with the bank’s senior management. MOF and BOJ independently conduct their own off-site monitoring, which typically involves analyses of information about banks under their jurisdictions. Information is obtained through periodic reports submitted by banks and frequent contacts with bank personnel and management. Reports submitted by Japanese banks play a key part in MOF’s and BOJ’s bank monitoring. Under the 1981 Banking Law, each bank in Japan is required to submit an interim banking report and an annual banking report to MOF describing its business activities and financial position. Interim and annual reports are also submitted to BOJ. Both MOF and BOJ may also require additional information as needed. Annual reports provide the most extensive information. They are to include certain detailed schedules on securities, loans, fixed assets, commitments and underlying capital, total amount of domestic and foreign drafts remitted and received, and total amount of foreign currency bought and sold. Interim reports, which are submitted on a semiannual basis, provide less extensive information on a bank’s activities and financial position. In addition, banks must report certain information to MOF on a more frequent basis that ranges from daily to quarterly. Information on a bank’s trading activities, for example, is typically provided to MOF monthly and quarterly, according to MOF officials. BOJ also requires each institution to file periodic financial reports. For the most part, MOF and BOJ do not require banks to file reports electronically. 
However, BOJ does gather computer-generated data from banks on a monthly basis. Currently, none of the information gathered from routine reports or daily monitoring is accumulated in an early warning system to identify banks that may be in trouble. However, MOF officials said the ministry is developing a computerized system that is to accumulate information from banks, which would serve as an early warning system. MOF and BOJ officials told us they rely a great deal on frequent contacts with bank personnel and management during which useful information is exchanged. During informal meetings, which are held as needed, MOF and BOJ officials are able to provide guidance or advice while staying abreast of developments at individual banks. Meeting topics can include, but are not limited to, bank liquidity, overall business activities, new product development, and corrective actions. Although MOF and BOJ at times share information informally on a case-by-case basis, there is no legal or formal requirement for MOF or BOJ to share supervisory information with each other. In fact, MOF’s staff are bound by law to maintain confidentiality with respect to information gained in the course of their duties or by virtue of their position in the government. On the other hand, BOJ’s staff are not bound by law to maintain confidentiality with respect to information gained in the course of their duties. While MOF’s staff are restricted from sharing information regularly with BOJ’s staff, MOF may disclose information in those cases in which circumstances warrant such actions. As a result, on-site monitoring results ordinarily are not shared, unless problems arise requiring joint action by MOF and BOJ. BOJ officials explained that its examination results are considered proprietary, and that MOF respects this proviso. For serious problems requiring supervisory coordination, MOF typically assumes responsibility for coordination and exchanges of information, according to MOF and BOJ officials. 
In addition, MOF and BOJ officials told us they also communicate through daily telephone calls and informal meetings. Independent bank audits by CPAs have not historically played a major role in the supervision of Japanese banks, according to MOF and BOJ officials. MOF and BOJ have not typically used internal audits by statutory auditors. However, use of independent external audits by MOF appears likely to increase with the introduction of new legislation to improve oversight of the banking system. Japanese banks with capital stock totaling at least 500 million yen ($4.7 million), or with total liabilities of 20 billion yen ($188 million) or more, are required by Japan’s Commercial Code to undergo annual audits by an independent certified public accountant. Such audits must be undertaken prior to the bank’s annual shareholders’ meeting, which is typically held within 3 months of the end of the company’s financial year. Prior to World War II, independent or external audits were not required. However, corporations offering securities to the public became subject to mandatory annual audits by CPAs with passage of the Securities and Exchange Law in 1948. The new requirement grew out of the postwar demand for business reforms and corporate disclosures and in response to the introduction of foreign capital for postwar economic development. Subsequent amendments to the Commercial Code in 1974 and 1981 extended the auditing requirement to other types of corporations. Independent auditors are required to certify in their reports that (1) the balance sheet and profit and loss statement fairly present the bank’s financial position and the results of its operations; (2) proposed uses of retained earnings and accounting matters in the business report are presented in conformance with applicable laws and articles of incorporation; and (3) accounting supplementary schedules present correct data and are in accordance with provisions of the Commercial Code. 
In addition to significantly enhancing the CPA’s role in the Japanese corporate system, the revised Commercial Code also required every bank to appoint statutory auditors. Statutory auditors are responsible for (1) auditing the bank’s accounting records and (2) monitoring the activities of its directors. Japanese Institute of Certified Public Accountants (JICPA) officials told us that statutory auditors rely on the results of the audits performed by CPAs on a bank’s accounting records. These audits and monitoring activities must be completed prior to the annual general meeting of the shareholders. Under the revised code, statutory auditors are considered “members” of the bank, but they cannot be employees or directors of the bank or its subsidiaries. According to accounting officials, statutory auditors, who receive salaries from the banks they audit, are often retired bank employees or former bank managers. The revised code does not require statutory auditors to have specific qualifications, and few are CPAs. Several independent auditors said the independence of statutory auditors is often compromised by their prior relationship with the bank being audited and their lack of auditing expertise. MOF and BOJ officials told us they do not rely on reports prepared by independent or statutory auditors. They said they depend instead on their own contacts with banks and their own monitoring activities. Our discussions with the Japanese Institute of Certified Public Accountants confirmed that CPAs have little contact with MOF or BOJ. As mentioned in chapter 2, legislative measures have been enacted that are designed to strengthen the supervisory oversight of banks. One provision requires increased use of external audits to ensure sound management of certain segments of the banking industry. BOJ and MOF have other financial system responsibilities in addition to their regulatory and/or safety and soundness responsibilities. 
BOJ is responsible for maintaining liquidity, serving as the lender of last resort, and providing funds transfer service. Both BOJ and MOF share responsibility for managing financial crises and for participating in international forums. A special corporation—the Deposit Insurance Corporation—administers the insurance system that protects deposits in Japanese banks. BOJ’s statutory responsibilities for monetary policy are based on the 1942 Bank of Japan Law. As the nation’s central bank, BOJ influences the nation’s money supply and interest rates to maintain adequate market liquidity and to help provide a basis for sustained economic growth. It also sets commercial bank reserve requirements and participates directly in financial markets by buying and selling securities and bills at market prices to influence the money supply and money markets and to ensure the smooth functioning of the financial system. As lender of last resort, BOJ can provide liquidity when an institution has severe difficulties obtaining sufficient funds from the market and such liquidity is needed. However, BOJ is expected to exercise discretion in deciding whether to extend loans to failing financial institutions. In an October 1994 statement, the Governor of BOJ stated that the central bank should only serve as lender of last resort for those cases in which an institution’s liquidity shortage could threaten the stability of the entire financial system. According to BOJ officials, in certain rare cases and with special approval, BOJ has provided liquidity without eligible collateral. According to BOJ officials, BOJ’s function as lender of last resort basically involves its providing liquidity to troubled financial institutions or to the financial system, to prevent a systemic crisis. 
They explained that the following four conditions should be met before it can carry out this function: (1) there must be a strong likelihood that systemic risk will materialize; (2) central bank financial support must be indispensable for the successful disposal of a failed financial institution; (3) all parties responsible for the institution’s problems must be penalized so as to avoid the emergence of moral hazard; and (4) the financial soundness of the central bank must be maintained. BOJ also plays a key role in clearing payments. The main payment system in Japan is the bill and check clearing and domestic funds transfer system, which is operated by private institutions. Local bankers associations operate the check clearinghouses and the Zengin data telecommunication system, which form the core of the domestic funds transfer system. BOJ cooperates with these institutions and plays a key role in the payments and settlements process by issuing bank notes and transferring funds among account holders. Banks can draw checks on BOJ or issue transfer instructions to it. In late 1988, BOJ launched a network for on-line settlements of payments called the Bank of Japan financial network system. The network, which links BOJ with hundreds of private financial institutions, provides an electronic infrastructure for operations, including funds transfer and government securities transfers. As of March 1996, BOJ data show 420 institutions had used the network for funds transfer, 266 had used it for yen settlements of foreign exchange transactions, and 432 had used it to transfer Japanese government securities. BOJ and MOF participate in the activities of numerous international organizations, including those of the Group of Seven, whose meetings they regularly attend. In addition, both attend Group of Ten meetings, such as the group’s governors’ meetings, which primarily focus on macroeconomic and monetary policy issues. 
BOJ also participates in such international organizations as the Bank for International Settlements and the International Monetary Fund. As a shareholder member, BOJ sits on all Bank for International Settlements institutional committees, according to a BOJ official. As Japan’s central bank, BOJ also cooperates and coordinates closely with other central banks on such issues as intervention in foreign exchange markets with the aim of achieving currency stability. MOF and BOJ also participate in activities of the Basle Committee on Banking Supervision, as well as those of the International Monetary Fund and the Organization for Economic Cooperation and Development. In addition, MOF’s securities supervisors attend meetings of the International Organization of Securities Commissions. MOF and BOJ work closely together to assist troubled institutions and to establish policies and plans for resolving crises. They told us that once the two agree on an overall resolution plan, BOJ typically manages cash transactions and provides liquidity when necessary. According to the Bank of Japan Law, BOJ may, with approval from MOF, conduct such activities other than its normal business as are necessary for the maintenance and fostering of the credit system. According to MOF officials, this should also include BOJ making loans to troubled institutions without eligible collateral. Close cooperation and coordination between the two agencies have resulted in MOF supporting all of BOJ’s past decisions, according to MOF officials. Although prefectural governments supervise credit cooperatives, MOF and BOJ can step in to resolve crises affecting troubled credit cooperatives. According to a MOF official, MOF and BOJ recently formulated a resolution plan to prevent a financial crisis involving the Cosmo and Kizu credit cooperatives. BOJ also provided needed liquidity to the two institutions. 
The Deposit Insurance Corporation of Japan (DIC) was established as a special corporation in 1971 to protect depositors and maintain the stability of the financial system. DIC serves these purposes by insuring individual depositors for up to 10 million yen ($93,800) and by providing financial assistance to facilitate the merger or acquisition of failing financial institutions. DIC is supervised by MOF. Institutions required to be insured include banks (city banks, regional banks, trust banks, long-term credit banks, foreign exchange banks), shinkin banks, credit cooperatives, and labor banks. Agricultural cooperatives, fishery cooperatives, and fishery production cooperatives, due to their special characteristics, are not required to be insured by DIC. Depositors at these institutions are instead protected under a separate system administered by the Savings Insurance Corporation, established in September 1973. The principal functions of DIC include the collection of insurance premiums, payment of insurance claims and advance payments, execution of financial assistance, purchase of assets from failing or failed financial institutions, and management of funds. DIC is headed by a management committee consisting of up to eight members, the corporation’s governor, and three executive directors. By law, the governor is appointed by the Finance Minister. The governor appoints the executive directors and committee members, after obtaining approval from MOF. DIC’s administration is handled by its secretariat and the Special Operations Department. The latter was established by the June 1996 amendment to DIC law. In September 1995, the DIC secretariat had a staff of 15 employees, but recent legislation substantially increased its staff. As needed, some administrative functions can be delegated to BOJ or to private financial institutions with MOF’s approval. 
In emergencies, for example, these institutions may be asked to provide staff and other assistance for the processing of claims. DIC insures member institutions through premiums levied on their insured deposits. The premium rate, which is determined by the management committee, requires MOF approval. Before April 1996, member premiums were set at 0.012 percent of insured deposits. In order to build up the deposit insurance fund in preparation for potential future insolvencies, the premium was raised four-fold to 0.048 percent. Furthermore, based on a revision of the Deposit Insurance Act, a special premium of 0.036 percent, which is to be paid into the Special Account of DIC, will be assessed for 5 years. Member institutions are required to make half of the annual insurance payments within 3 months and the rest within 9 months of the beginning of the business year. In 1995, the insurance premiums and other revenues that accumulated in the deposit insurance fund represented a small proportion of insured deposits in Japanese banks. As of March 31, 1995, according to DIC, the fund totaled 876 billion yen ($8.23 billion). The value of insured deposits on that date totaled 555.7 trillion yen ($5.2 trillion), which represented 78.2 percent of total deposits in Japanese financial institutions. At March 1995 funding levels, the deposit insurance fund reserves constituted less than 0.16 percent of insured deposits. Financial assistance to a failing institution, which must be approved by MOF, may be provided through grants, loans, deposits, purchase of assets, guarantee of liabilities, or acceptance of liabilities. As of March 1996, the total cost of disposal during the past 4 years amounted to between 2 trillion yen ($19 billion) and 2.5 trillion yen ($24 billion). The deposit insurance fund totaled about 387 billion yen ($3.63 billion) as of March 31, 1996. 
However, the premium increases required by the June 1996 legislation are expected to raise approximately 2.3 trillion yen ($22 billion) over the next 5 years, according to Japanese officials.
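The premium and reserve figures cited above can be sanity-checked with simple arithmetic. The sketch below, a purely illustrative Python calculation with variable names of our own choosing, reproduces the four-fold premium increase and the fund's coverage ratio as of March 1995:

```python
# Sanity check of the DIC figures cited in the report.
# All yen amounts are taken directly from the text.

# Premium rates, expressed as fractions of insured deposits
old_premium = 0.012 / 100            # rate before April 1996
new_premium = old_premium * 4        # "raised four-fold"
assert abs(new_premium - 0.048 / 100) < 1e-12

# Fund coverage as of March 31, 1995
fund_reserves = 876e9                # 876 billion yen
insured_deposits = 555.7e12          # 555.7 trillion yen
coverage = fund_reserves / insured_deposits
print(f"Reserves as a share of insured deposits: {coverage:.3%}")
```

At roughly 0.158 percent, the computed ratio is consistent with the report's statement that reserves were less than 0.16 percent of insured deposits.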
Pursuant to a congressional request, GAO reviewed the Japanese bank regulatory structure and its key participants, focusing on how: (1) Japanese bank regulation and supervision is organized; (2) Japan's banking oversight structure functions, particularly with respect to bank licensing, regulation, and supervision; (3) Japanese banks are monitored by their supervisors; and (4) participants handle other financial system responsibilities. GAO found that: (1) in Japan, two entities, the Ministry of Finance (MOF) and the Bank of Japan (BOJ), are responsible for ensuring the safety and soundness of the nation's banking system; (2) MOF, as a governmental agency, has the sole responsibility for licensing banking institutions and for developing and enforcing banking regulations; (3) in addition to its power to order business suspensions and to rescind a bank's license, MOF can seek the imposition of fines, and, in some cases, imprisonment as enforcement measures; (4) to fulfill its responsibility stipulated in the Bank of Japan Law, BOJ has contractual arrangements with 700 financial institutions, including all commercial banks, that allow it to examine these financial institutions and provide advice; (5) over the period 1990 to 1994, MOF and BOJ have examined approximately 500 banks annually, and, although MOF and BOJ do not regularly share information obtained during their separate on-site monitoring visits to the same banks, they do work together on a case-by-case basis to resolve crisis situations; (6) in connection with its responsibility to maintain the financial system's stability, BOJ is the lender of last resort and, as the central bank, BOJ can provide funds to banks in trouble or to the system as a whole if there is no alternative financial provider of liquidity to prevent a systemic crisis, and such liquidity is needed; (7) under the Bank of Japan Law, BOJ sets monetary policy and the interest rate at which it loans or discounts bills for its client banks; 
and (8) MOF and BOJ share responsibilities for such functions as failure resolution and representing Japan's interests in international forums.
The Postal Service is the nation’s largest civilian employer with approximately 861,000 employees as of the end of fiscal year 1996, most of whom process and deliver mail and provide postal products and services to customers, such as selling stamps and shipping parcels. According to the Service’s database, the total number of postal employees has increased from about 818,000 employees at the end of fiscal year 1993 to about 861,000 employees at the end of fiscal year 1996, an increase of about 5 percent. As shown in table 1, of the approximately 861,000 postal employees, 86 percent were career employees and 14 percent were noncareer employees. Most postal employees were represented by four labor unions and were called “bargaining unit” or “craft” employees. As shown in table 2, the four unions that represented the interests of most bargaining unit employees included (1) the American Postal Workers Union (APWU), (2) the National Association of Letter Carriers (NALC), (3) the National Postal Mail Handlers Union (Mail Handlers), and (4) the National Rural Letter Carriers’ Association (Rural Carriers). The two largest unions are APWU and NALC. Although union membership is voluntary, approximately 80 percent of those represented by the four major unions have joined and pay dues. Also, within the Postal Service, supervisors, postmasters, and other managerial nonbargaining personnel are represented by three management associations, including (1) the National Association of Postal Supervisors (NAPS), (2) the National Association of Postmasters of the United States (NAPUS), and (3) the National League of Postmasters (the League). Unlike craft unions, management associations cannot bargain with postal management. However, the Postal Service is required under the Postal Reorganization Act (PRA) of 1970 to consult with and recognize these associations. 
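The roughly 5-percent growth figure cited above follows directly from the two employment counts in the Service's database. A brief illustrative calculation in Python (variable names are our own):

```python
# Check of the Postal Service employment growth cited in the report.
employees_fy1993 = 818_000   # approximate total, end of fiscal year 1993
employees_fy1996 = 861_000   # approximate total, end of fiscal year 1996

growth = (employees_fy1996 - employees_fy1993) / employees_fy1993
print(f"Growth, FY 1993 to FY 1996: {growth:.1%}")
```

The result, about 5.3 percent, is consistent with the report's "an increase of about 5 percent."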
NAPS represents all supervisors and lower level managers, except those at headquarters and area offices, for a total of about 35,000 employees as of the end of fiscal year 1996. Also, as of the end of fiscal year 1996, approximately 26,000 postmasters and installation heads were represented by NAPUS and the League. Since 1970, many postmasters have belonged to both organizations, which address issues of interest to all postmasters. In September 1994, we reported that various labor-management relations problems persisted on the workroom floor of postal facilities. We found that such problems were long-standing and had multiple causes that were related to adversarial employee, management, and union attitudes; autocratic management styles; and inappropriate and inadequate performance management systems. In part, these problems were identified through our analysis of the results of an employee opinion survey administered by the Service in 1992 and 1993, in which employees expressed their opinions about its strengths and shortcomings as an employer. Generally, craft employees believed that managers and supervisors did not treat employees with respect or dignity and that the organization was insensitive to individual needs and concerns. The concerns of supervisors and craft employees who worked in mail processing plants focused mainly on (1) the insensitive treatment of employees who were late or absent from work; (2) the lack of employee participation in decisions affecting their work; and (3) the perception that some employees were not held accountable for their performance, leading to perceptions of disparate treatment. 
Also, managers, supervisors, and craft employees expressed dissatisfaction with the Service’s performance management and recognition and reward systems because they generally believed that (1) performing their jobs well just got them more work, (2) high levels of performance were not adequately recognized or rewarded, and (3) poor performance was too often tolerated. In 1994, we reported that these problems had not been adequately dealt with, mainly because labor and postal management leadership at the national and local levels were unable to work together to find solutions. We also reported that the effects of such problems were multiple and included poor quality of work life for postal employees and higher mail processing and delivery costs for the Postal Service. Furthermore, in our 1994 report, we stated that despite the efforts of the Service and its major labor unions and management associations, attempts to improve labor-management relations on the workroom floor had met with limited success. We recommended in the report that the Service take various actions to try to improve employees’ working conditions and its overall performance. Generally, the recommendations included provisions such as the following. Improve labor-management cooperation by having the Service, the four unions, and three management associations develop and sign a long-term (at least 10 years) framework agreement that would establish the overall objectives and approaches for demonstrating improvements in the workplace climate. Also, to help ensure that such an agreement can be reached in a timely manner, consider arranging for outside assistance to learn alternative negotiation techniques that could help resolve disputes outside of binding arbitration. 
Improve the workplace environment by training supervisors to promote teamwork, recognize and reward good performance, and deal effectively with poor performers; and by training employees in team participation efforts that are focused on serving the customer through the continuous improvement of unit operations. Establish employee incentives by recognizing and rewarding employees and work units on the basis of performance. Improve mail processing and delivery operations by testing various approaches for improving working relations, operations, and service quality and evaluating the results of such tests. The objectives of our review were to (1) determine the status and results of the Postal Service’s progress in improving various labor-management relations problems identified in our 1994 report, including how the Service implemented 10 specific improvement initiatives; and (2) identify any approaches that could help the Service and its unions and management associations achieve consensus on how to deal with the problems we discussed in our 1994 report. To identify the improvement initiatives mentioned in the first objective, we reviewed various GAO and postal documents, including our 1994 report, the unions’ collective bargaining agreements, and documents prepared by the Service that described the goals and results of specific improvement initiatives. Using this information, we developed a list of 32 initiatives that the Service, the 4 labor unions, and 3 management associations had piloted or implemented to try to improve the postal workplace environment. Given time and resource limitations, we determined that detailed follow-up on all 32 initiatives would be impractical. Thus, starting with the list of 32 initiatives, we established criteria that we believed could help us select specific initiatives from the list that warranted additional followup to determine their status and results. 
Generally, such criteria were based on (1) the results of discussions on the 32 initiatives with the Postal Service and its unions and management associations, and (2) the extent to which we determined that various initiatives had the potential to address the recommendations in our 1994 report. We discussed the list of 32 initiatives with officials who represented the Service and its unions and management associations to ensure that we had (1) appropriately identified all the initiatives that should be included on our list, and (2) described the initiatives as thoroughly and accurately as possible. The Service and the unions and management associations generally agreed that our list of 32 initiatives included all known postal improvement efforts that had been piloted or implemented. Also, these organizations provided us with additional comments and perspective on the descriptions of specific initiatives. We reviewed the recommendations in our 1994 report to determine the extent to which the 32 initiatives had the potential to address the recommendations. Using the information about the initiatives that we obtained from our discussions with the Postal Service, the unions, and the management associations, we focused our work efforts on 10 of the 32 initiatives that in our judgment appeared to have significant potential to address some of the Service’s labor-management relations problems that we identified, such as the difficulties experienced by supervisors and employees on the workroom floors of various postal facilities. To determine the status and results of the 10 initiatives, we visited the national Postal Service headquarters in Washington, D.C., where we interviewed key postal officials who were responsible for establishing, implementing, and monitoring various labor-management improvement initiatives. These officials included the Vice-Presidents responsible for Labor Relations, Human Resources, and Quality. 
We also interviewed program officials in these offices to obtain more detailed information on the goals and results of specific initiatives. Furthermore, to obtain information on status and results from officials involved in implementing the 10 initiatives, we spoke with various postal field officials in 4 area offices—the Mid-Atlantic, Northeast, Southwest, and Western areas. These locations were selected because various initiatives had recently been piloted or implemented in these areas. Also, our staff from the Dallas and Denver regional offices were available to visit these areas and discuss such initiatives in person with responsible postal officials. At these locations, we interviewed the officials who were most knowledgeable about labor-management relations activities in the area offices, including the area vice-presidents, the managers for human resources, and labor relations specialists. Also, within the four areas, we interviewed postal officials responsible for (1) processing and delivering mail, which included the managers of processing and distribution plants and managers of remote encoding centers (RECs); and (2) providing services to postal customers, which included district office managers. These officials were close to the activities performed on the workroom floor of postal facilities, which is where the labor-management relations problems that we identified in our 1994 report had become evident. In addition, to address the first objective, we interviewed various union and management association representatives, including national leaders located in the Washington, D.C., area and local representatives in the four area offices we visited. We interviewed these officials to gain their views and insights on (1) the reasons for the persistence of various labor-management relations problems; and (2) the Service’s efforts to implement the 10 improvement initiatives, some of which were intended to address such problems. 
At the national level, we spoke with the presidents of APWU and NALC as well as the presidents of the Mail Handlers and Rural Carriers unions. In addition, we interviewed the presidents of NAPS, NAPUS, and the League. At the local level, we interviewed various union representatives, including national business agents responsible for union activities in the states covered by the four area offices, local union presidents, and shop stewards. We also spoke with local representatives of the three management associations. As mentioned in the first objective, to determine the overall extent to which the Postal Service and its unions and management associations had progressed in addressing persistent labor-management relations problems, we obtained information on various events that had occurred since the issuance of our 1994 report. Specifically, this information included (1) the results of the most recent contract negotiations between the Service and each of the four major labor unions; (2) data related to postal employee grievances; and (3) efforts by the Service and the unions and management associations to address the recommendations in our 1994 report, such as the Postmaster General’s (PMG) invitation to the other seven organizations to attend a labor-management relations summit meeting and the implementation of various improvement initiatives, including their status and results. To address the second objective, we monitored congressional activities that occurred since the issuance of our 1994 report, including the annual oversight hearings on the Postal Service’s operations required by PRA. In addition, we reviewed pending legislation intended to reform postal laws that was developed by the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, and introduced in June 1996, and again in January 1997 as H.R. 22. 
We also reviewed the sections of the Government Performance and Results Act of 1993 (referred to as the Results Act) related to the Postal Service, as well as GAO and congressional documents that provided guidance on implementing the requirements of the Results Act. Finally, to obtain more information on how the Service was using a third party to serve as a facilitator in labor-management discussions as was recommended in our 1994 report, we interviewed the Director of the Federal Mediation and Conciliation Service (FMCS). We requested comments on a draft of this report from the PMG; the presidents of the four labor unions (APWU, NALC, Mail Handlers, and Rural Carriers) and the three management associations (NAPS, NAPUS, and the League); and the Director of FMCS. Of the nine organizations from which we requested comments, six provided written comments, including the Service, the four unions, and one of the three management associations (the League). These written comments are reprinted in appendixes II through VII. The remaining three organizations—FMCS, NAPS, and NAPUS—provided oral comments. The comments are discussed in appropriate sections throughout the report and at the end of the report. We conducted our review from June 1996 through May 1997 in accordance with generally accepted government auditing standards. Since our 1994 report was issued, the Postal Service and its unions and management associations have made little progress in improving long-standing labor-management relations problems. These problems have generally contributed to a sometimes contentious work environment and lower productivity. Such problems may make it more difficult for these organizations to work together to improve the Service’s performance so that it can remain competitive in a dynamic communications market. 
According to Postal Service information, in fiscal years 1995 and 1996, the Service improved its overall financial performance as well as its mail delivery services, particularly in the delivery time of overnight First-Class Mail. For example, in fiscal year 1996, the Service reported a net income of about $1.6 billion, second only to its fiscal year 1995 net income of about $1.8 billion. The Service attributed the increased income in large part to improved control over its expenses, including savings from automation efficiencies and a restructuring and refinancing of its long-term debt. In addition, the Service reported that its national average of on-time delivery of overnight First-Class Mail reached an all-time high of 89 percent for fiscal year 1996 compared to 86 percent for fiscal year 1995. Although the Service had made financial and First-Class Mail delivery improvements, other data indicated that in some areas, its performance had not improved. For example, the rate of change in the Service’s overall productivity, known as total factor productivity (TFP), decreased in each of the last 3 fiscal years. TFP includes various performance indicators, such as usage rates of automated mail processing equipment, the growth in the overall postal delivery network, the development of postal facilities, and changes in presorted and prebarcoded mail volumes. Additionally, for fiscal year 1996, the on-time delivery of 2-day and 3-day mail—at 79 and 80 percent, respectively—did not score as high as overnight delivery. Such performance has raised a concern among some postal customers that the Service’s emphasis on overnight delivery comes at the expense of 2-day and 3-day mail. Also, although its mail volume continues to grow, the Service is concerned that customers increasingly are turning to its competitors or alternative communications methods. In 1996, mail volume increased by only about one-half of the anticipated amount.
As discussed in our 1994 report, the Service recognized that it must focus on improving customer satisfaction to enhance revenue and retain market share. Also, the Service recognized that in all likelihood, customers will not remain satisfied in an environment where persistent labor-management relations problems continue to cause employee dissatisfaction. Our recent work has shown little progress within the last few years on addressing long-standing labor-management relations problems, and the sometimes adversarial relationships between postal management and union leadership at the national and local levels have persisted. These relationships have generally been characterized by (1) a continued reliance by three of the four unions on arbitration to settle their contract negotiation impasses with the Service, (2) a significant rise not only in the number of grievances that have been appealed to higher levels but also in the number of grievances awaiting arbitration, and (3) the inability of the Service and the other seven organizations to convene a labor-management relations summit to discuss problems and explore solutions. Various postal, union, and management association officials whom we interviewed said that the problems persist primarily because the leaders of these organizations have been unable to agree on common approaches to solving the problems. As a result, our 1994 recommendation for establishing a framework agreement of common goals and approaches that could help cascade positive working principles and values from top postal, union, and management association officials down throughout the Service’s approximately 38,000 postal facilities nationwide has yet to be implemented. In our 1994 report, we discussed the occurrence of past contract negotiations, which generally took place at the national level between the Service and the four labor unions every 3 or 4 years. 
As far back as 1978, interest arbitration has been used to resolve bargaining deadlocks that occurred during contract negotiations for three of the four unions, including APWU, NALC, and Mail Handlers. Specifically, interest arbitration occurred in 1978, 1984, and 1990 with APWU and NALC, and in 1981 with Mail Handlers. The most recent negotiations occurred for contracts that expired in November 1994 for APWU, NALC, and Mail Handlers, during which interest arbitration was used to settle bargaining deadlocks. In the case of the Rural Carriers, whose contract expired in November 1995, negotiations resulted in the establishment of a new contract without the use of interest arbitration. With APWU, NALC, and the Mail Handlers, the issues that arose in interest arbitration over their most recent contracts were similar to issues that had surfaced in previous contract negotiations. The issues focused primarily on the unions’ push for wage and benefit increases and job security, in contrast to postal management’s push for cost-cutting and flexibility in hiring practices. According to a postal official, such negotiations over old issues that continually resurface have at times been bitter and damaging to the ongoing relationship between the Service and union leadership at the national level. Union officials also told us that a new issue—the contracting out of specific postal functions, also known as outsourcing—has caused the unions a great deal of concern, because they believe that it could affect job security for employees. In his comments on a draft of this report, the president of the Rural Carriers union stated that for the most recent collective bargaining agreement, the negotiating team, including postal and union representatives, held joint training sessions across the country and invited various state and local postal management and craft representatives to participate in the training.
The Rural Carriers president believed that this training helped the parties to better negotiate and reach agreement on the language that was included in the most recent contract, which in this instance eliminated the need for the use of an outside arbitrator. Also, the president believed that the training helped provide both union and postal management officials a more thorough understanding of the contract’s requirements. In our September 1994 report, we discussed the problems associated with the grievance/arbitration process, which is the primary mechanism for craft employees to voice work-related concerns. As defined in postal labor agreements, a “grievance” is “a dispute, difference, disagreement, or complaint between the parties related to wages, hours, and conditions of employment.” In our 1994 report, the problems we described included (1) the high number of grievances being filed and the inability of postal supervisors or union stewards to resolve them at the lowest organizational level possible and (2) the large backlog of grievances awaiting arbitration. The process for resolving postal employees’ grievances is similar to that used in many private sector and other public organizations. Generally, according to labor relations experts, a process that is working effectively would result in most disputes being resolved quickly at the lowest organizational level, that is, by the supervisor, employee, and union steward who represents the employee’s interests. Employees as well as the four postal unions that represent them can initiate grievances. Depending on the type of grievance, the process may involve 4 or 5 steps, and each step generally requires the involvement of specific postal and union officials.
For instance, at each of the first 3 steps in the process, the parties that become involved include lower to higher union and postal management level officials in their respective organizations, such as post offices, mail processing and distribution centers, and area offices. Step 4 in the grievance process occurs only if either the Service or the union believes that an interpretation of the union’s collective bargaining agreement is needed, in which case, national level postal and union officials would become involved. The fifth and final step in the grievance process involves outside binding arbitration by a neutral third party. Generally, at each step in the process, the involved parties are to explore and discuss the grievance to obtain a thorough understanding of the facts. During any of the first 4 steps that occur before arbitration, the grievance may be settled by the parties. If the grievance is not settled, the Service makes a decision in favor of either postal management or the employee. If the Service denies the grievance (i.e., makes a decision in favor of management), the employee or union steward can elevate the grievance to the next higher step in the process until the last step, which concludes the process with a final and binding decision by a neutral arbitrator. Table 3 briefly describes the specific steps of the 5-step process and the key parties involved. A more detailed description of the grievance/arbitration process is included in appendix I. In our 1994 report, we highlighted issues associated with the grievance/arbitration process, including the high number of grievances that had been filed and the inability of supervisors or installation heads and union stewards to resolve them at the step 1 and 2 levels. 
The Postal Service’s national grievance arbitration database showed that in fiscal year 1994, a total of 65,062 grievances were not settled at the steps 1 and 2 levels and were appealed at the step 3 level, which involved postal management and union officials at the area office level. According to the Service, this number increased to 73,012 in fiscal year 1995 and 89,931 in fiscal year 1996. As indicated in figure 1, in fiscal year 1996, the average rate of step 3 grievances for every 100 craft employees had risen to 13, compared to fiscal year 1994, when the average rate was 10 step 3 grievances for every 100 craft employees. Also, figure 2 indicates that according to Service data, increases had occurred in the number of grievances that were awaiting arbitration by a third-party arbitrator, also referred to as backlogged grievances. Figure 2 shows that the number of backlogged grievances had increased from 36,669 in fiscal year 1994 to 69,555 in fiscal year 1996, an increase of about 90 percent. Figure 3 shows that in fiscal year 1996, the average rate of grievances awaiting arbitration had risen to 10 grievances per 100 craft employees, an increase from the average rate of 6 grievances per 100 craft employees in fiscal year 1994. Generally, the postal management and union officials we interviewed said that the total volume of grievances was too high. However, the views of postal and union officials differed on the causes of this high grievance volume. These officials told us that their views had not changed significantly since we issued our 1994 report. Generally, the officials tended to blame each other for the high volume of grievances being filed and the large number of backlogged grievances awaiting arbitration. 
In 1994, we reported that from postal management’s perspective, grievances have always been high because union stewards flooded the system with frivolous grievances to demonstrate that they were executing their responsibility to represent employees’ interests. Also, a postal official told us that he attributed the high grievance rate to what he termed an overall “entitlement mentality” on the part of craft employees who believed that they were entitled to file grievances. In contrast, union officials told us that postal management was largely responsible for the huge volume of backlogged grievances. One union official told us that the key problem was not in the filing of grievances by employees but in the inability of lower level postal officials to settle disputes, especially at steps 1 and 2. This situation has often resulted in many grievances being escalated to a higher decisionmaking level and has added to the delays in obtaining such decisions. Also, an APWU official explained that postal management is generally reluctant to settle grievances awaiting arbitration because the backlog benefits postal management. The official told us that postal management can continue to violate the APWU labor agreement with impunity as long as grievances sit in the backlog awaiting an arbitration decision. In his comments, the president of the Rural Carriers union stated that he strongly encourages union members to file only meritorious grievances. The Postal Service and its unions and management associations have been unsuccessful in their attempts to convene a labor-management relations summit that was proposed by the PMG over 2 years ago. In November 1994, the Subcommittee on Federal Services, Post Office, and Civil Service of the Senate Committee on Governmental Affairs held a hearing on labor-management relations in the Postal Service that in large part focused on the information in our September 1994 report. 
Various witnesses testified at the hearing, including the PMG and the national leaders of APWU, Mail Handlers, Rural Carriers, and NAPS. The PMG extended an invitation to the leaders of the four unions and three management associations to join Service officials in a labor-management relations summit at which postal, union, and management association leaders could explore our recommendations for improving the workroom climate and determine appropriate actions to be taken. The responses from the other seven organizations to the PMG’s invitation were mixed. For instance, around January 1995, the leaders of the three management associations and the Rural Carriers union accepted the invitation. However, the union leaders for APWU, NALC, and Mail Handlers did not. They said they were waiting until the contract negotiations were completed before making a decision on the summit. At the time the invitation was extended, the contracts for these three unions had recently expired, and contract negotiations had begun. After all negotiations were completed for the three unions in April 1996, they agreed to participate in the summit. Given the difficulties initially encountered by the Service in trying to convene a summit, in February 1996, the Postal Service asked the Director of FMCS to provide mediation services to help set up the summit meeting. Also, in March 1996, the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, encouraged the FMCS Director to assist the Postal Service by providing such services. According to a postal official, in September and December 1996, the FMCS Director facilitated two presummit meetings that involved representatives from the Service, APWU, and NALC. In January 1997, another meeting was held that involved only Service, APWU, and NALC officials.
Although postal and union officials declined to reveal the specific issues that were discussed at the presummits, they told us that such issues as performance-based compensation, outsourcing of specific postal functions, and grievance resolution will continue to be major concerns. Also, in March 1997, the Director of FMCS told us that another presummit is currently being scheduled to provide the other five affected parties an opportunity to discuss similar issues with the Service. However, as of May 1997 when we completed our review, no summit involving all eight of the parties had taken place, nor was one scheduled. In his comments on a draft of this report, the Director of FMCS provided us updated information on the presummit and summit meetings. APWU, NALC, Rural Carriers, and the League also provided us their comments on the presummit and summit meetings. The Director of FMCS told us that in addition to the presummit meetings held in September and December 1996 with the Service, APWU, and NALC, another presummit meeting was held in June 1997, which was attended by officials from FMCS, the Service, the Mail Handlers and the Rural Carriers unions, NAPS, NAPUS, and the League. The purpose of the presummit was similar to the purpose of the presummit meetings previously held with APWU and NALC, which was to (1) discuss information on labor-management relations problems that was obtained by an outside contractor through interviews with various postal, union, and management association officials; and (2) determine the next steps in attempting to organize a summit meeting that would involve the Service, the four major labor unions, and the three management associations. 
Generally, the Director believed that the presummit meeting went well and that the stage is now set for what he envisions will be a summit meeting that should provide the eight organizations with a forum for openly discussing the status of labor-management relations and the steps that can be taken to help resolve problems. He also told us that discussions are currently being held with the eight organizations on proposed dates for the summit meeting. The president of APWU told us that the prospects of a summit meeting being convened were not improved when the Service unexpectedly announced its decision to contract out some Priority Mail transportation and processing services to Emery Worldwide Airlines. According to the president of APWU, after one of the presummit meetings, the PMG pledged full communication concerning the Service’s business plans. However, APWU stated that it was not consulted about this decision before it was finalized, and its representatives were disappointed because they believed that the Service did not solicit their views on the merits of such a decision. The president of NALC said that although a summit meeting has not yet been convened, GAO should not use this fact as an indicator of the extent to which labor-management relations problems exist. NALC commented that one of the reasons the summit meeting has not yet occurred was that the timing of the PMG’s suggestion for a summit in November 1994 was not appropriate, given that sensitive and difficult collective bargaining negotiations were about to begin. NALC also stated that some presummit meetings have already been held, which could achieve some positive results. In its comments, the Rural Carriers union pointed out that it was the first organization to accept the PMG’s invitation soon after it was proposed.
Like NALC, the League also commented that the PMG’s attempts to convene a summit with all the employee organizations were thwarted by contract negotiations, and since 1994, a summit involving all four unions and all three management associations has yet to take place. Since our 1994 report was issued, the Postal Service and the other seven organizations have continued their efforts to address long-standing labor-management problems by taking actions to implement specific improvement initiatives, such as the program for selecting and training new postal supervisors, known as the Associate Supervisor Program (ASP). Although many postal, union, and management association officials we spoke with believed that some of these initiatives held promise for making a positive difference in the labor-management relations climate, little information was available to measure the results of various initiatives. For the 10 initiatives that we selected for follow-up, table 4 includes brief descriptions of the initiatives, identifies the organizations that participated in the implementation of the initiatives, and indicates the recommendations in our 1994 report to which each initiative is related. As shown in table 4, all 10 initiatives required the participation of the Postal Service. However, the participation of the other seven organizations—that is, the four major labor unions and the three management associations—varied depending on the extent to which employees represented by the unions and the associations were covered by each initiative. For example, the initiative involving the mediation of grievances applied only to employees represented by APWU, because this initiative was established through the 1994 collective bargaining process that occurred between the Service and APWU. Similarly, the Delivery Redesign initiative applied only to employees represented by NALC, because this initiative focused on the work performed by city letter carriers.
In his comments on a draft of this report, the president of the League of Postmasters believed that the list of 10 initiatives in our report could be construed to mean that the League had a stronger presence in the implementation of the initiatives than was actually the case. The League mentioned that in most instances, the Service provided the League general information about the initiatives and a timetable of what was to occur in their implementation. During our discussions with Service, union, and management association officials on the 10 improvement initiatives, the officials generally agreed with the overall goals of some of the initiatives. However, the results of our work indicated that in large part, fundamental disagreements among the eight organizations on strategies for implementing specific initiatives continued to hamper their efforts to achieve these goals and improve the overall working climate for postal employees. The purpose of some of these initiatives was generally to improve labor-management relations, thereby enhancing the Service’s performance in providing postal products and services to its customers. During our review, we found that various actions had been taken to implement all 10 initiatives that we reviewed. However, we found it difficult to determine what results, if any, were achieved from 3 of the 10 initiatives primarily because the initiatives were only recently piloted or implemented. Also, for 5 of the 10 initiatives, disagreements among the involved participants on approaches for implementation generally prevented full implementation of these initiatives and full evaluation of their results. In addition, although results were available for 2 of the 10 initiatives, these initiatives were eventually discontinued, primarily because the Service and the other involved participants disagreed over how best to use the initiatives to help improve the postal workplace environment. 
For three initiatives, results were difficult to determine, primarily because they had only been recently piloted or implemented, which made it too early to fully assess their results. The three initiatives included (1) the Associate Supervisor Program (ASP); (2) the new performance-based compensation system for executives, managers, and supervisors; and (3) CustomerPerfect! In our 1994 report, we recommended that the Service select and train supervisors who could serve as facilitator/counselors and who would have the skills, experience, and interest to treat employees with respect and dignity, positively motivate employees, recognize and reward them for good work, promote teamwork, and deal effectively with poor performers. In an attempt to address this recommendation, the Service established ASP, a 16-week supervisory training program designed to ensure that candidates for postal supervisory positions were sufficiently screened and trained so that after they were placed in supervisory positions, these supervisors would have a solid foundation that could help them work well with employees. A test of ASP was completed in the St. Louis district office in the fall of 1994, after which the test was expanded to include a total of 10 pilot sites. According to a postal official, as of March 1997, about 254 candidates had completed ASP training. Most of these candidates have already been assigned to supervisory positions in various postal locations. The Service expects that by the end of fiscal year 1997, 70 of the Service’s 85 postal district offices will have graduated ASP classes or will have classes ongoing. During our review, the Service was gathering data from the 10 pilot locations to evaluate ASP. 
For example, in March 1997, according to an official from the Service’s Office of Corporate Development and Training, that office conducted a 3-day ASP workshop to obtain feedback from the program participants, including the trainers, coaches, coordinators, and supervisory candidates who attended ASP training. According to the postal official, all the participants in the workshop commented that ASP was an “incredible success.” In addition, the official told us that a San Francisco post office went from having the worst scores in productivity and in the Service’s External First-Class (EXFC) Measurement System to being one of the top post offices in the San Francisco district. The official attributed much of this improvement to the high caliber of the ASP supervisors who had been assigned to the post office. As of March 1997, the Service was still completing the last ASP pilot. Upon completion of the pilot, the Service plans to administer a written survey to all ASP participants to obtain their comments on the content of the ASP training course, including such matters as the extent to which they believe the course met its objectives and whether the ASP instructors were knowledgeable. Also, the participants are to be asked to assess how they have been able to transfer their recently learned knowledge and skills to their current supervisory positions. In addition, the Service plans to distribute a separate written survey to the managers of the new ASP supervisors. In this survey, managers are to be asked to compare the quality of the on-the-job performance of ASP supervisors to that of supervisors who had not received ASP training. Also, managers are to be asked to evaluate ASP supervisors’ communications and leadership skills as well as their ability to promote and maintain a safe working environment for employees. 
Finally, the Service plans to collect overall performance data, such as EXFC and productivity scores, to compare a specific postal facility’s performance before and after it received ASP supervisors to try to determine to what extent ASP may have affected the facility’s performance. Various postal, union, and management association officials we interviewed at some of the ASP pilot locations told us that although they believed it was too soon to evaluate the results of the program, they believed it had the potential for providing the Service with more qualified and better trained supervisors. Also, local union officials we spoke with said that they liked the additional training that is to be provided to current postal supervisors under ASP. In our 1994 report, we discussed past problems with the Service’s performance-based incentive systems for managers and supervisors. The problems concerned a system that emphasized providing these employees with merit pay and promotions for achieving a variety of productivity and budget goals. Examples of such goals included requiring supervisors to manage their assigned budgets and control unscheduled employee absences and overtime usage. However, we found that some supervisors emphasized “making their numbers” over maintaining good employee relations. To help address these problems, we recommended in 1994 that the Service provide incentives that would encourage all employees in work units to share in the tasks necessary for success and that would allow work units and employees to be recognized and rewarded primarily on the basis of corporate and unit performance. To address this recommendation, the Service established a revised compensation system in 1995 for employees under the Postal and Career Executive Service (PCES). 
Later, in 1996, the system was expanded to cover the Executive and Administrative Schedule (EAS), which includes executives, managers, and supervisors. The purpose of this system was to establish a performance-based incentive system of pay increases and bonuses that would appropriately recognize and reward employees for good performance. The amounts of such increases and bonuses would be based not only on the individual’s performance rating but also on the performance of the individual’s work unit, as well as the performance of the Service as an organization. A key aspect of the revised compensation system is called the Economic Value Added (EVA) variable pay program, which is a program intended to provide employees covered by the new compensation system with bonuses based on specific performance measurements, such as the financial performance of the Service and levels of customer satisfaction. Under EVA, in fiscal year 1996, the Service distributed a total of $169 million in bonuses to about 63,000 postal executives, managers, supervisors, postmasters, and other higher level nonbargaining unit employees. Nationally, the average bonus paid to an executive under PCES amounted to $12,500. Postmasters covered by the new compensation system and higher level professional, administrative, and technical employees each received a bonus that averaged $3,900. Another important aspect of the new compensation system was the inclusion of work unit and corporate measurements in EAS employees’ merit performance evaluations. For fiscal year 1997, these evaluations are required to include objectives that are aligned with an individual employee’s work unit goals. The objectives must also align with and support the Service’s corporate goals. 
According to postal officials, this change is intended to (1) enhance EAS employees’ active involvement in setting objectives to support their work units, (2) establish accountability for results, and (3) provide monetary acknowledgment of an individual employee’s contribution to the success of the work unit. Although the leaders of the three management associations supported the concept of a performance-based incentive system, two of the three associations disagreed with the Service on how this system was to be implemented. Specifically, NAPS agreed to endorse the new pay system. In contrast, officials from NAPUS refused to endorse the new pay system because they believed “it offered virtually nothing to some of our members.” Also, in its comments on a draft of this report, the League stated that it refused to endorse the new pay system because the means by which the Service implemented EVA precluded most of the Service’s postmasters, including most of the League’s members, from being eligible for bonuses. According to NAPUS and League officials, the Service determined that certain employees who were covered by the requirements of the Fair Labor Standards Act (FLSA), also known as nonexempt employees, should not be eligible to receive EVA bonuses. NAPUS and League officials mentioned that the Service’s decision eliminated about 60 percent of the employees represented by their associations because they were nonexempt employees. A postal official said that in large part, this determination was based on the results of a wage comparability study done recently for the Postal Service in which the wages of postal employees were compared to wages for employees doing similar work in the private sector. The official said that the results of the study showed that nonexempt postal employees were paid 30 to 60 percent more than employees doing similar work in the private sector. 
Also, the official said that nonexempt employees in private sector organizations with incentive pay programs are generally not eligible to participate in such programs. Furthermore, the official said that since nonexempt employees are entitled to receive overtime pay for work they perform in excess of 40 hours per week, these employees are already sufficiently compensated for their “extra” work. NAPUS and League officials also stated that many of the Service’s nonexempt employees are postmasters who are women and members of minority groups. Furthermore, the presidents of NAPUS and the League told us that within recent months, their associations had filed class-action lawsuits charging that the new compensation system discriminates against women and minorities. The lawsuits, which were filed in November 1996, were still pending as of January 1997, according to management association officials. In their comments on a draft of this report, three organizations—the Rural Carriers union, the League, and NAPS—provided us their insights into this initiative. In his comments, the president of the Rural Carriers union stated that he supported the concept of EVA but had differences with the Postal Service over the application of EVA. He mentioned that at the national level, his union has met to try to determine how the rural carriers’ current compensation system could be revised so that rural carriers could participate in EVA. The president further stated that his union was awaiting an opportunity to participate in EVA, especially since rural carriers’ individual performance goals have always been aligned with their postal units’ goals, which were established under the Service’s CustomerPerfect! system of management. 
However, the president said that due to the enormous resources that the Service has devoted to the implementation of the Delivery Redesign initiative, it has been unable to provide much assistance to the Rural Carriers union in developing any type of performance pay system in addition to the one that the rural carriers already have. The Rural Carriers president also stated that it is the individual employee who drives customer satisfaction, creates revenue, and increases productivity. As such, he believes that the performance of rural carriers in these areas is already aligned with the concepts of EVA. As previously mentioned, in his comments, the president of the League expressed his concern that less than a majority of postmasters were included under EVA, which caused the League not to support the new pay system. Also, he commented that (1) nonexempt postmasters who receive additional pay for working over 40 hours per week should not be excluded from eligibility for EVA bonuses, because such pay is due these postmasters for additional work and should not be considered a bonus; and (2) when trying to support new programs, such as EVA, the Postal Service has often used the private sector as a basis for comparing the work of postal employees to employees doing similar work in the private sector. However, the League president stated that because the Postal Service is not a private business, the Service should recognize that many postal positions are unique and cannot be compared to positions in the private sector. The president of NAPS told us that he believed some postmasters were overpaid for the work that they did, which included work that oftentimes was done by craft employees, particularly clerks, such as sorting mail and providing over-the-counter products and services to postal customers.

CustomerPerfect!
In February 1995, the Service implemented CustomerPerfect!, which has been described by the Vice President for Quality as a “management system being constructed and operated by the Postal Service as a vehicle for constructive change.” He told us that CustomerPerfect! is designed to assess and, where necessary, improve all aspects of Service operations so that it can better provide postal products and services to its customers in a competitive environment. Postal officials told us that in fiscal year 1995, two CustomerPerfect! pilots were established in Washington, D.C., and Nashville, TN. Later, in February 1996, eight additional pilot sites were added. A postal official mentioned that these pilots consisted primarily of implementing what the Service called process management, which was described as a systematic approach to continuously assessing, evaluating, and improving the design and management of core work processes, including those that facilitate the processing and delivery of mail products and services to postal customers. A key aspect of this approach involves the collection and use of various service and financial performance data, such as EXFC; EVA; and data on safety in the workplace, including postal vehicle accidents. A postal official mentioned that the Service plans to expand the process management aspect of CustomerPerfect! to all 85 postal performance clusters in fiscal year 1997. According to postal officials, CustomerPerfect! was not specifically designed to address labor-management relations problems. However, they believe it provides an opportunity for management and craft employees to work together on problem-solving teams to improve how the Service accomplishes its overall mission. Postal officials told us that they believed they had good representation from craft employees on several problem-solving teams that have been established. They further stated that all improvement initiatives should be aligned with CustomerPerfect! 
According to a postal official, in 1995, the Service offered to provide a briefing on the goals of CustomerPerfect! to the four unions and the three management associations. The official said that representatives from two of the four unions—APWU and Rural Carriers—attended the briefing, while Mail Handlers and NALC representatives declined to attend. Mail Handlers’ officials told us that they had no interest in the briefing, mainly because the Service had already made the decision to implement CustomerPerfect! and did not solicit the union’s input into the development of CustomerPerfect! NALC officials did not identify a specific reason for not attending the CustomerPerfect! briefing. However, they told us that the Service unilaterally terminated the joint Service-NALC improvement initiative called Employee Involvement (EI) and is now emphasizing CustomerPerfect! Representatives from both Mail Handlers and NALC also told us that CustomerPerfect! was forced on the unions with no attempt by the Service to solicit their input into its development. In their comments on a draft of this report, the Rural Carriers union and the League of Postmasters provided us their insights on CustomerPerfect! The president of the Rural Carriers union mentioned that he supported this initiative in concept and that many of his union members have been involved in CustomerPerfect! process management activities. Furthermore, he stated that individual performance goals for rural carriers had always been aligned with a postal unit’s corporate goals under CustomerPerfect! However, his main concern dealt with how rural carriers could participate in EVA. 
The League commented that because Service goals have been established for each performance cluster, a postal installation that achieves or exceeds its goals will more than likely not receive any recognition for such performance if it is included in a cluster with other installations that have not achieved their goals. According to the League, this situation provides little incentive for employees and is not good for morale, customer service, or the Postal Service. For five initiatives, the Service and some of the organizations, especially APWU and NALC, fundamentally disagreed on how specific improvement initiatives should be implemented. As a result, progress in implementing these initiatives was difficult to achieve. Furthermore, during our discussions with Service, union, and management association officials on the five improvement initiatives, the officials generally agreed with the overall goals of some of the initiatives. However, in large part, fundamental disagreements among the Service and some of the organizations on strategies for implementing specific initiatives continued to hamper their efforts to achieve these goals and improve the overall working climate for postal employees. The five initiatives included (1) the labor-management relations summit meeting, (2) Delivery Redesign, (3) the labor-management cooperation memorandum of understanding, (4) the mediation of employee grievances, and (5) the crew chief program. As discussed earlier in this report, the first initiative—the PMG’s proposed summit meeting—has not yet taken place, mainly because negotiations on three of the four unions’ most recent contracts caused these unions to decline to attend such a summit until the negotiations were completed. Negotiations for all four unions were not completed until April 1996. Yet, as of May 1997, when we completed our review, the PMG’s proposed summit with all eight organizations had not occurred, nor had it been scheduled. 
However, preliminary efforts to convene such a summit have occurred. They included presummit meetings in November and December 1996 with APWU and NALC, an additional meeting with APWU and NALC in January 1997, and plans for presummit meetings with the other remaining five organizations. As mentioned previously, we received comments on the summit meeting from five organizations, including FMCS, APWU, NALC, Rural Carriers, and the League. A discussion of their comments, which begins on page 20, has been included at the end of the section of the report entitled “Little Progress Has Been Made in Improving Labor-Management Relations Problems.” One of our 1994 recommendations was for the Service and the unions to jointly identify pilot sites where postal and union officials would be willing to test revised approaches for improving working relations, operations, and service quality. Specifically, we recommended that for city letter carriers, a system should be established that incorporated known positive attributes of the rural letter carrier system, including greater independence for employees in sorting and delivering mail, incentives for early completion of work, and a system of accountability for meeting delivery schedules. In our 1994 report, we said that problems experienced by city carriers were often related to (1) the close supervision imposed on city carriers, which often engendered conflicts between supervisors and carriers, mainly on the amount of time it took for carriers to do their work; and (2) the existence of performance standards for city carriers that tended to discourage carriers from doing their best and completing work quickly. Postal, union, and management association officials we interviewed generally agreed that such problems called for a revision of the city letter carrier system. As discussed in our 1994 report, both the Service and NALC have studied the city letter carrier system to determine how best to revise it. 
For instance, in 1987, the Service and NALC established a joint task force to study possible changes and improvements in how carrier assignments were designed, evaluated, and compensated. The study was to identify and examine those elements of the rural carrier system that helped avert many of the conflicts common between postal supervisors and city carriers. However, the Service and NALC were unable to reach any agreement on how to change the city carrier assignments. Consequently, in March 1994, the Service and NALC established similar but independent efforts to study possible changes to the city letter carrier system. A national NALC task force reviewed how city routes could be restructured to better serve carriers, customers, and the Service. Under consideration was a suggestion made by the NALC Vice President that NALC consider a route design similar to that used by rural carriers to better deal with changes in office functions and procedures that could threaten city carrier job opportunities. At the same time, the Service had also set up teams to study and propose alternate approaches to the city carrier system, including examining the possibility of adopting the rural carrier approach. However, we found no effort by the Service and NALC to coordinate or consolidate these two studies to address their common concerns. According to postal officials, in 1997, after numerous discussions with NALC and with no ultimate agreement on an approach, the Service decided to test some revised processes for the delivery of mail by city letter carriers. These processes are collectively known as Delivery Redesign. The Service’s plan was to use these revised processes as a basis for helping to develop a city carrier delivery system that could enhance mail delivery by (1) reducing friction between supervisors and carriers, (2) providing increased compensation for superior performance, and (3) removing existing disincentives for doing the job well. 
In addition to the current delivery process, the Service is testing three revised delivery processes at 14 selected sites. For example, some sites are to test the separate case and delivery processes, under which some carriers would do only casing while others would do only delivery. Also, one of the revised processes is to involve the Service’s implementation of performance standards, also known as standard time allowances, to structure and monitor city carrier performance at these 14 sites. However, the Service is not testing any compensation alternatives for these employees, because it needs agreement from NALC. According to an NALC official, NALC has not agreed to such alternatives, because it considers compensation for city carriers an issue that is most appropriately discussed in the collective bargaining process. A postal official told us that the testing of the revised city carrier delivery processes began in Louisville, KY, in March 1997 and will have started in the other 13 test sites by May 1997. He also told us that although NALC officials were briefed several times (May, July, and September 1996) on Delivery Redesign, they have not endorsed the testing of the revised processes. At the national level, NALC officials declined to comment on the testing; they told us that they believe the issue of delivery redesign is a subject to be decided through the collective bargaining process. However, the officials added that they do not believe that the city letter carrier delivery system should be structured similarly to the evaluated route system used by rural carriers. As we reported in 1994, rural carriers work in environments substantially different from those of city carriers. As a result, rural carriers generally have more independence in doing their work. Also, the compensation systems for rural and city carriers are different. Rural carriers are salaried workers who do not have to negotiate daily for overtime. 
City carriers are hourly workers whose daily pay can vary depending on the number of overtime hours they would be required to work to process and deliver mail on their assigned routes. Two organizations—NALC and NAPS—provided us their comments on the Delivery Redesign initiative. NALC objected to the Service’s implementation of Delivery Redesign, stating that by implementing this initiative, the Service has violated the requirements of NALC’s contract agreement regarding time and work standards for city letter carriers. Also, NALC mentioned that the Service has repeatedly rejected NALC’s invitations to study the city letter carrier system in a cooperative manner. In addition, the president of NAPS told us that he believed that the Delivery Redesign initiative could help improve the city carrier system, partly because one purpose of this initiative was to collect enough information to allow city carrier routes to be evaluated daily instead of annually, which is how rural carrier routes are currently evaluated. In November 1993, the Service and APWU signed a joint memorandum of understanding on labor-management cooperation. The memorandum included various principles that were intended to help the Service and APWU (1) establish a relationship built on mutual trust and (2) jointly explore and resolve issues of mutual interest. An example of one of the principles involved the parties’ commitment to and support of labor-management cooperation at all levels throughout the Service to ensure a productive labor relations climate, a better employee working environment, and the continued success of the Service. Another principle was a statement about the willingness of both parties to jointly pursue strategies that emphasized improving employee working conditions and satisfying the customer in terms of both service and cost. The memorandum did not include any information as to how the Service and APWU planned to measure the results of its implementation. 
The cooperation memorandum was a “quid pro quo” for another joint agreement signed at the same time, known as the Remote Barcoding System (RBCS) Memorandum of Understanding. Under this agreement, the Service agreed that it would no longer pursue contracting out for certain clerical services (i.e., keying address data) associated with the automated mail processing, or RBCS, functions. Instead, the Service agreed to keep this work in-house, to be performed primarily at remote encoding centers (RECs). During our visits to various RECs located in the field, most postal officials and union representatives told us that the cooperation memorandum did not generally make any significant difference in their ability to work well together. Rather, they told us that they believed their ability to work cooperatively was attributable primarily to the nature of the work at RECs, which had clean, office-like atmospheres, unlike facilities such as plants, which resembled manufacturing facilities. Also, employees at RECs perform similar types of work (i.e., data entry functions); at other types of postal locations, the work involves a wide range of tasks performed by different employees, including sorting mail, loading and unloading mail trucks, and serving customers. Also, REC managers we interviewed told us that because REC employees had not previously worked in the postal environment, they had no preconceived notions about labor-management relations. Both Service officials and APWU leaders agreed that the labor-management relations memorandum had not accomplished its intent of improving cooperation between the Service and APWU. They told us that the memorandum had generally not lived up to their expectations. Postal officials told us that although they and APWU officials continue to work together, they do not believe that the “far-reaching anticipated effect” of the memorandum has been achieved. 
Also, although the president of APWU stated that he considered the cooperation memorandum to be a “framework agreement” between the union and the Service, he told us that he believed the Service was not sincere when it signed the memorandum, because the Service continuously violates the spirit of the memorandum. He mentioned that a recent example of this type of violation was that the Service tried to annul both the cooperation memorandum and the RBCS memorandum in 1995. However, an interest arbitrator refused the Service’s request for annulment. In its comments on a draft of this report, APWU agreed that the memorandum had not lived up to its expectations. However, the union stated that cooperation between APWU and the Service exists, as exemplified by the recent establishment of three additional agreements with the Service. These agreements, which were signed by the Postal Service and APWU during the period May through July 1997, were intended to (1) try to significantly reduce or eliminate grievance backlogs; (2) establish a National Labor Relations Board alternative dispute resolution procedure concerning information requests; and (3) provide for the implementation of an administrative dispute resolution procedure to help resolve employee complaints about specific issues, such as pay. APWU included copies of the three agreements as enclosures to its written comments, all of which are included in appendix III. APWU believed that any assessment of the status of postal labor-management relations should include an evaluation of the impact of these agreements, despite the fact that the agreements had only recently been signed by Service and APWU officials. Because these agreements were not available during the period of our review, we could not evaluate their implementation. 
As a result of the 1994 contract negotiations, APWU and the Service agreed to include in the union’s contract a program of mediation in which parties at local installations could request assistance to help facilitate the grievance/arbitration process and improve the labor-management relationship. The purpose of this mediation program was to address the problem of too many grievances not being settled on the workroom floor. According to a postal official, the Service initially planned to use the mediation program on a test basis as a means of reducing the large backlog of grievances awaiting arbitration. To begin this test, the official told us that as of October 1996, the Service had trained a total of 113 individuals to serve as mediators who could assist in settling grievances awaiting arbitration at pilot sites that were to be selected. However, APWU officials told us that they disagreed with the Service’s plans to test the use of mediators in this manner. They believed that a massive arbitration effort was the best means of reducing the large backlog of grievances awaiting arbitration. According to APWU officials, whenever a large backlog of grievances awaiting arbitration occurs, such an effort should involve sending an arbitrator to that installation to hear all the backlogged grievances. Both postal and APWU officials told us that the details of how the mediation program will be implemented are still under discussion. However, none of the postal or APWU officials we interviewed provided any information on when these discussions were scheduled for completion. In its comments on a draft of this report, APWU stated that after the first joint agreement on mediation was included in the 1994 contract, the Service tried to move ahead and implement its own type of mediation program instead of trying to reach a joint understanding with APWU on how the program should be implemented. 
Nevertheless, as previously mentioned in our discussion on the Joint Labor-Management Cooperation Memorandum, in May 1997, APWU and the Service established another agreement that includes provisions for using various types of mediation processes to help (1) eliminate the current grievance backlog, (2) prevent future reoccurrences of such backlogs through the improvement of labor-management relations, and (3) address the root causes that generate grievances. A copy of this agreement is included in appendix III. In our 1994 report, we discussed the Service’s testing of the crew chief program, a program that was designed to allow craft employees to take greater responsibility for moving the mail. The purpose of this program was to address craft employees’ concerns that they had only limited involvement in the daily decisions affecting their work because management generally did not value their input on how to organize and accomplish the work. During 1990 interest arbitration proceedings, APWU proposed the crew chief concept because it believed the organization of postal work was outdated and inefficient and created an unnecessarily adversarial and bureaucratic work environment. The Service was not opposed to the concept but felt there were too many questions, such as how crew chiefs would be selected, that needed to be addressed before any agreement could be considered. As a result of these proceedings, the Service and APWU entered into a June 1991 Memorandum of Understanding to pilot test the crew chief program with clerk craft employees. Beginning in July 1992, a pilot of the program was conducted in a total of 12 postal locations, including 7 mail processing and distribution plants and various post offices in 5 postal districts. These sites were jointly selected by the Service and APWU from a list of sites that were willing to participate in the program. 
At the pilot sites, crew chiefs were chosen on the basis of seniority or selected by a joint committee of union and postal employees and were given 40 hours of on-site training. Each of the sites had the option of adopting an “unelection” process whereby employees could vote every 90 days to replace their crew chief. Postal supervisors were prohibited by the APWU collective bargaining agreement from doing craft work, but as a craft employee, the crew chief could work with unit employees. However, unlike supervisors, crew chiefs could not approve leave for employees or take disciplinary actions against them. In 1994, we reported that the pilot of the crew chief program was completed in March 1994. However, according to program participants, including managers and supervisors as well as crew chiefs whom we interviewed at specific postal sites, the results of the pilot were mixed. On the one hand, some program participants told us that they believed craft employees were generally more comfortable taking instructions from, and expressing their concerns to, crew chiefs rather than to supervisors. Participants also told us that crew chief positions alleviated some of the increased pressure on supervisors that resulted from the Service’s 1992 reduction in supervisory staffing. However, on the other hand, we found that the crew chief program did not address some important issues that caused workfloor tensions between supervisors and employees. Specifically, the crew chief program did not give all employees more control over their work processes; it empowered only the crew chief. Also, this program did not provide any new incentives for team performance or procedures for holding employees and supervisors accountable for poor performance. As discussed in our 1994 report, supervisors and crew chiefs often did not fully understand their respective roles and responsibilities. 
Supervisors and crew chiefs told us that the duties that supervisors allowed crew chiefs to perform varied significantly among the postal pilot sites and also among the work tours at specific sites. They also said that selecting the crew chief on the basis of seniority did not ensure that the best-qualified person was selected for the position. Some supervisors perceived crew chiefs as a threat to their job security, so they bypassed the crew chiefs and dealt directly with employees. Also, NAPS did not support the crew chief program, mainly because its president considered crew chiefs to be another layer of management. The existing supervisors at the crew chief test sites were left in place, and the Service did not redefine their roles in a self-managed work environment. In recent interviews, a postal official said that although the Service believed that crew chiefs in post offices generally had a positive effect on postal operations, it did not believe that similar positive outcomes were evident in the plant locations that used crew chiefs. Furthermore, this official told us that after the completion of the pilot, the topic of crew chiefs was set aside because of the 1994 contract negotiations with APWU. He also told us that after the negotiations were completed, discussions began again on the results of the crew chief pilot. However, according to postal and APWU officials, they were still evaluating these results as of February 1997. Two employee organizations—APWU and NAPS—provided us their comments on the crew chief program. According to APWU, a study of the program by an individual at Wayne State University revealed that morale and job satisfaction had improved at virtually all the sites that used crew chiefs and that such improvements were more evident at postal installations that provided retail services than at mail processing installations. 
Also, APWU mentioned that the Service still resists the crew chief program because APWU believes that the Service is intent on retaining what APWU termed “. . . the same bureaucracy and administrative hierarchy that has existed since reorganization with all its consequent ramifications for continued ’contentiousness’.” APWU stated that it considered the crew chief program to be successful and expressed considerable concern that the Service still resisted it. Moreover, APWU commented that we ignored the fact that crew chiefs—also referred to by APWU as negotiated group leaders—were being successfully used at RECs, and the overall performance of the RECs has exceeded expectations. However, our purpose for including RECs in our review was to determine the extent to which the joint labor-management cooperation memorandum had been implemented, not to review the overall operations of RECs. Thus, we did not review the use of crew chiefs or negotiated group leaders at RECs or the overall performance of RECs. The president of NAPS also commented on the crew chief program, stating that his organization generally did not favor the program, mainly because it empowered only one person on the mail processing team—the crew chief, who often functioned as a second supervisor in addition to the team’s primary supervisor. The president believed that all employees on a mail processing team should be empowered to work together to do whatever it takes to process and distribute the mail efficiently and that only one team supervisor was needed to coordinate mail processing and distribution activities. By empowering all the team’s employees in this manner, the NAPS president believed that a crew chief was not needed. For two initiatives, efforts to continue implementing them were hampered primarily by disagreements among the Service and the other involved participants over how best to use the initiatives to help improve the postal workplace environment. 
Also, according to postal officials, a lack of union participation in one of the two initiatives generally caused the Service to discontinue its use. The two initiatives included (1) the employee opinion survey (EOS) and (2) the Employee Involvement (EI) program. The nationwide annual EOS, which began in 1992 and continued through 1995, was a voluntary survey designed to gather the opinions of all postal employees on the Postal Service’s strengths and shortcomings as an employer. Postal officials told us that such opinions have been useful in helping the Service determine the extent of labor-management problems throughout the organization and make efforts to address such problems. According to postal officials, problems with the EOS arose during negotiations on some of the 1994 union contracts. Both postal and union officials stated that during those negotiations, the Service used our 1994 report, which included the results of the 1992 and 1993 EOS, in its discussion of various contract issues with three unions (APWU, Mail Handlers, and NALC). In our 1994 report, we found that past EOS results indicated that many mail processing and distribution employees who responded to the survey said that they (1) were generally satisfied with their pay and benefits, (2) liked the work they did, and (3) were proud to work for the Postal Service. However, a postal official stated that the Service’s use of our findings, which were partially based on the EOS results, caused problems with some union officials. He told us that NALC boycotted the 1995 EOS because it believed EOS was inappropriately used during the 1994 contract negotiations. According to postal officials, NALC and APWU encouraged their members not to complete future surveys. 
Also, the officials told us that although the Mail Handlers and Rural Carriers unions did not urge their members to boycott future surveys, the resistance by APWU and NALC members was enough to skew the results of the EOS and render it almost useless. This action by the unions led to the discontinuance of the EOS in 1996. Also, officials from a management association told us that they did not believe the results of employee surveys should be used in determining management pay levels, because they believed craft employees had manipulated, and would continue to manipulate, surveys to discredit their supervisors. In their comments on a draft of this report, four organizations—APWU, NALC, Mail Handlers, and the League—provided us their insights on EOS. Three of the four organizations—APWU, NALC, and Mail Handlers—supported neither the implementation of EOS nor the use of its results. Specifically, these three organizations objected to what they believed was the Service’s inappropriate use of EOS results as a basis for justifying its position in collective bargaining. APWU stated that it generally does not object to employee surveys and did not object to EOS until postal officials began using the survey’s results in the 1994 contract negotiations to justify their bargaining positions, which in part led to the APWU boycott of the 1995 EOS. NALC stated that although surveys such as EOS can be useful tools, they can produce (1) data that can be manipulated, (2) results that can be misinterpreted, and (3) conclusions that may be inappropriately used. Although NALC stated that it was willing to work with the Service in developing and implementing an employee survey, it believed that the Service’s unilateral implementation of EOS and its inappropriate use of results during contract negotiations undermined the credibility of EOS. 
Also, Mail Handlers stated that during 1994 contract negotiations, the Service used EOS results to support its position that union members did not need increased wages and benefits. As a result, in July 1995, the Mail Handlers union stated that it adopted a resolution, which included its reasons for objecting to EOS. According to the Mail Handlers union, the resolution stated that Mail Handlers did not support EOS and requested that those of its members who chose to complete the 1995 EOS should do so in a manner that would render it useless. In addition, the League commented that although the Service implied that EOS was discontinued because of a lack of union participation, the League understood that it was because both the Service and the unions had used EOS data to support their positions on various issues such as pay and benefits. As discussed in our 1994 report, the Employee Involvement (EI) initiative began in 1982 and was designed to end or alleviate the adversarial relationship in the workplace climate. Through the implementation of EI, the Service and NALC intended to (1) redirect postal management away from the traditional authoritarian practices toward a style that would encourage employee involvement and (2) enhance the dignity of postal employees by providing them with a chance for self-fulfillment in their work. According to a postal official, EI was discontinued, primarily because it no longer contributed significantly to the goals of the Service and was unable to address the root causes of conflict in the workplace or foster the empowerment of city letter carriers. The postal official told us that when EI was first established in 1981, it accomplished some positive results in the workplace. However, in recent years, EI has not helped to improve the postal workplace as much as it once did. 
The official told us that a key reason was that for the past 3 years, all joint EI meetings between Service and NALC officials were cancelled due to negotiations over NALC’s most recent contract. The official also told us that during 1994 contract negotiations, the Service and NALC disagreed over various aspects of EI, including what type of work the 400 trained EI facilitators should perform. According to the official, these facilitators were working in various postal field locations as full-time EI facilitators, which prevented them from performing functions directly related to mail processing and delivery. NALC disagreed with the Service’s reasons for discontinuing EI. An NALC official characterized EI as a remarkable achievement in labor-management cooperation. He mentioned that EI represented one of the Service’s and NALC’s earliest efforts to replace the traditional authoritarian and hierarchical work processes in the postal workplace climate with a system of increased cooperation and enhanced worker empowerment. Although the Service decided to discontinue its support of EI, the NALC official told us that the union intends to continue working to reinstate the EI program. In its comments on a draft of this report, NALC reiterated its concern about the Service’s April 1996 termination of EI, which NALC termed “. . . an extraordinarily regressive act.” Shortly after EI was terminated, the president of NALC mentioned that he had written to the Vice President of Labor Relations for the Postal Service to protest the action. Also, the NALC president stated that he believed the timing of EI’s termination, which coincided with the time that the Delivery Redesign initiative was begun, indicated that in its approach to dealing with NALC, the Service had moved from a position of jointness and cooperation to one of domination and confrontation. 
The president stated further that he believed the Service’s revised approach should be an issue of greater concern to us than any of the initiatives we had selected to review. As noted in the Objectives, Scope, and Methodology section, we selected the initiatives included in this review based primarily on (1) discussions with the Postal Service and its unions and management associations and (2) the extent to which the initiatives had the potential to address our previous recommendations. EI was not included in our review. Improving labor-management relations at the Postal Service has been and continues to be an enormous challenge and a major concern for the Postal Service and its unions and management associations. With the significant future challenges it faces to compete in a fast-moving communications marketplace, the Service can ill afford to be burdened with long-standing labor-management relations problems. We continue to believe that in order for any improvement efforts to be sustained, it is important for the Service, the four unions, and the three management associations to agree on common approaches for addressing labor-management relations problems so that positive working principles and values can be recognized and encouraged in postal locations throughout the nation, especially in locations where labor-management relations are particularly adversarial. Our work has shown that there is no clear or easy solution to these problems. However, continued adversarial relations could lead to escalating workplace difficulties and hamper the Service’s efforts to achieve its intended improvements. Although not fully successful to date, the limited experience that the Postal Service and its unions and management associations have had with FMCS in attempting to convene a postal summit meeting suggests that the option of using a third-party facilitator to help the parties reach agreement on common goals and approaches has merit. 
The use of FMCS, as recommended in our 1994 report, was requested by the PMG in early 1996 and encouraged by the Chairman of the House Subcommittee on the Postal Service in March 1996. Although efforts to arrange a summit continue, the window of opportunity for developing such an agreement may be short-lived because of contract negotiations involving three of the four unions whose bargaining agreements are due to expire in November 1998. As previously mentioned, in 1994, after formal contract negotiations had begun for APWU, Mail Handlers, and NALC, these unions were generally reluctant to engage in discussions outside the contract negotiations until they were completed. A second approach to improving labor-management relations was included in the postal reform legislation introduced by the Chairman of the House Subcommittee on the Postal Service in June 1996 and reintroduced in January 1997. Under this proposed legislation, a temporary, presidentially appointed seven-member Postal Employee-Management Commission would be established. The proposed Commission would be responsible for evaluating and recommending solutions to the workplace difficulties confronting the Service and would prepare its first set of reports within 18 months and terminate after preparing its second and third sets of reports. The Commission would include two members representing the views of large nonpostal labor organizations; two members from the management ranks of similarly sized private corporations; and three members well-known in the field of employee-management relations, labor mediation, and collective bargaining, one of whom would not represent the interests of either employees or management and would serve as the chair. Some concerns have been raised that the proposed Commission would not include representatives of the Postal Service or its unions or management associations, and thus the results of its work may not be acceptable to some or all of those parties. 
In July 1996, representatives of each of the four major unions testified before the House Subcommittee on the Postal Service that the Commission was not needed to solve labor-management relations problems at the Postal Service. They said that the affected parties should be responsible for resolving the problems. Finally, the Government Performance and Results Act provides an opportunity for Congress; the Postal Service, its unions, and its management associations; and other stakeholders with an interest in postal activities, such as firms that use or support the use of third-class mail for advertising purposes and firms that sell products by mail order, to collectively focus on and jointly engage in discussions about the mission and proposed goals for the Postal Service and the strategies to be used to achieve desired results. Such discussions can provide Congress and the other stakeholders with opportunities not only to better understand the Service’s mission and goals but also to work together to develop and reach consensus on strategies to be used in attaining such goals, especially those that relate to the long-standing labor-management relations problems that challenge the Service. The Postal Service is currently developing its strategic plan as required by the Results Act for submission to Congress by September 30, 1997. The plan is intended to provide a foundation for defining what the Service seeks to accomplish, identify the strategies the Service will use to achieve desired results, and provide performance measures to determine how well it succeeds in reaching result-oriented goals and achieving objectives. Also, as part of this process, the Results Act requires that the Service solicit the views of its stakeholders on the development of its strategic plan and keep Congress advised of the plan’s contents. The Service published notices in the Federal Register asking the public for input on its proposed plan no later than June 15, 1997. 
This comment period provided an opportunity for those who might be affected by decisions relating to the future of the Postal Service to voice their views on the strategies to be used by the Postal Service. Furthermore, the strategic plan is intended to be part of a dynamic and inclusive process that fosters communication between the Service and its stakeholders—including the unions and management associations—and that can help clarify organizational priorities and unify postal employees in the pursuit of shared goals. We received written comments from the Postal Service, the four major labor unions, and one of the three management associations—the League of Postmasters. We also obtained oral comments from the Director of FMCS and the presidents of NAPS and NAPUS. The comments we received from the 9 organizations included diverse opinions on the 3 sections of the report that dealt with (1) the report’s basic message that little progress had been made in improving labor-management relations problems; (2) the implementation of and the results associated with the 10 improvement initiatives; and (3) the opportunities that are available to help the Service, the 4 unions, and the 3 management associations reach agreement on how to address labor-management relations problems. Regarding the report’s basic message, although the nine organizations generally agreed that little progress had been made and labor-management relations problems had persisted, some of them expressed different opinions on the reasons why such problems continued to exist. With respect to the 10 improvement initiatives, many of the organizations expressed different opinions about such matters as how some of the initiatives were implemented, including what role the organizations played in their implementation, and what results were associated with specific initiatives. 
Concerning the opportunities that could be used to help the Service, the four unions, and the three management associations agree on how to address persistent labor-management relations problems, the organizations expressed various opinions about the potential of these opportunities for helping the organizations resolve such problems. Also, some of the organizations believed that entities outside the Postal Service, including Congress, should not be involved in discussions about postal labor-management relations problems. Some of these organizations believed that the parties directly affected by such problems, namely the Service, the four unions, and the three management associations, should be the ones to decide how best to address the problems. We understand that the nine organizations had different perspectives on these matters. However, we believe that the diversity of their opinions reinforces the overall message of this report and provides additional insight as to why little progress in improving persistent labor-management relations problems has been made since the issuance of our September 1994 report. We continue to believe that the establishment of a framework agreement, as recommended in our 1994 report, is needed to help the Service, the unions, and the management associations agree on the appropriate goals and approaches for dealing with persistent labor-management relations problems. Also, we believe that opportunities such as those discussed in this report, including the use of a third-party facilitator, the proposed labor-management relations commission, and the requirements of the Government Performance and Results Act, can provide the Service, the unions, and the management associations alternatives to explore in determining how best to reach agreement on dealing with such problems. Reaching such agreement could improve the Service’s work environment and help maintain its competitive position in a dynamic communications marketplace. 
We incorporated comments where appropriate from all nine organizations, including the Service, the four unions, the three management associations, and FMCS, as their comments pertained to the three major sections of the report in which we discussed our findings. We have included copies of the written comments we received from the Postal Service, APWU, NALC, Mail Handlers, Rural Carriers, and the League of Postmasters, along with our additional comments, as appendixes II through VII, respectively. In the section of the report entitled “Little Progress Has Been Made in Improving Labor-Management Relations Problems,” which begins on page 10, we discussed the report’s basic message that these problems, which were identified in our 1994 report, still persisted. Representatives from the nine organizations generally agreed that labor-management relations problems continued to exist in the Postal Service and that little progress had been made in addressing them. In their written comments, some organizations discussed in more detail the reasons why they believed such problems still existed. Among other things, these reasons included concerns about the Postal Service’s contracting out of some postal functions, the lack of trust between employees and managers, and the importance of permitting the Postal Service and its unions and management associations to operate without interference from outside parties. In addition to these written comments, the president of NAPS told us that he believed the reason for the continued problems was that most employee organizations were more concerned with trying to preserve their own existence rather than trying to help ensure the future security of the Postal Service as an organization. 
He believed that it was time for the unions and the management associations to begin educating their members about the need for these organizations to focus on maintaining the existence of the Service because, without the Service, the employee organizations would have no reason to exist. In the section of the report entitled “Status and Results of Initiatives to Improve Labor-Management Relations,” which begins on page 21, we presented information on the efforts that the Service, the 4 labor unions, and the 3 management associations have made to implement 10 improvement initiatives. In this section, we included the comments that we received from some of these organizations, such as APWU, NALC, and NAPS, which provided us their insights about specific improvement initiatives, including the crew chief program, the postal employee opinion survey, and EI. The organizations that commented on specific initiatives provided information that generally (1) discussed the extent to which they participated in helping to develop and implement specific initiatives, (2) described the outcomes that they believed resulted from specific initiatives, and (3) identified the reasons why they believed specific initiatives had not achieved their intended outcomes. In the section of the report entitled “Continued Need to Improve Labor-Management Relations,” which begins on page 44, we discussed opportunities that are currently available for the Service, the 4 unions, and the 3 management associations to use in attempting to reach agreement on strategies for improving labor-management relations problems. 
The opportunities we discussed in our report included (1) the continued use of a third-party facilitator, such as FMCS, to help these eight organizations agree on common goals and approaches; (2) the establishment of a presidentially appointed commission of outside experts to evaluate and recommend solutions to labor-management relations problems; and (3) the inclusion of the eight organizations, Congress, and other parties interested in postal activities in a dialogue as part of the Government Performance and Results Act that can help all postal stakeholders focus on defining the Service’s mission and goals and the means to achieve such goals. Some of the organizations provided us their comments on one or more of these three issues. Concerning the first issue about the use of a third-party facilitator to help the eight postal parties reach agreement, we received comments from five organizations. However, instead of the third-party facilitator, their comments generally focused more on the PMG’s proposed summit meeting for which the Director of FMCS has been performing the facilitator role in attempting to convene the meeting. We received comments on the meeting from FMCS, APWU, NALC, Rural Carriers, and the League, all of which provided different perspectives on the anticipated merits of the proposed summit meeting. The information we obtained about the meeting is included in the section of the report entitled “Little Progress Has Been Made in Improving Labor-Management Relations Problems.” This section includes information on the summit meeting, which begins on page 19, and the comments on the meeting that we received from the five organizations. The second issue involved the establishment of the seven-member labor-management relations commission that was included in proposed legislation by the House Subcommittee on the Postal Service. We received comments on this issue from the Postal Service and one of the three management associations—the League of Postmasters. 
In its comments, the Service endorsed the proposal by the House Subcommittee on the Postal Service that a commission be established to evaluate and recommend solutions to labor-management relations problems. The Service stated that it would prefer to support the work of such a commission rather than engage in continued recriminations and finger-pointing with the unions, as has often occurred in the past, over why so little progress in addressing such problems had been made. The Service had two suggestions for the Subcommittee’s consideration in the establishment of the commission. First, the Service suggested that a shorter time period (i.e., 1 year instead of 3-1/2 years) be established for the commission to complete its work. The Service stated that 3-1/2 years was too long a period of time for the commission to evaluate and recommend solutions to persistent labor-management relations problems, mainly because we and others have already done a significant amount of work identifying that such problems continue to exist, and that work should not have to be repeated. Second, the Service suggested that the commission be established under the auspices of an independent academic organization to help ensure that (1) the commission’s work could be started as quickly as possible without having discussions about its establishment tied to discussions about the postal reform legislation and (2) the chances that the commission’s recommendations would be accepted could be increased. In its comments on a draft of this report, the League mentioned that as described in the proposed legislation, the proposed commission would not include representatives of postal employees or customers. The League also expressed concern that the members of the commission would be making decisions about how to resolve labor-management relations problems without being responsible for ensuring that such problems were resolved. 
Recent discussions we held with the presidents of the four unions and the remaining two of the three management associations (i.e., NAPS and NAPUS) confirmed that they are also concerned about the composition of the commission as well as the need for it. Given these opinions, the Service expressed a concern that without the involvement of an independent body, implementation of the commission’s recommendations may be difficult to accomplish. Concerning the third issue—the opportunity for parties interested in postal activities to engage in a dialogue as part of Results Act requirements—only APWU provided comments. According to the president of APWU, he received a copy of the Postal Service’s draft strategic plan around June 16, 1997, which he considered rather late. The Results Act required that the final plan be submitted to Congress no later than September 30, 1997. Accordingly, the APWU president believed that such lateness reduced the value of his input on the draft plan and led him to question whether the Service’s attempt to seek input was sincere. As arranged with you, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Ranking Minority Member of your Subcommittee, the Chairmen and Ranking Minority Members of the House and Senate oversight committees, the Postmaster General, and to other interested parties. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-4232; or Teresa Anderson, Assistant Director, on (202) 512-7658. Major contributors to this report are included in appendix VIII. 
As defined in postal labor agreements, a “grievance” is “a dispute, difference, disagreement, or complaint between the parties related to wages, hours, and conditions of employment.” The Postal Service’s process for resolving grievances is similar to that used in the private sector and other public organizations. Depending on the type of grievance, the process may involve 4 or 5 steps, and each step generally requires the involvement of specific postal and union officials. For instance, at each of the first 3 steps in the process, the parties that become involved include lower to higher union and postal management level officials in their respective organizations, such as post offices, mail processing and distribution centers, and area offices. Step 4 in the grievance process occurs only if either the Service or the union believes that an interpretation of the union’s collective bargaining agreement is needed, in which case national level postal and union officials would become involved. The fifth and final step in the grievance process involves outside binding arbitration by a neutral third party. Both employees and the four unions that represent them can initiate grievances. The 5 steps of the process are described below.

Step 1: The employee or union steward discusses the grievance with the supervisor within 14 days of the action giving rise to the grievance. The supervisor renders an oral decision within 5 days. The union has 10 days to appeal the supervisor’s decision.

Step 2: The grievance is filed in writing on a standard grievance form with the installation head or designee. The installation head and the union steward or representative meet within 7 days. The installation head’s decision is furnished to the union representative within 10 days. The union has 15 days to appeal the installation head’s decision.

Step 3: The union files a written appeal with the Area Office’s director of human resources. The union’s Area representative meets with the representative designated by the Postal Service within 15 days. The Postal Service’s step 3 decision is provided to the union representative within 15 days. The union has 21 days to appeal the decision to arbitration (step 5).

Step 4: If either party maintains that the grievance involves a matter concerning the interpretation of the National Agreement, the union has 21 days to refer the matter to the national level of the union and the Postal Service. Representatives of the national union and the postal headquarters meet within 30 days. The Postal Service issues a written decision within 15 days. The union has 30 days to appeal the Postal Service’s decision to arbitration.

Step 5: An arbitrator is selected and a hearing is scheduled under the terms of the National Agreement, depending on the type of grievance. The arbitrator’s decision is final and binding.

The following are GAO’s comments on specific issues included in the letter dated July 21, 1997, from the American Postal Workers Union (APWU). Other issues that were discussed in the letter have been included in the report text. 1. We do not agree with APWU’s assessment that the basic premise of the report—that labor-management relations problems have generally contributed to a sometimes contentious work environment and lower productivity—was misleading. In discussing these issues, we did not suggest, as APWU stated, that such an environment resulted from some top down directive from the unions. Rather, as discussed in our 1994 report, such an environment appeared to have resulted from various problems, including autocratic management styles, adversarial employee and union attitudes, and inappropriate and inadequate performance management systems. We identified these problems mainly through the results of the 1992 and 1993 postal employee opinion surveys and our interviews with postal, union, and management association officials. 
Also, we did not suggest that such problems as the high level of grievance activity and poor relations between postal craft employees and supervisors were the result of union propaganda or internal union politics. Instead, as discussed in our 1994 report, we determined that various data, including (1) increased grievance rates, (2) repeated use of arbitration to settle contract negotiations, and (3) responses to the 1992 and 1993 postal employee opinion surveys, indicated that postal, union, and management association officials needed to change their relationships and work together to help improve the Service’s corporate culture, so that the Postal Service could become more competitive and a better place to work. 2. In its comments, APWU stated that it believed the report’s premise—that the Service has experienced lower productivity or insufficient productivity improvements compared to the private sector—was flawed. APWU also cited various problems with our discussion of TFP in the report and believed that we had implied that TFP was retarded by labor. In addition, APWU expressed concern about our characterization that the Service’s economic performance was causing it to lose market share to its competitors. Furthermore, APWU included in its comments specific data on such topics as (1) comparisons of Service and APWU labor productivity to that of the non-farm labor sector and (2) the Service’s share of the advertising revenue that has been generated by major communications participants, such as newspapers, radio, and television. The discussion on TFP in our report was intended to provide additional information and perspective on the Service’s overall productivity and performance compared to other performance indicators such as net income and delivery scores for specific classes of mail. 
We did not verify the accuracy of the TFP information that we obtained from the Service, nor did we verify the data that APWU included with its comments related to such topics as labor productivity and advertising revenue. Also, we did not suggest, as APWU stated, that the behavior of TFP was retarded by labor. In addition, we stated in our report that the Service was concerned about the fact that customers were increasingly turning to competitors or alternative communications methods. This information was not our characterization, as asserted by APWU, but it was information that we obtained from Service officials. 3. In discussing the crew chief program, APWU commented that we ignored the fact that negotiated group leaders—employees whose responsibilities are similar to those of crew chiefs—were being successfully used at RECs, and the overall performance of the RECs has exceeded expectations. Our primary purpose for including RECs in our review was to determine the extent to which the joint APWU-Service labor-management cooperation memorandum had been implemented, not to review the overall operations of RECs. Thus, we did not review the use of negotiated group leaders at RECs or the overall performance of RECs. The following are GAO’s comments on specific issues included in the letter dated July 17, 1997, from the National Association of Letter Carriers (NALC). Other issues discussed in the letter have been included in the report text. 1. We do not agree with NALC’s opinion that our methodology in reviewing improvement initiatives was fundamentally flawed. The methodology we used for our 1994 report laid the groundwork for concluding that problems in labor-management relations persisted on the workroom floor of various postal facilities. 
The methodology that supported the work for this review involved a similar approach, which generally included (1) interviews with responsible postal, union, and management association officials both in headquarters and at selected postal field locations and (2) reviews of relevant documents. As discussed in the section of the report entitled “Objectives, Scope, and Methodology,” which begins on page 7, this work was intended to help us determine the extent to which progress in improving such problems had been made, including whether the results of specific improvement initiatives had contributed to such progress. As we mentioned in the methodology section, the 32 initiatives we originally identified for our review covered a wide range of postal improvement activities. We recognize that such initiatives offered opportunities for the Service and NALC, as well as the other three unions and the three management associations, to try to improve the postal work environment. However, given our limited time and resources, we were unable to review all 32 initiatives. We determined that our efforts could best be spent by reviewing those initiatives that we believed had significant potential to address the recommendations in the 1994 report, and that, of the 32 initiatives, 10 appeared to fit this criterion. As described in our methodology, our work included (1) discussions with various headquarters and field postal officials responsible for implementing and monitoring the 10 initiatives, (2) discussions with national and field union and management association representatives who were involved with or affected by the implementation of the 10 initiatives we reviewed, and (3) reviews of relevant documents associated with the implementation of the 10 initiatives. 
We believe that by using this approach, we were able to obtain sufficient information that enabled us to determine the overall extent to which progress had been made in improving various labor-management relations problems that were identified in our 1994 report. 2. In its comments, NALC stated that it believed it was inappropriate to compare the rural letter carrier system to the city carrier system. Thus, NALC believed that we should not cite the rural carrier system as a model for the Service and NALC to use in their attempts to revise the city letter carrier system. As discussed in our 1994 report, both the Service and NALC agreed that the city letter carrier system had problems and needed to be changed. We identified various positive attributes of the rural carrier system, such as greater independence for employees in sorting and delivering mail, that we believed the Service and NALC could consider in attempting to revise the city carrier system. However, we did not advocate that city carriers merely adopt the rural carrier system. Rather, we recommended that working together, the Service and NALC should test revised approaches that incorporate known positive attributes of the rural carrier system to determine how such attributes might be used in the city carrier system. We continue to believe that the implementation of this recommendation may help address some of the problems that we found were associated with the city letter carrier system. 3. In its comments, NALC expressed concern about the fact that we did not discuss two initiatives in our report. The two initiatives included (1) the 1992 Joint Statement on Violence and Behavior in the Work Place and (2) the Union-Management Pairs (UMPS) program. Concerning the joint statement on violence, NALC believed that it was curious that although this initiative was included in the original list of 32 initiatives, we did not include it in our report. 
Also, NALC stated that it believed the statement might have been “. . . an instructive area of inquiry, since it portrays the best and worst of union-management joint efforts to address labor-management cultural issues.” According to NALC, the signing of the statement by the Service and the unions was the best aspect of this initiative, but the worst part was the Service’s refusal to recognize the statement as an enforceable agreement against postal supervisors. As explained previously in comment 1, time and resource limitations prevented us from reviewing all 32 initiatives. We believed that the 10 initiatives we selected were those that had significant potential for addressing the recommendations included in our 1994 report. Since we did not review the joint statement on violence, we cannot comment on NALC’s statements about this initiative. However, we believe that such a statement provides the Service, its unions, and management associations an opportunity to work together to solve problems, which may help these organizations improve cooperation between employees and supervisors and reduce workfloor tensions. Concerning the Union-Management Pairs (UMPS) program, NALC stated that it was a joint, cooperative program, one in which postal management and union officials worked together to try to resolve disputes between employees and supervisors without lengthy delays or arbitration. NALC believed that UMPS was a successful program that helped bring about a drastic reduction in grievances and arbitrations and that in its 10 years of existence, it generated a positive labor-management ambiance. Although NALC stated that it wanted to expand the use of UMPS, the Service has refused to do so. Like the joint statement on violence, UMPS had been included in the original list of 32 initiatives, and, as mentioned previously, time and resource limitations precluded us from reviewing all 32 initiatives. 
However, as discussed in our 1994 report, UMPS provided the Service and NALC an opportunity to try to jointly resolve disputes between employees and supervisors before such disputes escalated into formal grievances. We believe that such an effort can help these organizations improve communications and reduce conflicts between employees and supervisors. The following is GAO’s comment on a specific issue included in the letter dated July 22, 1997, from the National Postal Mail Handlers Union (Mail Handlers). Other issues that were discussed in the letter have been included in the report text. 1. In its letter, the Mail Handlers union disagreed with our statement that about 80 percent of employees represented by the four major postal unions have joined and paid dues. According to Mail Handlers, this figure should be higher than 80 percent. Also, Mail Handlers mentioned in its letter the union security provisions of the National Labor Relations Act (NLRA) and its desire to see such provisions applied to the Postal Service, which, if enacted by Congress, would mean that postal employees represented by a labor organization must join and pay dues to that organization. According to PRA, employees have the right, but are not required, to join a labor organization. The overall percentage figure that we included in the report on the number of union members was intended to provide a general perspective on the extent to which those employees represented by unions were actual members of the union. We obtained information on the total number of employees represented by the four labor unions from the Postal Service’s On-Rolls and Paid Employees Statistics National Summary. Also, we recently contacted union officials in the four major postal labor unions to obtain estimated figures on employees who had joined the unions and paid dues. 
As shown in the report text on page 5, union officials estimated the following percentages of union members who had paid dues as of September 1996: 81 percent for APWU, 83 percent for Rural Carriers, 85 percent for Mail Handlers, and 92 percent for NALC. We did not verify the accuracy of the data in the Service’s summary, nor did we verify the accuracy of the data provided by the four unions. In addition, since we did not address the union security provisions of NLRA as they might apply to the Postal Service, we could not comment on this issue. The following is GAO’s comment on a specific issue included in the letter dated June 11, 1997, from the National Rural Letter Carriers’ Association (Rural Carriers). Other issues that were discussed in the letter have been included in the report text. 1. In its letter, the Rural Carriers union discussed its continued involvement in the Quality of Work Life/Employee Involvement (QWL/EI) initiative. Rural Carriers stated that this initiative has been ongoing since 1982 and QWL/EI participants have addressed various substantive work-related issues, such as the implementation and monitoring of automation, new rural carrier training, and safety issues. Rural Carriers also mentioned that no permanent QWL/EI structure exists, mainly because rural carriers who participate are not expected to devote their full time to QWL/EI activities and because participants rotate through the QWL/EI program. The QWL/EI initiative was included in the original list of 32 initiatives that we had identified at the onset of our review. However, as discussed in the section of this report entitled “Objectives, Scope, and Methodology,” which begins on page 7, time and resource limitations precluded us from reviewing all 32 initiatives. Thus, from the list of 32 initiatives, we selected 10 that we determined had significant potential to address the recommendations in our 1994 report. 
Although we did not review the QWL/EI initiative in this report, as discussed in our September 1994 report, we found that when local postal management, unions, and employees were committed to improvement initiatives such as QWL/EI, the results were often positive and had the potential for helping to (1) develop mutual trust and cooperation, (2) change management styles, and (3) increase an awareness that quality of worklife is just as important as the “bottom line.” The following are GAO’s comments on specific issues included in the letter dated July 22, 1997, from the National League of Postmasters of the United States (the League). Other issues that were discussed in the letter have been included in the report text. 1. In its letter, the League commented on a statement we made in the report, which indicated that since 1970, the distinction between NAPUS and the League had become blurred and their memberships overlapped (i.e., many postmasters belonged to both organizations). According to the League, this statement was unclear. Thus, we revised the text to indicate that many postmasters belong to both NAPUS and the League and that both organizations address issues of interest to all postmasters. 2. In its letter, the League mentioned that it asked the Service to implement a specific project known as the Special Services Implementation Task Force. However, the League stated that the Service did not consult or work with the League during the planning stages of the project, and the League was consulted only near the end of the project. Also, the League mentioned that the Service asked the League to participate in the development of training courses. Although the results have not yet been determined, the League stated that the results of this work on training look promising. Since we did not review these initiatives, we cannot comment on the information that the League provided on them. 3. 
In its comments, the League mentioned the Management by Participation (MBP) initiative, which provided the Service and the three management associations an opportunity to help eliminate authoritarian management styles. The League indicated that although MBP was viewed as a worthwhile initiative and helped make various improvements, it was discontinued during or shortly after the PMG’s 1992 postal reorganization. At the beginning of our work, MBP was included in the list of 32 initiatives. However, as discussed in the section entitled “Objectives, Scope, and Methodology,” which begins on page 7, time and resource limitations precluded us from reviewing all 32 initiatives. Thus, we focused our efforts on 10 initiatives that we determined had significant potential for addressing our 1994 recommendations. Since we did not review MBP in this report, we cannot comment on the information that the League provided on MBP. However, in chapter 6 and appendix II of volume II of our 1994 report, we included information on MBP, which was a process for disseminating participative management concepts to postal supervisors, managers, and postmasters so that a more participative work environment could be fostered and realistic solutions to business problems could be developed. 4. In its letter, the League commented on the new compensation system for managers and supervisors, including the EVA program. The League stated that our report implied that most postmasters were included in EVA, but, according to the League, most postmasters were excluded from EVA. In our report, we stated that League and NAPUS officials told us that based on the Service’s decision that nonexempt employees should not be eligible to receive EVA bonuses, about 60 percent of employees represented by these associations were excluded from EVA because they were nonexempt employees. 
We believe that by including this statement in the report, we had already indicated the League’s concern that a majority of the employees it represented was excluded from EVA. The League also commented that it refused to endorse the new pay system because it excluded most of the Service’s postmasters, including most of the League’s members. As suggested by the League, we included this information in the text of the report where the new compensation system was discussed. 5. In its letter, the League suggested that separate meetings between each of the seven employee organizations and the Postal Service might help develop cooperation and trust between the parties. According to the League, after such meetings had taken place, all eight parties could come together for what would hopefully prove to be a more productive and successful meeting. As discussed in this report, in November 1994, the PMG invited the four labor unions and the three management associations to meet with the Service in trying to determine, among other things, how best to implement the recommendations included in our September 1994 report. A key recommendation in our report was the establishment by these eight parties of a framework agreement to outline overall objectives and approaches for demonstrating improvements in the workroom climate of both mail processing and delivery functions. However, we did not specify the means by which the eight organizations should establish such an agreement. Robert E. Kigerl, Evaluator Robert W. Stewart, Evaluator
Pursuant to a congressional request, GAO reviewed the Postal Service's (USPS) efforts to improve employee working conditions and the overall performance of the Service, focusing on: (1) the status and results of the Postal Service's efforts in improving various labor-management relations problems identified in GAO's 1994 report, including how USPS implemented specific improvement initiatives; and (2) approaches that could help USPS and its four labor unions and three management associations achieve consensus on how to deal with the problems GAO discussed in its 1994 report. GAO noted that: (1) little progress has been made in improving the persistent labor-management relations problems that had, in many instances, resulted from autocratic management styles, the sometimes adversarial attitudes of employees, unions, and management, and an inappropriate and inadequate performance management system; (2) these problems have generally contributed to a sometimes contentious work environment and lower productivity for USPS; (3) also, the number of employee grievances not settled at the first 2 steps of the grievance process has increased from around 65,000 in fiscal year (FY) 1994 to almost 90,000 in FY 1996; (4) these problems continue to plague USPS in part because the parties involved, including USPS, the four major labor unions, and the three management associations, cannot agree on common approaches for addressing the problems; (5) this inability to reach agreement has prevented USPS and the other seven organizations from implementing GAO's recommendation to develop a framework agreement that would outline common objectives and strategies for addressing labor-management relations problems and improving the postal workroom climate; (6) since 1994, USPS and its unions and management associations have tried to improve the climate of the postal workplace by implementing specific improvement initiatives; (7) many postal, union, and management association officials told GAO 
that they believed some of these initiatives held promise for making a positive difference in the labor-management climate; (8) however, GAO's review of specific improvement initiatives showed that although some actions had been taken to implement certain initiatives, little information was available to measure their results; (9) in some instances, the initiatives were only recently piloted or implemented, and some had been discontinued; (10) in other instances, although postal and union officials agreed that improvements were needed, they disagreed on approaches for implementing specific initiatives; (11) generally, these disagreements have made it difficult for USPS and its unions and management associations to move forward and work together to ensure that the initiatives' intended improvements could be achieved; and (12) with the significant future challenges it faces to compete in a fast-moving communications marketplace, USPS can ill afford to be burdened with long-standing labor-management relations problems.
We defined the financial services industry to include the following sectors: depository credit institutions, which include commercial banks, thrifts (savings and loan associations and savings banks), and credit unions; holdings and trusts, which include investment trusts, investment companies, and holding companies; nondepository credit institutions, which extend credit in the form of loans and include federally sponsored credit agencies, personal credit institutions, and mortgage bankers and brokers; the securities sector, which is made up of a variety of firms and organizations (e.g., broker-dealers) that bring together buyers and sellers of securities and commodities, manage investments, and offer financial advice; and the insurance sector, including carriers and insurance agents, which provides protection against financial risks to policyholders in exchange for the payment of premiums. Additionally, the financial services industry is a major source of employment in the United States. According to the EEO-1 data, the financial services firms we reviewed for this testimony, which have 100 or more staff, employed nearly 3 million people in 2004. Moreover, according to the U.S. Bureau of Labor Statistics, employment in management and professional positions in the financial services industry was expected to grow at a rate of 1.2 percent annually through 2012. Finally, a recent U.S. Census Bureau report based on data from the 2002 Economic Census stated that, between 1997 and 2002, Hispanics in the United States opened new businesses at a rate three times faster than the national average. Overall EEO-1 data do not show substantial changes in diversity at the management level and suggest that certain financial sectors are more diverse at this level than others. Figure 1 shows that overall management-level representation by minorities increased from 11.1 percent to 15.5 percent from 1993 through 2004. 
Specifically, African-Americans increased their representation from 5.6 percent to 6.6 percent, Asians from 2.5 percent to 4.5 percent, Hispanics from 2.8 percent to 4.0 percent, and American Indians from 0.2 percent to 0.3 percent. Management-level representation by white women was largely unchanged at slightly more than one-third during the period, while representation by white men declined from 52.2 percent to 47.2 percent. EEO-1 data may actually overstate representation levels for minorities and white women in the most senior-level positions, such as Chief Executive Officers of large investment firms or commercial banks, because the category that captures these positions—“officials and managers”—covers all management positions. Thus, this category includes lower level positions (e.g., assistant manager of a small bank branch) that may have a higher representation of minorities and women. In 2007, EEOC plans to use a revised form for employers that divides this category into “executive/senior-level officers and managers” and “first/mid-level officials,” which could provide a more accurate picture of diversity among senior managers. As shown in figure 2, EEO-1 data also show that the depository and nondepository credit sectors, as well as the insurance sector, were somewhat more diverse at the management level than the securities and holdings and trust sectors. In 2004, minorities held 19.9 percent of management-level positions in nondepository credit institutions, such as mortgage bankers and brokers, but 12.4 percent in holdings and trusts, such as investment companies. You also asked that we collect data on the accounting industry. According to the 2004 EEO-1 data, minorities held 13.5 percent, and white women held 32.4 percent of all “officials and managers” positions in the accounting industry. Minorities’ rapid growth as a percentage of the overall U.S. 
population and increased global competition have convinced some financial services firms that workforce diversity is a critical business strategy. Officials from the firms with whom we spoke said that their top leadership was committed to implementing workforce diversity initiatives, but noted that they faced challenges in making such initiatives work. In particular, they cited ongoing difficulties in recruiting and retaining minority candidates and in gaining employees’ “buy-in” for diversity initiatives, especially at the middle management level. Since the mid-1990s, some financial services firms have implemented a variety of initiatives designed to recruit and retain minority and women candidates to fill key positions. Officials from several banks said that they had developed scholarship and internship programs to encourage minority students to consider careers in banking. Some firms and trade organizations have also developed partnerships with groups that represent minority professionals and with local communities to recruit candidates through events such as conferences and career fairs. To help retain minorities and women, firms have established employee networks, mentoring programs, diversity training, and leadership and career development programs. Officials from some financial services firms we contacted, as well as industry studies, noted that financial services firms’ senior managers were involved in diversity initiatives. For example, according to an official from an investment bank, the head of the firm meets with every minority and female senior executive to discuss his or her career development. Officials from a few commercial banks said that the banks had established diversity “councils” of senior leaders to set the vision, strategy, and direction of diversity initiatives. 
A 2005 industry trade group study and some officials also noted that some companies were linking managers’ compensation with their progress in hiring, promoting, and retaining minority and women employees. A few firms have also developed performance indicators to measure progress in achieving diversity goals. These indicators include workforce representation, turnover, promotion of minority and women employees, and employee satisfaction survey responses. Officials from several financial services firms stated that measuring the results of diversity efforts over time was critical to the credibility of the initiatives and to justifying the investment in the resources such initiatives demanded. The financial services firms and trade organizations we contacted that had launched diversity initiatives cited a variety of challenges that may have limited the success of their efforts. First, officials said that the industry faced ongoing challenges in recruiting minority and women candidates. According to industry officials, the industry lacks a critical mass of minority employees, especially at the senior levels, to serve as role models to attract and retain other minorities. Available data on minority students enrolled in Master of Business Administration (MBA) programs suggest that the pool of minorities, a source that may feed the “pipeline” for management-level positions within the financial services industry and other industries, is relatively small. In 2000, minorities accounted for 19 percent of all students enrolled in MBA programs in accredited U.S. schools; in 2004, that student population had risen to 23 percent. Financial services firms compete for this relatively small pool not only with one another but also with firms from other industries. Evidence suggests, however, that the financial services industry may not be fully leveraging its “internal” pipeline of minority and women employees for management-level positions. 
As shown in figure 3, there are job categories within the financial services industry that generally have more overall workforce diversity than the “officials and managers” category, particularly among minorities. For example, minorities held 22 percent of “professional” positions in the industry in 2004 as compared with 15 percent of “officials and managers” positions. According to a recent EEOC report, the professional category represented a possible pipeline of available management-level candidates. The EEOC states that the chances of minorities and women (white and minority combined) advancing from the professional category into management-level positions are lower when compared with white males. Many officials from financial services firms and industry trade groups agreed that retaining minority and women employees represented one of the biggest challenges to promoting workforce diversity. One reason they cited is that the industry, as described previously, lacks a critical mass of minority men and women, particularly in senior-level positions, to serve as role models. Without a critical mass, the officials said that minority or women employees may lack the personal connections and access to informal networks that are often necessary to navigate an organization’s culture and advance their careers. For example, an official from a commercial bank we contacted said he learned from staff interviews that African-Americans believed that they were not considered for promotion as often as others, partly because they were excluded from informal employee networks needed for promotion or career advancement. In addition, some industry officials said that achieving “buy-in” from key employees such as middle managers could be challenging. Middle managers are particularly important to diversity initiatives because they are often responsible for implementing key aspects of diversity initiatives and for explaining them to other employees. 
However, the officials said that middle managers may be focused on other aspects of their responsibilities, such as meeting financial performance targets, rather than on implementing the organization’s diversity initiatives. Additionally, the officials said that implementing diversity initiatives represents a considerable cultural and organizational change for many middle managers and employees at all levels. An official from an investment bank told us that the bank has been reaching out to middle managers who oversee minority and women employees by, for example, instituting an “inclusive manager program.” Studies and reports, as well as interviews we conducted, suggest that minority- and women-owned businesses face challenges obtaining bank credit in conventional financial markets for several reasons, including business characteristics (e.g., small firm size) and the possibility that lenders may discriminate. Some business characteristics may also limit the ability of minority- and women-owned businesses to raise equity capital. However, some financial institutions, primarily commercial banks, have recently begun marketing their loan products and offering technical assistance to minority- and women-owned businesses. Reports and other research, as well as interviews we conducted with commercial banks (including minority-owned banks) and with trade groups representing minority- and women-owned businesses, highlight some of the challenges these businesses may face in obtaining commercial bank credit. For example, many minority-owned businesses are in the retail and service sectors and may have few assets to offer as collateral. Further, many of these businesses are relatively young and may not have an established credit history. Many also are relatively small and often lack technical expertise. On the other hand, some studies suggest that lenders may discriminate against minority-owned businesses. 
We reviewed one study that found that, given comparable loan applications from African-American- and Hispanic-owned firms and from white-owned firms, the applications from the African-American- and Hispanic-owned firms were more likely to be denied. However, assessing such alleged discrimination may be complicated by limitations in data availability. The Federal Reserve’s Regulation B, which implements the Equal Credit Opportunity Act, prohibits financial institutions from requiring information on race and gender from applicants for nonmortgage credit products. Although the regulation was initially implemented to prevent such information from being used to discriminate against certain groups, some federal financial regulators have stated that removing the prohibition would allow them to better monitor and enforce laws prohibiting discrimination in lending. Likewise, at least one bank official noted that Regulation B limited the bank’s ability to measure the success of its efforts to provide financial services to minority groups. We note that under the Home Mortgage Disclosure Act (HMDA), lenders are required to collect and report data on the racial and gender characteristics of applicants for mortgage loans. Researchers have used the HMDA data to assess potential mortgage lending discrimination by financial institutions. Research also suggests that some business characteristics (e.g., limited technical expertise) that may affect the ability of many minority- and women-owned businesses to obtain bank credit, as discussed earlier, may also limit their capacity to raise equity capital. Although venture capital firms may not have traditionally invested in minority-owned businesses, a recent study suggests that firms that do focus on such entities can earn rates of return comparable to those earned on mainstream private equity investments. 
Officials from some financial institutions we contacted, primarily large commercial banks, told us that they are reaching out to minority- and women-owned businesses by marketing their financial products to them (including in different languages), establishing partnerships with relevant trade and community organizations, and providing technical assistance. For example, officials from some banks said that they educate potential business clients by providing technical assistance through financial workshops and seminars on various issues, such as developing a business plan and obtaining commercial bank loans. While these efforts take time and resources, the officials we spoke with indicated that their institutions recognize the benefits of tapping this growing segment of the market. Madam Chairwoman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For further information about this testimony, please contact Orice M. Williams at (202) 512-8678 or at williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Wesley M. Phillips, Assistant Director; Emily Chalmers; William Chatlos; Kimberly Cutright; Simin Ho; Marc Molino; Robert Pollard; LaSonya Roberts; and Bethany Widick. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A July 2004 congressional hearing raised concerns about the lack of diversity in the financial services industry, particularly in key management positions. Some witnesses noted that these firms (e.g., banks and securities firms) had not made sufficient progress in recruiting minorities and women at the management level. Others raised concerns about the ability of minority-owned businesses to raise debt and equity capital. The House Financial Services Committee asked GAO to report on overall trends in management-level diversity and diversity initiatives from 1993 through 2004. This testimony discusses that report and focuses on (1) what the available data show about diversity at the management level, (2) the types of initiatives that the financial services industry has taken to promote workforce diversity and the challenges involved, and (3) the ability of minority- and women-owned businesses to obtain capital and the initiatives financial institutions have taken to make capital available to these businesses. For this analysis, GAO analyzed data from the Equal Employment Opportunity Commission (EEOC); reviewed select studies; and interviewed officials from financial services firms, trade organizations, and federal agencies. GAO makes no recommendations at this time. From 1993 through 2004, overall diversity at the management level in the financial services industry did not change substantially, but some racial/ethnic minority groups experienced more change in representation than others. EEOC data show that management-level representation by minority women and men overall increased from 11.1 percent to 15.5 percent. Specifically, African-Americans increased their representation from 5.6 percent to 6.6 percent, Asians from 2.5 percent to 4.5 percent, Hispanics from 2.8 percent to 4.0 percent, and American Indians from 0.2 percent to 0.3 percent. 
Financial services firms and trade groups have initiated programs to increase workforce diversity, but these initiatives face challenges. The programs include developing scholarships and internships, partnering with groups that represent minority professionals, and linking managers' compensation with their performance in promoting a diverse workforce. Some firms have developed indicators to measure progress in achieving workforce diversity. Industry officials said that among the challenges these initiatives face are recruiting and retaining minority candidates, as well as gaining the "buy-in" of key employees, such as the middle managers who are often responsible for implementing such programs. Research reports suggest that minority- and women-owned businesses have difficulty obtaining access to capital for several reasons, such as that these businesses may be concentrated in service industries and lack assets to pledge as collateral. Some studies suggest that lenders may discriminate, but proving such an allegation is complicated by the lack of available data. However, some financial institutions, primarily commercial banks, said that they have developed strategies to serve minority- and women-owned businesses. These strategies include marketing existing financial products specifically to minority and women business owners.
The United States is currently undergoing a transition from analog to digital broadcast television. With traditional analog technology, pictures and sounds are converted into “waveform” electrical signals for transmission through the radiofrequency spectrum, while digital technology converts these pictures and sounds into a stream of digits consisting of zeros and ones for transmission. Digital transmission of television signals provides several advantages compared to analog transmission, such as enabling better quality picture and sound reception as well as using the radiofrequency spectrum more efficiently than analog transmission. A primary goal of the DTV transition is for the federal government to reclaim spectrum that broadcasters currently use to provide analog television signals. The radiofrequency spectrum is a medium that enables many forms of wireless communications, such as mobile telephone, paging, broadcast television and radio, private radio systems, and satellite services. Because of the virtual explosion of wireless applications in recent years, there is considerable concern that future spectrum needs—for both commercial and varied government purposes—will not be met. The spectrum that will be cleared at the end of the DTV transition is considered highly valuable spectrum—sometimes called “beachfront spectrum”—because of its particular technical properties. In all, the DTV transition will clear 108 MHz of spectrum—a fairly significant amount. In the Balanced Budget Act of 1997, the Congress directed FCC to reallocate 24 MHz of the reclaimed spectrum to public safety uses. Since the terrorist attacks of September 11, 2001, there has been a greater sense of urgency to free spectrum for public safety purposes. The remaining returned spectrum will be auctioned for use in advanced wireless services, such as wireless high-speed Internet access. 
To implement the DTV transition, television stations must provide a digital signal, which requires them to upgrade their transmission facilities, such as transmission lines, antennas, and digital transmitters and encoders. Depending on each individual station’s tower configuration, the digital conversion may require new towers or upgrades to existing towers. Most television stations throughout the country are now providing a digital broadcast signal in addition to their analog signal. After 2006, the transition will end in each market—that is, analog broadcast signals will no longer be provided—when at least 85 percent of households in a given market have the ability to receive digital broadcast signals. During the course of our review, we identified several administrative challenges to implementing a subsidy for DTV equipment. For example, prior to implementing a subsidy program, various determinations need to be made, including (1) which federal entity will administer a subsidy program, (2) whether a rulemaking process is necessary to fully determine and stipulate how the subsidy program will be structured, (3) who will be eligible to receive a subsidy, (4) what equipment will be covered, (5) how information about the subsidy will be communicated to consumers and industry, and (6) what measures, if any, will be taken to limit fraud. One challenge to the DTV subsidy that we identified is determining which entity should administer the subsidy program. An industry representative told us that the implementing agency should have some level of telecommunications expertise in order to be able to set appropriate standards for the equipment being subsidized and to effectively educate consumers about the DTV transition. In our opinion, policymakers might also consider whether the entity has experience administering a household assistance program. 
Based on our discussions with government officials, it appears that no single entity has the combined technical knowledge and subsidy administration expertise that might be necessary to successfully implement a DTV subsidy. For example, while FCC and NTIA have telecommunications knowledge and are responsible for managing the use of the radiofrequency spectrum, neither has experience administering a federal subsidy program of this kind. We asked these agencies about their ability, based on their experience, to administer a DTV subsidy. NTIA had no official comment. FCC officials told us they believe the Commission could have some role, such as defining which equipment would be eligible for the subsidy, but did not believe FCC was best suited to administer the entire subsidy program. Further, an FCC official said it might be advantageous for the administering entity to leverage the expertise of state government agencies to assist with delivering the subsidy to low-income households. We also asked two agencies that have experience administering federal assistance programs, the Department of Health and Human Services and the Department of Agriculture’s Food and Nutrition Service, about their ability to implement a DTV subsidy. Although these agencies have experience with subsidy programs, they do not have expertise in telecommunications. Officials from the Department of Health and Human Services told us the agency would not be well suited to administer a DTV subsidy because their programs, such as Temporary Assistance for Needy Families, are narrowly defined—a household must have children to be eligible for Temporary Assistance for Needy Families—and would not offer broad enough coverage for a DTV subsidy. Similarly, officials from the Food and Nutrition Service said they did not believe their agency would be the best entity to administer the subsidy. 
However, after we asked whether the state agencies that administer food stamps could provide a DTV subsidy to their recipients, Food and Nutrition Service officials said that this might be possible under certain conditions, but that an agreement would most likely have to be reached with each state and, in their view, the states should be paid for the costs they incur in doing so. When we contacted four state health and human services agencies that administer various assistance programs on behalf of the federal government, such as food stamps, all four indicated that it might be possible for the states to provide the DTV subsidy to the low-income individuals who already receive assistance from one or more programs they administer. However, they told us there would be costs associated with implementing a subsidy program, such as staff time, programming costs, postage, and envelopes. One state we contacted estimated that it would cost approximately $552,000 to mail vouchers to the approximately 1.5 million households that receive food stamps, Medicaid, and Temporary Assistance for Needy Families within the state. However, two states told us that if the program ran over a period of time, it would be difficult to track which households had already received the DTV subsidy as people go on and off of assistance over time, so some households could receive duplicate benefits. Further, three of the four states told us that such a program would be burdensome on their limited staff resources. A rulemaking process might be required to implement a DTV subsidy, and if so, this would likely have implications for how quickly a subsidy program could be established. While legislation could broadly define the parameters of the subsidy program and may even prescribe specific elements of the program’s structure and administration, it is not uncommon for a federal agency to determine that a rulemaking process is necessary to more fully detail how a program will be implemented. 
Through a rulemaking, the agency would finalize the rules of the program that were not specifically addressed in the legislation. FCC told us that if the legislation is very specific, a rulemaking process may not be necessary for a DTV subsidy. However, FCC did note that rulemakings have been used in the past after legislation enacted new programs. For example, rulemaking processes have been undertaken several times to make adjustments to the Lifeline Assistance Program since it was established in 1985. The rulemaking process generally takes time because it requires a wide range of procedural, consultative, and analytical actions on the part of the agencies. Sometimes agencies take years to develop final rules. Among other things, the rulemaking process generally requires agencies to (1) publish a notice of proposed rulemaking in the Federal Register; (2) allow interested parties an opportunity to participate in the rulemaking process by providing written data, views, or arguments; (3) review the comments received and make any changes to the rule that they believe are necessary to respond to those comments; and (4) publish the final rule at least 30 days before it becomes effective. Further, the Office of Management and Budget reviews significant proposed and final rules initiated by executive branch agencies other than independent regulatory agencies before those rules are published in the Federal Register. A former official from the Department of Health and Human Services told us that industry participants, interest groups, or other stakeholders can challenge a proposed rulemaking, which can delay the process further. He said that in order to avoid such challenges, it is essential to have the key stakeholders involved early in the process. 
That is, if the key stakeholders have the opportunity to provide input prior to the development of the rulemaking and are satisfied that their concerns are addressed, they will be less likely to file a challenge to the proposed rulemaking. Determining who would be eligible to receive the subsidy could present an administrative challenge to developing a subsidy program. If the government decides not to provide a DTV subsidy to all households, it would need to establish criteria to determine who is eligible. For example, a means test could be imposed to restrict eligibility to low-income households determined to be in financial need of the subsidy. The subsidy could also be limited to only those households relying on over-the-air television signals, on the grounds that these households are likely to be the most adversely affected by the DTV transition. Eligibility for Low-Income Households: If it is determined that a DTV subsidy will only be made available to low-income households, a means test of some kind would need to be used to identify the appropriate target households. Officials from the Department of Health and Human Services told us that using the income-based eligibility criteria of existing social service programs to define eligibility for a DTV subsidy program would be the most efficient way to employ a means test. That is, by using the receipt of an existing program benefit that is means tested, a new program could be effectively implemented without developing a means test specifically for that program. However, we were also told that one of the drawbacks to using these existing programs is that not all who are eligible for any particular program actually choose to apply for and receive benefits. 
This would mean that by only providing a DTV subsidy to those already receiving other assistance, some people who would be eligible for the subsidy based on their underlying income would not qualify for the subsidy because they have chosen not to receive another form of assistance. Officials from the Food and Nutrition Service told us that for the Food Stamp Program, approximately 54 percent of those who would be eligible for the program receive the benefit nationwide. It was thus suggested to us that if recipient lists from social assistance programs were used in developing eligibility determinations for a DTV subsidy, it might be beneficial to use more than one program. By combining the participants of several programs, a DTV subsidy for low-income households would target a higher percentage of needy households than if only one program were used to establish eligibility. For example, FCC told us that the Lifeline Assistance Program uses receipt of any of seven social assistance programs, including food stamps and Medicaid, as an eligibility requirement. Privacy concerns could, however, be a limitation of using existing social welfare programs to develop eligibility for a DTV subsidy because the agencies administering these programs may be prohibited from providing the list of recipients to any outside entity. Under current law, for example, food stamp recipient information might not be available to other federal agencies or to any private party or outside entity that might be involved in administering the subsidy. Another limitation in using these data is that there is continuous change in recipient rolls because of people entering and leaving the program. Those implementing a DTV subsidy program would need to take into account the volatility of recipient rolls in deciding how this information could be used. 
Eligibility for Over-the-Air Households: Some stakeholders we contacted indicated that a DTV subsidy should be focused on or limited to only those households that rely exclusively on over-the-air television. Because no list of these households exists, limiting a subsidy in this manner would require determining who the over-the-air households are—a task that could pose administrative challenges. One possible approach to identifying over-the-air households is to first identify cable and satellite subscribers. A combined list of all cable and satellite subscribers could be used as a mechanism to screen out subsidy applicants who are not qualified because they already subscribe to one of these services. The process of combining cable and satellite subscriber information into a comprehensive list could be a highly challenging task. First, cable industry officials we interviewed expressed concern over providing their subscriber lists to a government agency or another entity. Cable officials told us that under current law, they could not turn over subscriber information to the government without prior permission from subscribers unless they were under a court order. Cable industry officials also told us that any change in current legislation would need to include liability protection for cable and satellite companies because their subscriber lists—which include personal information provided to these companies by subscribers—would be outside their control. An industry official said that even more stringent safeguards would need to be in place if the information were provided to an outside entity—such as a contractor—rather than to a government agency. One cable company official stated that even if the law were changed to allow the company to provide its subscriber lists, it would be placed in the awkward situation of having to inform its subscribers that their names were provided to the government to help administer a subsidy that those subscribers are not eligible to receive. 
The cable company official also stated that subscribers would be sensitive to their information being used in this manner, especially in light of recent security issues related to personal information. A second challenge to developing a national list of all cable and satellite subscribers is the difficulty of merging this information across all cable and satellite companies. Currently, there are over 1,100 cable and satellite companies operating throughout the country, with a total of nearly 90 million subscribers. Information from these companies, which is maintained in various formats, would have to be collected and combined into a comprehensive list of subscribers. Cable industry officials stated that the process of merging and maintaining a list of nearly 90 million subscribers would not be an easy undertaking. For example, one cable industry official estimated that the process of working through all the technical logistics for establishing a list could take 6 to 12 months. Additionally, cable industry officials stated that there is significant “churn” (i.e., the number of people moving on and off subscriber lists) in the industry. For example, one cable company official stated that churn can be as high as 10 percent of subscribers from month to month. Another cable industry official told us that a significant level of resources would be needed to keep such a combined subscriber list up to date. Another possible, albeit difficult, way to determine who the over-the-air households are would be to send queries to cable and satellite providers to ask if particular people who have applied for the DTV subsidy are, in fact, already subscribing to cable or satellite. For cable customers, a database would need to be developed to direct the queries to the applicable provider. According to FCC, the Commission maintains a master database with information on all franchised cable areas—of which there are over 30,000. 
The most identifiable geographic information in that database is the name of the county where each cable franchise is located. If an applicant for the DTV subsidy provided a county of residence, a query could be sent to all the franchised cable areas in that county. However, an FCC official told us that in many counties there are multiple cable franchises operating. Moreover, the FCC official stated that even though there is a contact name for each franchise area, in many cases the contact was someone at the corporate headquarters of the cable company. Thus, we believe that to contact the local cable franchise directly, the database would need to be further developed to include information—perhaps an e-mail address at the local franchise level—to which the query could be sent. This process could be time consuming for both the entity processing the subsidy applications and the cable providers. On the satellite side, we believe querying the satellite providers might not be too difficult because there are only two primary providers. However, people may object to their personal information being sent to the satellite providers as well as to the cable providers in their area. Another option might be to use information maintained by companies that perform subscriber billing for cable and satellite companies. We were told that about six large billing companies provide billing services for a substantial majority of the cable and satellite companies. Representatives from a company that provides identification and credential verification services told us they could verify that individuals applying for a DTV subsidy do not subscribe to a cable or satellite service by checking the applicant’s address against the addresses maintained by the cable and satellite providers’ billing companies. 
To protect the privacy of subsidy applicants, the identification and verification services company told us such queries should be based on an individual’s address rather than name or Social Security number. Company officials also told us that it would likely take a few months to develop this checking process. One of the administrative elements of a subsidy program that would likely need to be determined is exactly what equipment will be subsidized. In making this determination, policymakers might consider both policy issues as well as issues related to the ability of the program to be implemented and managed. From a policy perspective, several of the manufacturers and retailers we contacted told us that they believe it would be most beneficial to consumers if the program did not put highly specific limits on the type of equipment they could buy with the subsidy. In particular, some stakeholders generally believed that eligible consumers should not only be allowed to apply the subsidy toward a basic set-top box, but should also be allowed to apply that amount toward enhanced set-top boxes (those with upgraded features or functions) or digital televisions capable of receiving and displaying digital broadcast signals. Several stakeholders noted that any product that enables consumers to receive digital broadcast signals does the job of ensuring that there is no loss in television service when the transition occurs. Moreover, some said a wide application of the subsidy provides consumers the most choice and promotes the adoption of digital television. An opposing view is that a subsidy should only be designed to ensure that there is no loss of television service when the DTV transition is completed, and therefore the subsidy should only be applicable to a set-top box. 
From the perspective of administering the program, determining what items the subsidy can be applied towards is critical for communicating to manufacturers, retailers, and consumers a key parameter of the program. Some stakeholders noted that either the Congress or the administering agency would need to identify the products that would be subsidized so that manufacturers produce the appropriate equipment. If the intent is to subsidize only simple set-top boxes, FCC officials told us that the subsidy would cover boxes that have only analog outputs. If the Congress or the implementing agency determines that the subsidy will be more broadly applicable, the particular parameters of the program would need to be communicated to the manufacturing industry so that their business plans can proceed. There would also likely be some process by which specific items meeting the parameters of the subsidy program are approved and flagged as eligible for the subsidy. Manufacturers need certainty about what items are approved for the subsidy if they are to place a rebate coupon on or inside of the equipment boxes, along with any related information. Specific identification of subsidized items will also be important for retailers as they make inventory decisions and train staff about how to guide consumers’ purchasing decisions. Also, if retailers are asked to play a part in the administration of the program, such as by accepting vouchers or printing rebate coupons at the time of sale, it will be critical for them to have validation of items that are eligible for the subsidy. And, clearly, consumers need to understand which items they can purchase using the subsidy. Some industry representatives we contacted also expressed concern about the interface between industry and the government in the design of the subsidy program. 
In particular, industry representatives said that the government should work with industry as the subsidy program is developed to ensure that the program is designed in a manner that will provide incentives for manufacturers and retailers to participate. Additionally, some companies noted that the government would need to provide industry with information on the expected scope of the program in order to avoid shortages of equipment at retail. In general, some companies told us that industry should be involved in the development of the program to help ensure that it is designed and implemented efficiently. To successfully implement a DTV subsidy program, eligible recipients will need to understand that a subsidy is available, how to obtain it, which equipment the subsidy can be used for, and where they can obtain the equipment. Thus the agency responsible for implementing the program would need to undertake a communication campaign. At the same time, it could be difficult to provide information about the parameters of the subsidy program if there is not a general understanding about the broader DTV transition. As such, it appears that an information campaign regarding the availability of a subsidy for DTV equipment might need to be coordinated with a more general information campaign about the transition and its ramifications for American households. Three years ago we found that many Americans did not have significant awareness of the DTV transition, and we recommended that FCC explore options to raise public awareness about the transition and the impact it will have on consumers. Since that time, FCC and industry have undertaken efforts to better inform the public about the transition. In March of this year, the Consumer Electronics Association, an association of electronics manufacturers, reported that consumers’ understanding of digital television has improved. 
This association surveyed individuals and found that, compared to past years, there has been an increase in consumer familiarity and understanding of DTV, as well as an increase in the likelihood that over-the-air households will take action to avoid losing television service. Based on our interviews with several stakeholders, it appears that despite these findings many consumers—particularly those who may be the most affected by the transition—may still be unaware or confused about the DTV transition. Several of the company representatives with whom we spoke told us that while consumers are more familiar with the concept of high-definition television, they are still unaware or confused about other aspects of the DTV transition. Some told us that few consumers understand that at some point analog television will cease operation and analog television sets will be unable to receive digital over-the-air signals. We were told that it is especially difficult to provide consumers with a better understanding of this in the absence of a hard transition date. Additionally, some populations might be difficult to reach because English may not be their primary language or because they only receive television over-the-air and have no business relationship with a subscription television provider that would likely provide them with information about the transition. Depending on how a subsidy program is structured and implemented, there may be opportunities for people to defraud the government. For example, one official familiar with government subsidy programs noted that if everyone were eligible for the subsidy, the opportunities for fraud would decline. For this reason, the more restrictive the eligibility requirements, the greater may be the chances for fraud. 
In terms of reducing fraud, those familiar with rebates noted that the more requirements for rebate redemption—that is, the more documentation the consumer must provide to redeem the rebate—the fewer problems with fraud there are likely to be. However, we were also told that increased requirements would tend to reduce the number of people who attempt to redeem the rebate. An additional consideration regarding fraud is the cost of fraud mitigation. A former official from the Department of Health and Human Services told us that while minimizing fraud should be considered in developing a subsidy program, the cost-effectiveness of these efforts should also be measured. For example, we were told that administering systems to mitigate and prevent fraud may be costly and may not be worthwhile, especially if the value of the subsidy is low. While a government subsidy for consumers to purchase DTV equipment could be administered in several ways, each of the subsidy options we examined had advantages and disadvantages. Following is a description of and stakeholders’ views on four DTV subsidy options: a refundable tax credit, government distribution of equipment, a voucher program, and a rebate program. As we noted above, we take no position on whether a subsidy should be implemented, or whether, if a subsidy program is established, it should be implemented in any particular way. Refundable Tax Credit Program: One method that could be used to administer a subsidy program for DTV equipment would be a refundable tax credit, administered as part of the federal individual income tax. A refundable tax credit could be designed to provide qualifying taxpayers a refund greater than the amount of their tax liability before credits. 
Based on the manner in which tax credits work, we believe that a tax credit for DTV equipment would likely be structured such that consumers purchase an eligible set-top box, maintain required information on their purchase, and seek reimbursement for all or some portion of the cost from the federal government for the equipment when they file their federal income taxes. Based on discussions with an official from the Department of the Treasury, it does not appear that this method would be well suited for a DTV subsidy. The Treasury official told us that considerable administrative burdens would be imposed on the Internal Revenue Service (IRS) to administer a refundable tax credit for a one-time subsidy. This official noted that implementation of a new tax credit would require the IRS to change tax forms, as well as instructions, for the years that the program would be in operation. Changing tax forms imposes administrative costs, particularly if tax laws are changed after forms have been developed for a given tax year. Additionally, he noted that IRS Form 1040 is currently completely full, so that any new credit could require the form to be lengthened from two pages to three pages, which would be costly and burdensome. The official also noted that the availability of the tax credit may cause some individuals who otherwise would not file a tax form to do so, which would increase IRS administrative burdens. The Treasury official also noted that there could be compliance problems with a tax credit approach. Because of the small amount of the credit—likely about $50—it would not be cost-effective for the IRS to assign resources to check compliance, thus it would be very difficult to minimize fraudulent use of the credit. In fact, IRS has had difficulty assuring compliance for a refundable tax credit. In particular, for the Earned Income Tax Credit, IRS estimated that roughly 30 percent of the dollars claimed was erroneous. 
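To make the mechanics concrete, the difference between a refundable and a nonrefundable credit can be sketched in a few lines; the $50 credit and the $30 liability below are hypothetical figures, not amounts taken from the testimony.

```python
def amount_owed(liability_before_credits, credit, refundable=True):
    """Apply a tax credit to a pre-credit liability.

    A refundable credit pays out any excess over the liability as a
    refund (shown here as a negative amount owed); a nonrefundable
    credit can only reduce the liability to zero.
    """
    if refundable:
        return liability_before_credits - credit
    return max(liability_before_credits - credit, 0)

# Hypothetical filer whose $30 pre-credit liability is smaller than
# an assumed $50 DTV equipment credit.
with_refundable = amount_owed(30, 50, refundable=True)      # -20, i.e., a $20 refund
with_nonrefundable = amount_owed(30, 50, refundable=False)  # 0; $20 of credit is lost
```

As the sketch suggests, refundability is what would allow the full subsidy to reach filers with little or no tax liability, the very group stakeholders identified as hardest to serve through the tax system.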
We heard from stakeholders that a tax credit for DTV equipment might not be the most helpful to low-income Americans because individuals would have to purchase the equipment with their own money and file—possibly many months later—for a tax refund. Also, we were told some low-income Americans do not file tax returns. We believe the additional costs and burdens for such individuals to file taxes for the purpose of obtaining a tax credit may exceed the value of the credit. Government Distribution: With government distribution, the government provides certain goods for needy citizens. One example of government distribution is the Emergency Food Assistance Program whereby the government provides food, such as dried fruit, non-fat dry milk, and peanut butter, to states for distribution to selected local agencies—usually food banks—which, in turn, distribute the food to soup kitchens and food pantries that serve the public directly. For the DTV transition, the government could directly provide the necessary equipment to individuals, but we found there would be a number of challenges to implementing and administering such a program, and, based on discussions with state social service agencies, it appears that this would be an unwieldy way to administer a DTV subsidy. One challenge would be finding locations for distributing the equipment. We heard from several officials whose state agencies administer benefit programs that using local social services offices as a distribution point would not be feasible. These officials cited the lack of space and staff resources to store, secure, and distribute equipment as reasons why local offices could not be used to administer such a program. Further, stakeholders told us that government distribution does not take advantage of existing retail supply chains that already move large quantities of goods to stores throughout the country. 
While a government distribution program would not require households to pay for equipment in advance of receiving the subsidy, which would be beneficial to low-income households, the program could present other challenges to those eligible to participate. For example, stakeholders we interviewed told us that a distribution program limits consumers’ choices and provides no mechanism for consumers to obtain support if the equipment does not work properly. Additionally, officials from one state agency told us that people obtaining equipment at local offices would have to wait in long lines, which could be problematic for those with physical limitations, such as the disabled and the elderly. Voucher Program: Another mechanism to subsidize DTV equipment could be through a voucher program. A voucher—a coupon or electronic benefit card, similar to a credit card, that provides purchasing power for a restricted set of goods or services—could be provided to households that qualify for a DTV subsidy. The federal government has used vouchers to provide a variety of assistance to households, such as food stamps and housing subsidies. Also, vouchers have been used on a limited basis to provide benefits to consumers for the changeover of certain technology. For example, the Colorado Department of Human Services provided a voucher to individuals who qualified as hard of hearing to purchase text telephones and other specialized telecommunications equipment. For a DTV equipment subsidy using a voucher system, various administrative steps would be necessary to design and implement an effective program. After decisions were made about the specific equipment to be covered, vouchers would need to be distributed to eligible households. Several of those we contacted noted that if the program is to be means tested, state agencies—such as those that administer the Food Stamp Program—might be able to mail vouchers to their existing recipients. 
Additionally, with a voucher program, several administrative steps involving the retail industry would be required. Participating retailers would have to know how the program is structured, which specific items were covered by the subsidy, approximately how many pieces of DTV equipment were expected to be subsidized in a particular area, and how the mechanism for retailer reimbursement would operate. Overall, using vouchers to administer a DTV subsidy might be beneficial for low-income households because such households would not be required to pay for the DTV equipment in advance and then wait to be reimbursed. However, stakeholders told us that this type of program could create a burden on retailers because they must determine the authenticity of the vouchers. Also, stakeholders mentioned that it might be more challenging to include smaller and independent retailers in a subsidy program that uses vouchers. Rebate Program: A rebate program could also be used to administer a DTV subsidy. Rebates generally require consumers to pay the full cost of an item at the time of purchase and then send documentation to an address specified by the manufacturer or retailer to receive a rebate by mail. The documentation required generally includes the original sales receipt, the UPC code from the product packaging, a rebate slip, and the customer’s name, address, and telephone number. In most cases, this paperwork must be sent within 30 days of the purchase, and consumers generally receive their rebates up to 12 weeks later. According to the three rebate experts we interviewed, only about 30 percent of rebates are ever redeemed. While two rebate experts said that redemption rates would likely rise with a larger rebate, such as might be provided with a DTV subsidy, none of the three we spoke with believed that the redemption rate would rise above 50 percent. 
Also we were told that depending on the type of rebate, on average 1 percent to 20 percent of rebate applications are rejected based on the lack of proper documentation. Typically, a variety of decisions are made in developing a rebate program. For example, as we discussed these decisions with stakeholders, various methods of implementing a rebate were highlighted, including placing the rebate coupon inside the equipment box, affixing it to the outside of the box, or printing a coupon at the cash register at the time of sale. The method used would, in part, determine which entities have some administrative responsibility for the rebate program. If a DTV subsidy program were designed to have a rebate coupon placed in or on the box, it would be the responsibility of the manufacturer to do so, while if it were designed to have a rebate coupon generated at the cash register, the retailer would be responsible for managing this process. A consensus on the best rebate method did not emerge from our interviews with industry experts. One of the most difficult elements associated with using a rebate for a DTV subsidy would be applying eligibility requirements. As previously discussed, information about over-the-air and low-income eligibility is not readily available to the rebate fulfillment houses—which are the entities that process rebates for manufacturers and retailers—and there are legal obstacles to the government collecting and providing that information to them. Another downside of rebates is that consumers generally pay the full cost of an item at the time of purchase, which could create a hardship for low-income households. Furthermore, one rebate fulfillment center representative told us that low-income individuals are less likely to redeem rebates than other segments of the population. 
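The budgetary effect of redemption and rejection rates can be illustrated with a rough calculation; the eligible-household count and rebate value below are hypothetical, while the 30 percent redemption and 10 percent rejection figures fall within the ranges cited by the experts we interviewed.

```python
def expected_outlay(eligible_households, rebate_value,
                    redemption_rate, rejection_rate):
    """Estimate total rebate payments.

    Only households that submit an application (redemption_rate) and
    are not rejected for improper documentation (rejection_rate)
    actually receive a payment.
    """
    paid_households = eligible_households * redemption_rate * (1 - rejection_rate)
    return paid_households * rebate_value

# Hypothetical program: 10 million eligible households, a $50 rebate,
# 30 percent redemption, 10 percent of applications rejected.
outlay = expected_outlay(10_000_000, 50, 0.30, 0.10)  # roughly $135 million
```

One consequence worth noting: under these assumptions roughly 70 percent of eligible households would receive no benefit at all, which is part of why stakeholders questioned rebates for a subsidy intended to reach every affected household.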
Similarly, an official from a state agency told us that based on her experience a rebate program is not a good choice if the subsidy is supposed to target low-income individuals because many low-income individuals are not comfortable with rebates and will not redeem them. If eligibility for the subsidy is not restricted, a rebate might provide a good delivery mechanism. A benefit of using a rebate program for a DTV subsidy is that this method could take advantage of the relationships that already exist between retailers, manufacturers, and the rebate fulfillment industry. We identified several government programs that have used or are using rebates or vouchers to subsidize consumers’ purchase of products. While aspects of these programs might provide insight into the establishment of a DTV subsidy, we found, overall, that the programs we reviewed differed in many respects from what might be undertaken for a DTV subsidy. We reviewed three rebate programs that were implemented by local governments to provide incentives for furthering a policy goal, such as clean air, water conservation, and the use of energy-efficient appliances. We also reviewed three voucher programs, including one state program that subsidizes equipment for deaf and hard of hearing citizens and two federal programs that provide assistance to needy households to purchase food. See table 1 for key information about the six programs we reviewed. We believe some aspects of the programs’ implementation, such as the time required to develop a program and the manner in which program information was disseminated, might have relevance to the establishment of a DTV subsidy. For example, for two of the rebate programs, we learned that it took several months to develop and implement the programs, with one rebate program taking 12 months and another taking 18 months to implement. 
In reviewing various other aspects of the programs, such as eligibility determinations and what products were subsidized, we found that differences existed between the voucher and rebate programs that might also provide some insight for a DTV subsidy. For example, for all of the voucher programs we reviewed, benefits were targeted to low-income individuals, and eligibility was specifically defined. In contrast, eligibility for the rebate programs was not based on income; rather, a person only had to reside in the location where the subsidy was being offered or be a water or power customer to be eligible. We also found differences in the types of products subsidized for the rebate and voucher programs that we reviewed. Whereas the rebates subsidized items in an effort to further a policy goal (generally environmental protection), the voucher programs provided recipients with items for their basic needs. Overall, however, we observed that aspects of these programs’ implementation are dissimilar to what might be undertaken for a DTV subsidy. First, choosing not to participate in any of the programs we reviewed would not cause a household to lose any existing service or functionality. In contrast, if a household chose not to take advantage of a DTV subsidy for which it was qualified, and then did not obtain the necessary equipment to receive broadcast digital signals, the household might lose access to broadcast television signals when the transition occurs. Additionally, none of the rebate programs we reviewed are comparable to the size of a potential DTV subsidy in terms of number of people served. While the national voucher programs serve millions of households, they are unlike the DTV subsidy in that they are long-established programs with an entire infrastructure designed to provide benefits to recipients on a recurring monthly basis. 
Due to differences in the scope of the rebate and voucher programs we reviewed and a potential DTV subsidy, it is not clear how applicable the administrative costs of these programs are to estimating the costs of a DTV subsidy. If a subsidy program is implemented, it will pose many challenges for the implementing agency and industry. However, there are other aspects of the DTV transition not related to the implementation of a possible subsidy program that are ongoing and will take time to complete or may pose their own challenges. For example: Under current FCC time frames, the final process for television stations to select their permanent channel placement for their digital signals is ongoing. Broadcast stations began the process of choosing their final DTV channel in February 2005. In August 2006, FCC expects to issue a Notice of Proposed Rulemaking that includes a tentative DTV Table of Allotments once the channel election process is finished. FCC will seek comment on the proposed Table and then issue an order with a Final DTV Table of Allotments, which, at a minimum, would take several months. An FCC official told us that it would likely be sometime in 2007 before all the allotments are finalized. In order for the DTV Table of Allotments to be finalized by the end of 2006, FCC officials told us that they would need to shorten the channel election process time frames that they currently have in place. We were told that once stations know their final channel assignments, they might need to make adjustments to certain equipment. Therefore, we found that for stations that do not have certainty on their assignments until sometime in 2007, equipment modifications will be undertaken well into that year. Currently, a small number of television stations are not yet broadcasting digital signals. 
FCC told us that issues of technical interference and the permitting process for locating and constructing broadcast towers are the primary reasons these stations are not yet online with a digital broadcast signal. For example, for any station located within 200 miles of the Canadian border, coordination and approval from the Canadian government is required, in accordance with international treaties. At present, no requirements for the application of the Emergency Alert System (EAS) apply to stations’ digital broadcast signals. FCC is now considering how requirements will be set. An FCC official told us that rules for EAS on DTV stations that are similar to requirements for analog stations should be developed within a few months, but additional work will look at whether there will be expanded functionality required in the digital environment. According to FCC, the equipment that stations will be required to purchase to meet the basic requirements that are likely to be set before the end of 2005 is not very expensive. Because the requirements for expanded functionality are not yet set, an FCC official told us that it is not clear what the cost of any additional equipment will be. Another challenge that may be posed by the DTV transition relates to antenna reception of digital over-the-air broadcast signals. Many stakeholders said that antennas currently used to view analog over-the-air signals will be sufficient to receive DTV signals and an FCC official told us that many viewers will have improved picture quality with digital signals. However, a few indicated that improved antenna technology may be needed for some households. An antenna manufacturer, a broadcaster, a retailer, and other stakeholders said that the ability to receive digital over-the-air signals is variable and contingent on each household’s geography, among other things, and that some people may need new antennas or adjustment of existing antennas. 
In particular, we were told that adjusting the antenna to receive digital broadcast signals can be more difficult than for analog signals because if the antenna is not aimed correctly, the television may not be able to display any signal. Also, while interference from trees, buildings, and other structures can distort an analog picture, this type of interference can cause a complete loss of digital signals. Ensuring that households understand the transition and how they will be affected is critical to a smooth transition. Any household that does not understand what will occur could be adversely affected. Over-the-air households are the most likely to be affected by the transition; to the extent that cable subscribers are affected, they will likely have support and information provided by their subscription video providers. Based on our work, other specific populations might also be more difficult to reach with needed information about the transition, including low-income households and those who do not speak English as a first language. The consequences of any information gaps are serious because households could lose their access to television signals. During our work on the transition to DTV in Berlin, Germany, we found that an extensive information campaign was widely viewed as critical to the success of the transition. There are many difficult decisions and determinations that will likely be considered if a subsidy program for DTV equipment is developed. In addition, there are unique interfaces between the challenges we identified and the administrative method used to deliver the subsidy that will require careful consideration. For example, if such a program were developed and eligibility were limited to only low-income individuals, it might be advantageous to leverage the infrastructure and expertise that state social service agencies have in providing assistance to needy households. 
But to utilize the state agencies, the subsidy might need to be provided in the form of a voucher because the state agencies have experience mailing information and could mail a voucher to the low-income recipients of other assistance. In contrast, if there were no eligibility restrictions applied to the subsidy, a rebate might be a good method for administering the subsidy because it would draw on the existing relationships between manufacturers, retailers, and rebate fulfillment companies, all of whom have extensive knowledge and experience in developing, advertising, and implementing rebates. However, such a design might render the subsidy less usable by low-income Americans. The return of the spectrum for public safety and commercial purposes is a critical goal for the United States. Implementing a subsidy program for DTV equipment poses a variety of difficult challenges and may not be the only policy option that could help advance the overall goal of reclaiming spectrum. Given the importance of this transition, it seems critical for knowledgeable officials in government and in industry to work together to find the best means to address any issues that may impede progress in completing the DTV transition—and the associated reclamation of valuable radiofrequency spectrum. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For questions regarding this testimony, please contact Mark L. Goldstein on (202) 512-2834 or goldsteinm@gao.gov. Individuals making key contributions to this testimony included Amy Abramowitz, Michael Clements, Andy Clinton, Simon Galed, Eric Hudson, Bert Japikse, and Sally Moino. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The digital television (DTV) transition offers the promise of enhanced television. At the end of the transition, radiofrequency spectrum currently used for analog broadcast television will be used for other wireless services and for critical public safety services. To spur the digital transition while preventing any loss of television service to households, some industry participants and experts have suggested that the government subsidize DTV equipment to enable households to view digital broadcast signals. This testimony provides information on (1) some challenges to administering a subsidy program for DTV equipment, (2) some administrative options for implementing a DTV subsidy, (3) examples of government programs that make use of rebates or vouchers to provide subsidies, and (4) other efforts necessary for the completion of the DTV transition. We discussed administrative challenges to and options for a DTV subsidy with federal and state government officials, electronics manufacturers and retailers, and experts in product promotion. As in our previous work, we take no position on whether a subsidy should be implemented, or whether, if a subsidy program is established, it should be implemented in any particular way. While policies other than a subsidy might help promote the DTV transition, such approaches were not part of this review. We found that several administrative challenges might arise in implementing a subsidy for DTV equipment. One of several key challenges we identified would be determining those eligible to receive a subsidy. If the subsidy were restricted to low-income households or to households that rely exclusively on over-the-air television, methods to identify these households would need to be developed and may prove challenging. Another key challenge would be ensuring that eligible recipients understand the availability of a subsidy, how they could obtain it, and what equipment would be subsidized. 
Effectively communicating this information will likely first require that information about the DTV transition itself is successfully communicated to the public. Several administrative options could be used to provide a government subsidy to help households obtain DTV equipment, including a refundable tax credit, government distribution of equipment, a voucher program, and a rebate program. The suitability of any of these methods depends on aspects of the subsidy's design, such as which entity is most appropriate to administer the subsidy and who would be eligible to receive the benefit. Various government programs make use of rebates or vouchers to subsidize consumers' purchase of products. We reviewed three rebate and three voucher programs that might provide insight for the development of a DTV subsidy and found that differences existed between these types of programs. We observed that eligibility for the voucher programs was specifically defined and the benefits were targeted to low-income individuals, whereas eligibility for the rebate programs was not based on income. Overall, however, we found these programs differed with respect to what might be undertaken for a DTV subsidy. In addition to the administrative challenges of a subsidy program, there are other aspects of the DTV transition that are ongoing and will take time to complete or may pose their own challenges. For example, the channel election process, which will determine each television station's channel placement for its digital signal, will not be final until sometime in 2007, according to the Federal Communications Commission. Another issue that might arise relates to antennas used to receive digital broadcast signals. Although many stakeholders believe that antennas used for analog reception will work well for digital signals, we were also told that reception of digital signals may vary on the basis of a household's geography and other factors.
Numerous agencies at the federal, state, and local levels with varying missions monitor or supervise individuals. Criminal justice agency missions that require monitoring include pretrial and post-trial services, probation and parole services, and immigration enforcement. For pretrial services, judicial agencies monitor defendants at the discretion of the court for a period of time preceding a scheduled court date. Other criminal justice agencies monitor offenders as an alternative to detention. For instance, probation agencies typically monitor offenders whom courts place on supervision in the community, in lieu of incarceration. The Department of Homeland Security’s Immigration and Customs Enforcement agency monitors certain aliens prior to adjudication hearings or deportation. With regard to post-trial monitoring, parole agencies monitor offenders who are conditionally released from prison to serve the remaining portion of their sentences in the community. There are many supervisory and monitoring methods, manual and electronic, used by criminal justice agencies. See figure 1 for several of these methods. Manual methods are routinely used to supervise offenders, including employment verification, compliance searches, narcotic testing, clinical treatment, home or field contact visits, and stakeholder collaboration. There are various programs that require close supervision of individuals, most predominantly state and local probation or parole agencies’ monitoring of selected offender populations (e.g., gang-related and sex offenders). Therefore, as a supplement to the traditional manual methods, many criminal justice agencies use electronic monitoring technologies. Electronic monitoring includes technologies that track individuals’ physical location to help supervise compliance with program requirements designed to ensure public safety. These technologies are not designed to replace manual methods. 
Rather, they are one tool used in concert with other methods for monitoring offenders. Electronic monitoring technologies include voice verification, radio frequency monitoring, and GPS. Voice verification refers to voice recognition technology that can verify the identity of an individual. Applications include low-risk offenders self-reporting their status by telephone. Radio frequency monitoring involves a device connected to a home telephone (landline) that detects a signal from a transmitter worn by the offender, so that authorities can ensure that the offender is at home. However, authorities will not know the location of the offender if he or she leaves. GPS is a U.S.-owned utility that provides users with positioning, navigation, and timing services. The frequency with which GPS data are collected and reported can vary. Passive tracking technology collects and stores location and status data, which are reported retrospectively. Active tracking technology can accomplish near-real-time collection and reporting of location and status data. DOJ’s Office of Justice Programs (OJP) works in partnership with the justice community to provide information, training, coordination, and strategies for addressing crime-related challenges. NIJ is an office of OJP that acts as the research, development, and evaluation agency of DOJ. NIJ’s mission is to provide objective and independent knowledge and tools to reduce crime and promote justice, particularly at the state and local levels. The NIJ Policy, Standards and Grant Management Division develops and publishes voluntary consensus equipment standards that specifically address the needs of law enforcement, corrections, and other criminal justice agencies. OTS is an electronic monitoring technology consisting of hardware, such as an ankle bracelet (see fig. 2), used for collecting and transmitting data on an individual’s location, and software for analyzing data collected from the hardware device. 
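The passive/active distinction described above can be sketched as follows; the class and method names are illustrative only and do not correspond to any OTS product or to the draft standard.

```python
import time

class TrackingUnit:
    """Illustrative model of passive vs. active offender tracking."""

    def __init__(self, mode):
        if mode not in ("passive", "active"):
            raise ValueError("mode must be 'passive' or 'active'")
        self.mode = mode
        self.stored_fixes = []  # location/status data held on the device

    def record_fix(self, lat, lon):
        fix = {"time": time.time(), "lat": lat, "lon": lon}
        if self.mode == "active":
            self._transmit([fix])          # near-real-time reporting
        else:
            self.stored_fixes.append(fix)  # stored for later upload

    def upload_stored(self):
        """Passive mode: report accumulated fixes retrospectively."""
        self._transmit(self.stored_fixes)
        self.stored_fixes = []

    def _transmit(self, fixes):
        # Stand-in for the upload to the monitoring agency's software.
        print(f"reporting {len(fixes)} fix(es)")
```

In this sketch an active unit reports each location fix as it is recorded, while a passive unit accumulates fixes and reports them only when an upload occurs, mirroring the retrospective reporting described in the text.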
As written in the current draft, the OTS standard pertains to devices using passive tracking or active tracking technology, such as GPS. See figure 3 for a graphical depiction of how the components of GPS-based OTS interact to collect and transmit location data. To develop the OTS standard, NIJ established the Advisory Working Group (AWG) and the Special Technical Committee (STC). The AWG reviews the work of the STC and provides high-level guidance on issues that affect users, service providers, and manufacturers. It is composed of senior-level representatives from selected stakeholder groups and individuals experienced in standards development. The STC’s role is to identify requirements for OTS technology, consult with leading manufacturers, and develop minimum performance requirements and associated testing methods for equipment certification. The STC is composed of criminal justice practitioners and subject matter and technical experts. See figure 4 for NIJ’s organization that supports the development of the OTS standard. In addition to the OTS minimum performance requirements documented in the draft standard, the STC has drafted companion documents to provide guidance on implementing offender tracking programs and OTS equipment certification programs. Specifically, the Criminal Justice Offender Tracking System Selection Application Guide provides guidance about the functionality, selection, use, and maintenance of OTS. The Criminal Justice Offender Tracking System Certification Program Requirements and the Criminal Justice Offender Tracking System Refurbishment Service Program Requirements address accreditation requirements for certification bodies. There are numerous accredited national and international standard development organizations that have published thousands of equipment standards in use today. 
ANSI, which has accredited over 200 standard development organizations, requires adherence to a general approach displayed in figure 5 when developing American standards. NIJ collaborated with stakeholders by leveraging expertise from a broad variety of criminal justice and technical experts. However, earlier and continued collaboration with OTS manufacturers could have better informed and facilitated development of the OTS standard. Coordination between NIJ and manufacturers has since improved, and manufacturers’ major concerns have been addressed. NIJ’s process for developing the OTS standard is consistent with ANSI criteria for accrediting organizations. For instance, NIJ sought and involved participants from diverse backgrounds with the objective of achieving a balance of interests. Participants in the OTS development process include criminal justice practitioners from all levels of government representing parole, probation, and pretrial services agencies. NIJ also made efforts to leverage any national or international standards that apply, and solicited and incorporated feedback on the draft standard and companion documents through two public comment periods. In particular, NIJ formed working groups by appointing members who represent the OTS user community, relevant fields of technology, and affected professional associations. For example, NIJ created the STC and the AWG to inform the development of the OTS standard. In addition, NIJ efforts extended to collaborating with subject matter experts such as those in the U.S. Air Force and the National Institute of Standards and Technology (NIST)—leveraging both organizations’ technological backgrounds. For example, the Air Force contributed information on GPS for the STC’s consideration so that the STC could more fully understand the technology. Similarly, NIST also contributed its technical expertise related to its ongoing work with location and tracking systems. 
While the standards development process NIJ employed for developing the OTS standard is consistent with the process outlined by ANSI, earlier and ongoing inclusion of OTS manufacturers could have expedited development of the OTS standard. See figure 6 outlining selected events throughout the OTS standard development process. The Guide to the Project Management Body of Knowledge emphasizes the importance of considering stakeholder equities and ensuring their ongoing involvement throughout the entire project life cycle. It recognizes that stakeholders’ views and interests can be varied, and states that overlooking the views of a stakeholder that will be negatively affected can result in an increased likelihood of failure, delays, or other negative consequences to a project. NIJ’s approach for developing the OTS standard is described by agency program officials and STC members as practitioner-driven. Practitioners are those who use OTS equipment when tracking the location of individuals. Initially, STC practitioners created a list of criminal justice needs that they sought to be addressed through OTS technology. Subsequent to this assessment of needs and the development of corresponding equipment performance requirements, the technical experts on the STC were tasked with developing corresponding test methods. In May 2011, approximately 1-1/2 years after the development process began, manufacturers, who are to voluntarily ensure their equipment conforms to the standard, had a means to formally provide their input. Specifically, on May 12, 2011, NIJ held a manufacturers’ workshop to seek manufacturer input on the standard. According to manufacturer representatives with whom we met, manufacturers expressed significant concerns related to the feasibility of many requirements and associated testing methods in the OTS standard. 
For example, two manufacturers we met with reported that it was unlikely that existing OTS equipment in the market could pass performance requirements in the draft standard as written, since current technology did not meet the expressed need. This is particularly important to the manufacturer community, as the manufacturers are the ones that ensure their equipment meets requirements in the standard and bear any related costs and market consequences if their equipment does not meet the standard. Similarly, NIJ had not identified the need for refurbished equipment certification program requirements. This is significant, as refurbished equipment is routinely provided by OTS manufacturers as part of their service agreements with government agencies. Approximately 1 year after the manufacturers’ workshop, NIJ had not provided feedback to manufacturers regarding their concerns. Therefore, based on their review of the draft standard circulated during the first public comment period, OTS manufacturers could not tell whether NIJ had taken action to incorporate their concerns into the draft OTS standard. On July 18, 2012, in a joint letter to NIJ near the conclusion of the first public comment period, a group of manufacturers wrote the following, “The Manufacturers are very concerned that we have received absolutely no feedback regarding the information we provided to the , and that nothing has been incorporated into the standard.” NIJ officials we met with reported that they considered manufacturer input. Specifically, they reviewed manufacturer comments received at the 2011 workshop as well as those received on the first OTS standard draft during the public comment period from June 6 through July 23, 2012. However, at the time, NIJ officials told us that they were focused on working to address comments from all stakeholders and, therefore, did not immediately communicate to manufacturers if or how their comments were being addressed. 
We reviewed revisions made to the OTS standard since the first draft and formal comments submitted in response to both the first and second comment periods along with NIJ’s responses, and met with STC members and selected manufacturers. According to our review, earlier and ongoing involvement of OTS manufacturers in the standard development process could have better informed and expedited the OTS standard development process. OTS manufacturers could have contributed to NIJ’s overall understanding of the technology at the forefront of the process since they act as both developers and service agreement providers to numerous government agencies. For example, OTS manufacturers could have better informed and facilitated development of the OTS standard by providing insights on OTS capabilities and limitations at the outset. Manufacturers could have further clarified whether existing OTS technology could meet each performance requirement and testing method shortly after the STC members conceived them, rather than after the first draft of the OTS standard had been developed. For instance, the detection of certain methods used by offenders to avoid location monitoring is either not fully developed or not available to all manufacturers. While the OTS standard and associated testing methods remain under development, coordination between NIJ and manufacturers has improved since 2012. For example, through the second public comment period for the draft standard, NIJ has communicated to the manufacturers that their major concerns related to minimum performance requirements and testing methods have been addressed. In addition, according to NIJ officials, at the end of the public comment periods, NIJ reached out to each manufacturer that provided comments. On the basis of our analysis, the current draft OTS standard and changes proposed in response to the second public comment period generally reflect input manufacturers have provided NIJ. 
For instance, as a result of stakeholder input, the STC has developed refurbishment service program requirements, and it has also revised certain performance areas in the draft standard as optional based on available technology. NIJ is currently in the final stages of OTS standard development and plans to issue the standard by March 2016. As NIJ works to finalize the standard, it has invited manufacturers to participate in assessing the viability of test methods to be used when validating whether an OTS meets requirements set forth in the standard. Specifically, it has asked manufacturers to provide samples of their equipment. At least one manufacturer we met with is participating in this process by providing its OTS equipment for testing, and NIJ reports that an additional two manufacturers have as well. NIJ’s draft OTS standard sets minimum performance requirements that address common operational and circumvention detection needs identified by the 9 criminal justice agencies from which we collected procurement and policy documents. Agencies’ specific performance requirements varied and were sometimes more or less rigorous than the draft standard, based on factors such as the type of offender supervised and environmental conditions in their jurisdictions. Furthermore, these agencies did not always define performance requirements corresponding to their needs, such as specific location accuracy requirements. By setting minimum requirements for a range of commonly identified offender tracking system needs, the standard could help agencies more thoroughly consider and develop contractual requirements and help ensure their needs will be met. Officials from all 10 of the agencies we selected stated that implementing a standard would be beneficial because, among other things, it could provide objective information on performance that could inform their procurement processes. 
Agencies we reviewed, at times, also defined additional requirements specific to their circumstances that are not in the draft standard, such as a two-way communication feature that allows the offender and officer to speak to each other. NIJ officials stated the standard is meant to address performance needs that are common to a broad range of agencies. The draft standard addresses common operational and circumvention detection needs, such as location accuracy, the ability to obtain an offender’s location on demand, programming “zones”—geographical areas an offender is or is not to enter—and alerts to report device tampering, among others. Some of these operational and circumvention detection needs are discussed below. See appendix I for additional information on specific requirements in the draft standard and summary data on the extent to which the requirements met stakeholder needs. Testing conditions. Environmental factors, such as cloud cover, could affect the performance of offender tracking systems. To help ensure replicable and fair testing, the draft standard defines specific conditions for testing each performance requirement. For example, the outdoor location accuracy test is to be performed when a minimum cellular speed is achieved, there is a clear view of the sky, and there is limited cloud cover, among other conditions. Location accuracy. One of the primary objectives of OTSs is to continuously track the location of offenders. NIJ’s draft standard includes performance requirements for both indoor and outdoor location accuracy. Specifically, it calls for OTS to provide a location that is accurate within 10 meters 90 percent of the time in an open air environment with no obstructions. It also calls for OTS to provide a location that is accurate within 30 meters 90 percent of the time when placed in an 8-foot by 8-foot single-story structure. 
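As a rough illustration of how a requirement of this form might be evaluated, the check below determines whether a set of measured position errors satisfies a "within X meters, Y percent of the time" threshold. This is a hypothetical sketch, not the draft standard's actual test method, and the sample error values are invented.

```python
# Hypothetical compliance check for an accuracy requirement such as
# "within 10 meters 90 percent of the time" (outdoor) or
# "within 30 meters 90 percent of the time" (indoor).

def meets_accuracy_requirement(errors_m, threshold_m, required_fraction=0.90):
    """Return True if at least `required_fraction` of the error samples
    fall within `threshold_m` meters of the surveyed true position."""
    if not errors_m:
        return False
    within = sum(1 for e in errors_m if e <= threshold_m)
    return within / len(errors_m) >= required_fraction

# Invented example: 19 of 20 outdoor samples within 10 m -> 95%, which
# satisfies the 90 percent requirement.
outdoor_errors = [3.1, 4.8, 7.2, 9.9, 2.0] * 3 + [1.5, 6.0, 8.8, 9.1, 12.4]
print(meets_accuracy_requirement(outdoor_errors, threshold_m=10.0))  # True
```

Note that the draft standard also constrains the test conditions themselves (clear sky, limited cloud cover, minimum cellular speed), which a real test protocol would enforce before any samples are counted.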
The nine agencies we reviewed identified location accuracy as important, but none of the agencies had developed a specific accuracy requirement. The officials from the agencies we interviewed also noted that they must track offenders in a variety of settings, such as urban areas with high-rise buildings, which are not accounted for in the draft standard. However, NIJ officials and STC members responsible for developing the standard stated that adding additional types of indoor environments would increase the cost of testing. NIJ and manufacturers agreed that it is important that the tests not be too costly so that manufacturers would voluntarily participate in the standard and consumer prices would not be significantly affected, since the cost of testing could be passed on to users. Furthermore, as discussed later in this report, there are inherent limitations to the GPS technology that prevent it from always providing accurate location data in certain conditions, and NIJ’s guide provides additional information on addressing these challenges. On-demand location. On-demand location allows agencies to determine the most recent location of an offender. The draft standard calls for OTSs to be able to provide an on-demand location within 3 minutes of a request. Five of the nine agencies we reviewed defined an on-demand location requirement, with two of the five agencies specifying that they require the ability to instantly receive an offender’s location and status. Representatives from all three manufacturers with whom we met stated that their OTSs cannot provide “instant” location updates because of limitations including GPS and cellular technology, and that while quicker response times are possible, the 3-minute time frame is a reasonable requirement for the minimum performance standard. More specifically, these representatives emphasized that the 3-minute time frame is appropriate because of the number of steps that must occur to obtain an offender’s location. 
Such steps include, for example, the software calling out to the tracking device through a cellular network to acquire data, the device collecting the GPS satellite signals to acquire location data, calculating location data, and transmitting the location data back to the agency. Zones. An important feature of OTSs is the ability to develop zones. As shown in figure 7, inclusion zones are geographic areas where an offender is scheduled to be, such as home or work; exclusion zones are geographic areas where the offender is not permitted to visit, such as a victim’s home, schools, or outside the state or county border. The draft standard calls for OTSs to configure zones in the shapes of circles, rectangles, and arbitrarily shaped polygons, as well as be able to have zones within zones. Officials from one agency explained, for example, that it was important that they be able to draw precise exclusion zones around areas such as schools to prevent the system from alerting when the offender is driving by the location. The draft standard also calls for OTSs to generate zone templates that store a minimum of 50 predefined inclusion or exclusion zones, which agencies can apply to any offender. Officials from one agency explained that zone templates are useful when common exclusion zones such as county and state borders or schools need to be applied to many offenders. The zone shape and zone template requirements in the draft standard are more comprehensive than any of the requirements established by the nine agencies we reviewed. For example, eight of the nine agencies we reviewed did not define specific zone shape requirements. Alert notifications. Another important feature of OTSs is to provide alerts to notify an agency of a number of different events. 
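The zone geometries discussed above (circles, rectangles, and arbitrarily shaped polygons) reduce to standard point-in-region tests. The sketch below is a simplified planar illustration, not OTS vendor code or the draft standard's test method; a real system would work in latitude/longitude with proper geodesic math.

```python
# Hypothetical point-in-zone checks for the three zone shapes the draft
# standard calls for. Coordinates are treated as planar x/y for simplicity.
import math

def in_circle(point, center, radius):
    return math.dist(point, center) <= radius

def in_rectangle(point, lower_left, upper_right):
    (x, y), (x1, y1), (x2, y2) = point, lower_left, upper_right
    return x1 <= x <= x2 and y1 <= y <= y2

def in_polygon(point, vertices):
    """Ray-casting test for an arbitrarily shaped polygonal zone."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# An exclusion zone drawn tightly around a school, as one agency described,
# so that merely driving past does not trigger an alert:
school = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(in_polygon((2, 1), school))  # True: offender inside the zone
print(in_polygon((5, 1), school))  # False: passing nearby, outside the zone
```

A "zone within a zone" can then be expressed as one test nested inside another, and a zone template is simply a stored list of such shapes applied to many offenders.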
These events include, among others, occasions when an offender tampers with the tracking device by cutting it off or trying to remove it by stretching it over his or her foot; when an offender violates zone rules by crossing the border of an exclusion or inclusion zone; when the GPS location is lost; when cellular communication is lost; and when the tracking device battery is low. Alerts for tampering with the device and low battery are particularly important because cutting the device off and letting the battery die were the most common circumvention methods reported by officials at eight of the nine agencies we reviewed. Tamper and zone violation alerts: The draft standard calls for the OTSs to provide alerts within 3 minutes of an ankle strap being cut and within 4 minutes of ankle strap stretching and zone violations. The tamper alert requirements in the draft standard are consistent with the requirements established by five of the nine agencies we reviewed. Similarly, the zone violation alert requirements are consistent with the requirements established by four of the nine agencies we reviewed. The remaining agencies established requirements for immediate notification of tamper events and zone violations, though they did not define a time parameter for “immediate.” Representatives from the three manufacturers we met with stated that as with the on-demand location feature, instantaneously sending alert status information is not currently feasible with their OTSs. Rather, the 3- to 4-minute maximum time frame in the draft standard for producing an alert was feasible and would sufficiently test for the OTSs’ ability to provide a near real-time alert. NIJ officials explained that this time frame was determined by the practitioners on the STC and balances their performance needs with the state of the technology. Loss of GPS and cellular alerts: The draft standard requires an alert within 4 minutes of loss of GPS or cellular communication. 
This time period was consistent with the requirements established by all of the agencies we reviewed that had defined such requirements. Officials from one agency we met with explained that GPS and cellular communications are lost frequently in their jurisdiction, in areas such as subways, large office buildings, and basements. Therefore, this agency required an alert notification after a number of hours without GPS or cellular communications to avoid overwhelming officers with alerts. Another agency we met with did not require any alerts for loss of GPS or communications because it supervised offenders who were not on probation or parole. In recognition that agencies may wish to delay alert notifications in areas where offenders often lose cellular communications, the draft standard also calls for OTSs to have the ability to alert after communications have been lost for 1 hour. NIJ officials explained that STC members included the 1-hour alert requirement in the draft standard to reflect a more typical time frame used by practitioners. They further stated that agencies could continue to request shorter or longer notification requirements from their OTS vendors based on their individual needs. Low battery alert: The draft standard calls for OTSs to provide a low battery alert prior to the battery completely discharging, but it does not specify exactly when this alert is to occur. Eight of the nine agencies we reviewed required a low battery alert, but the time period for when they wanted to receive the alert varied. The draft standard also addresses other battery performance needs, such as battery life. For more information see appendix I. Optional circumvention requirements. Metallic shielding is the use of metallic material to block GPS signals. Jamming is the use of an electronic device to block GPS or cellular signals. Both of these circumvention methods can prevent agencies from tracking an offender’s location. 
The draft standard includes optional performance requirements for the detection of metallic shielding and jamming. According to members of the STC, these requirements are optional because only one manufacturer offered jamming detection capabilities and had developed and patented shielding detection capabilities at the time the standard was being drafted. Further, they believe it is important to have a standard with performance requirements in which several manufacturers would voluntarily participate. According to our review, one of the nine agencies required metallic shielding and GPS jamming detection capabilities as part of its procurement process. Officials from eight of the nine agencies reported that shielding and jamming were not considered common circumvention methods. However, officials from one agency explained that jamming may be occurring, but they did not have evidence, such as recovered jammers or alert data, to support that it is a common occurrence. Officials from the nine agencies generally agreed that making shielding and jamming detection optional performance requirements is reasonable. While one of the nine agencies established a shielding or jamming requirement, officials from five of the eight agencies that had not established such a requirement stated that these circumvention detection capabilities are or could be useful. Historical data. OTSs generate a considerable amount of data on each offender. The draft standard calls for historical location data, status of all alerts, and offender identifiers to be exported into a defined comma-delimited text file, a widely used format. All nine agencies we reviewed had established a requirement to have access to historical data. Officials from these agencies stated that accessing historical data is important because the data could be needed as evidence in an investigation, for example. 
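A minimal sketch of a comma-delimited export like the one the draft standard calls for might look as follows. The column names and sample records here are assumptions made for illustration; the standard's actual defined file layout is not reproduced.

```python
# Hypothetical export of historical tracking data to a comma-delimited
# text format, covering the three data categories the draft standard
# names: location data, alert status, and offender identifiers.
import csv
import io

def export_history(records):
    """Write offender ID, timestamp, location, and alert status rows
    to a comma-delimited text stream and return it as a string."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["offender_id", "timestamp_utc", "lat", "lon", "alert_status"])
    for r in records:
        writer.writerow([r["offender_id"], r["timestamp_utc"],
                         r["lat"], r["lon"], r["alert_status"]])
    return out.getvalue()

# Invented sample data for illustration only:
history = [
    {"offender_id": "A-1001", "timestamp_utc": "2015-06-01T12:00:00Z",
     "lat": 38.8977, "lon": -77.0365, "alert_status": "none"},
    {"offender_id": "A-1001", "timestamp_utc": "2015-06-01T12:01:00Z",
     "lat": 38.8980, "lon": -77.0370, "alert_status": "zone_violation"},
]
print(export_history(history))
```

Because the output is plain comma-delimited text, investigators or analysts can load it into common spreadsheet or analysis tools when the data are needed as evidence.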
In addition to the requirement to make historical data available, some agencies also specified particular business practices, such as record retention time frames. For example, one agency required that the OTS data be retained for 7 years. NIJ’s guide also provides further guidance on retaining offender tracking data, including taking into account federal, state, and local laws or policies that require certain data be maintained for a specific number of years. Robustness. OTS devices are worn on the body and may be subject to wear and tear and a number of different environmental conditions, depending on factors such as where the offender lives and works. The draft standard calls for OTSs to function properly after being exposed to extreme temperatures ranging from -4 degrees Fahrenheit to 122 degrees Fahrenheit, being immersed in 2 meters of water, undergoing different shock tests, and being exposed to vibration, among other things. One agency we reviewed had not defined any robustness requirements and none of the remaining eight agencies had established as many or as specific robustness requirements as those in the draft standard. For example, seven agencies required the OTS device to be shock resistant, but did not define what this meant. In addition, none of the agencies had established vibration exposure requirements. However, four agencies established robustness requirements that were more rigorous in certain areas. For example, one agency required the device to be waterproof up to 50 feet, while another agency called for the device to function in conditions up to 135 degrees Fahrenheit. In addition to the performance areas identified as part of the draft OTS standard, the 9 agencies we reviewed also had a variety of individualized needs. These needs were not, however, consistent across agencies. For example, 3 agencies required the OTS to have motion detection. 
Officials from 1 agency explained that a no-motion alert could indicate that the offender is experiencing a medical emergency or has removed the ankle bracelet. In addition, 1 agency required an OTS with two-way communication that would allow the offender and officer to speak to each other. Officials from this agency said that this has been a useful tool that has enhanced offenders’ compliance. One agency also required victim support tools such as beepers or cell phones to notify victims of pertinent alerts from their offenders’ tracking systems. Further, agencies had different analytical requirements. For example, 1 agency required the ability to automate crime scene correlation analysis. Crime scene correlation analysis involves comparing offenders’ location data against the locations of crimes to identify potential suspects or witnesses. Another agency required analysis tools to identify common places at which each offender spends time. Officials from this agency explained that they use the analysis to help find offenders in the event that they abscond. NIJ and STC members stated that the standard is meant to establish minimum performance requirements that would be common to a broad range of criminal justice uses. Further, they stated that agencies could continue to specify additional requirements beyond those in the standard as part of their individual procurement processes. In addition, the technical experts on the STC with whom we met stated that as OTS technology advances, the common needs of agencies may also change. It would, therefore, be important to periodically reassess the minimum performance requirements in the standard to determine if they are still valid or if they should be changed to address changes in practitioners’ needs or advances in technology. This is consistent with NIJ’s standard development process, which calls for standards to be reevaluated every 3 to 5 years. 
Officials from the 10 criminal justice agencies we met with also identified programmatic challenges with implementing offender tracking programs, such as managing public expectations of what the technology can achieve, as well as technical limitations that could affect the success of their offender tracking programs. NIJ’s draft guide provides information and guidance on these challenges and other considerations. In recognition of the range of agencies, environments, resources, and objectives of offender tracking, the draft guide does not offer “one size fits all” solutions. Challenges commonly cited by officials from the 10 agencies we met with included public expectations, establishing response protocols, and managing workloads. The draft guide discusses these and other programmatic considerations that can affect the success of an electronic monitoring program. Public expectations. One of the challenges officials cited was misconceptions among the public about how offender tracking programs operate. According to officials, common misconceptions include the beliefs that (1) officers are stationed at computers and watch the live movement of offenders 24 hours a day, 7 days a week, and (2) offender tracking technology allows officers to prevent bad behavior before it happens. Investigating crimes. Global Positioning System (GPS) data from offender tracking systems (OTS) can be used to help investigate and solve crimes. For example, officials from 1 agency we met with reported that two sex offenders were identified as suspects in the killing of four women in California based upon the GPS data collected from the OTS, which placed them at the crime scenes. OTS data can also help eliminate offenders as suspects. In Florida, the mother of an abducted boy pointed to a sex offender who lived in the vicinity as a suspect. The GPS data collected by the OTS showed that the offender had not been at the boy’s location and helped law enforcement exclude the offender as a suspect. 
Agency officials reported that officers are rarely stationed at computers watching the live movement of offenders. Instead, as one of many supervisory and monitoring methods, practitioners commonly rely on OTS devices collecting location information and developing alerts that notify them when offenders may be violating restrictions imposed upon them. While OTS devices do collect data on offenders’ location, the information is not sufficient for officers to make definitive conclusions regarding offenders’ behavior. As officials from 1 agency noted, offenders can commit crimes without setting off any alerts. In addition, some offenders may purposely keep the device on to prevent alerting authorities prior to or while they commit a crime. Furthermore, even if an offender sets off an alert, an agency may not respond immediately. Response time depends on the alert protocols established by the agency and factors such as staffing and resources, as discussed later in this report. Although OTSs may not deter or prevent all offenders from recidivating, officials from 1 agency emphasized the important role GPS location data can play in providing evidence to solve crimes. Understanding key aspects of how offender tracking programs operate is particularly important for victims. Representatives from the victims’ rights organizations we met with explained that victims should understand the limitations of the technology so they do not develop a false sense of security. The draft guide contains a section on managing media relations to inform the public of the agency’s mission, policies, and practices. It advises agencies to provide proactive updates on the program and have a plan to communicate to the media in the event that a critical incident occurs. Response protocols. Officials from the agencies we met with told us that it was challenging to develop appropriate response protocols that balance the likelihood of risk to public safety with available resources. 
Officials reported that alerts for loss of GPS, cellular communications, and low battery can occur frequently, even when the offender has no intention of circumventing tracking. Responding to all such alerts can overwhelm officers, according to officials with whom we met. To help reduce officers’ alert workload, 1 agency we met with set up its OTS to generate an exclusion zone alert only after multiple consecutive location points were collected within an exclusion zone. The officials explained that this reduced the number of alerts caused by inaccurate location data and situations where the offender was driving by an exclusion zone. On the other hand, reducing the number of alerts officers receive may increase the risk that an offender will be able to circumvent tracking or commit a new crime. One victims’ rights group representative noted that an agency can have the best OTS technology available, but it will not help protect the public if the agency does not use or respond to the data it generates. Critical incidents in which offenders with a GPS tracking device have committed serious crimes, including rape and murder, have caused some agencies to reassess how they respond to alerts and oversee their programs. For example, officials from 1 agency’s regional office decided to receive tamper alert notifications only after the device had been in a tamper status for 5 minutes. The 5-minute time period was chosen to help prevent alerts not indicative of a violation, such as frequent impact to the device as a result of the offender’s work environment. However, this delayed notification was inconsistent with the agency’s national policy and resulted in one offender being able to generate a series of tamper alerts over several weeks that lasted less than 5 minutes. In this case, an officer did not receive alert notifications and did not investigate the matter. 
This offender subsequently pleaded guilty to raping a child and killing the child’s mother after removing his tracking device. Following this incident, the agency’s national office investigated the supervision of the offender and reaffirmed the importance of receiving immediate notifications for and responding to all tamper alerts.

In recognition of the importance of establishing appropriate response protocols, the draft guide includes example protocols for responding to an inclusion zone violation and a low battery alert. The draft guide also highlights a number of factors that agencies should take into consideration when determining how to respond to alerts. For example, the draft guide advises agencies to consider the offender’s conviction type, level of risk, and whether there are victims who should be notified. The draft guide also advises agencies to consider their available resources when determining who will be notified of alerts and when. Specifically, agencies should determine if they are able to respond to alerts 24 hours a day, 7 days a week, and whether they can use a vendor to monitor or respond to alerts prior to agency staff being notified. Figure 8 shows examples of two alert response approaches—one in which all alerts are received by an officer and one in which a monitoring center reviews alerts to determine whether an officer should be notified.

Workload. Implementing an OTS program can create workload challenges. For example, officials we met with said that they have experienced high or unpredictable officer caseloads and the need for overtime to respond to alerts 24 hours a day, 7 days a week. The draft guide asserts that OTS programs need to have sufficient staffing to meet the increased workload demands. It also states that failing to adequately staff an OTS program can lead to officer burnout, unanticipated overtime expenses, high turnover rates, and protests from collective bargaining groups.
The draft guide provides information on the multiple new duties that OTS programs may require and that agencies should consider when making decisions about the size and objectives of their program. These duties may include, for example, orienting the offender to program rules and conditions, installing offender tracking equipment, routinely inspecting the equipment to ensure the offender has not tampered with it, responding to alerts, and reviewing location tracking data. The draft guide also provides information on different approaches and considerations for addressing workload issues. For example, if there is a large enough offender population, the draft guide states that a specialized workforce for offender tracking could result in efficiencies. Hours of operation are another consideration. According to the draft guide, agencies should determine whether OTS alerts should be responded to 24 hours a day, 7 days a week, or if passive tracking is a viable alternative part or all of the time. Data review requirements can also affect officers’ workloads. Reviewing all offender tracking data takes a significant amount of time, but can help officers identify patterns and deviations that warrant further investigation. Thus, the draft guide states that agencies should determine whether a review of all offender tracking data is needed or if responding to alerts is sufficient to achieve program goals. Further, agencies can contract with vendors to provide various levels of services, including training, installing and inspecting equipment, responding to certain alerts, dispatching alerts to criminal justice officers, and data analysis. The draft guide advises agencies to take into account both program objectives and stakeholder expectations when determining what approach to take.

Other considerations.
In addition to addressing the challenges raised by officials from the agencies with whom we met, the draft guide also discusses a number of other issues agencies should consider when implementing an offender tracking program. For example, the draft guide provides information on common procurement processes and what to look for in a vendor. It also addresses training issues and provides information on establishing contractual requirements for the vendor to provide training, as well as considerations for training content, format, and frequency. Furthermore, the draft guide discusses several OTS data considerations, including managing data that are evidence related to a crime and data retention issues, such as the data format, how long the data will be kept, and who will have access to the data. Another consideration is measuring offender tracking program outcomes. The draft guide advises that the appropriate approach to measuring success will be determined by the objectives of the program, which can range from reducing overcrowding in correctional institutions to enhancing public safety.

Cellular and GPS reception can affect the OTSs’ location accuracy or ability to report location and alerts. Officials from the 10 agencies with whom we met all experienced challenges with cellular and GPS signal reception in certain areas of their jurisdictions. The draft guide provides information and guidance for how to mitigate these challenges.

Cellular coverage. OTSs rely on cellular communications to transmit location data; thus an agency will not be able to determine an offender’s location in near real time while he or she is in an area with insufficient cellular coverage. Officials from all 10 criminal justice agencies stated that there are areas in their jurisdictions that lack sufficient cellular coverage to allow devices to perform as designed.
The draft guide suggests that agencies inquire about the cellular providers that vendors use for their equipment and test the devices prior to making a final procurement decision. If cellular coverage is limited, the draft guide states that one option is to use passive tracking, where the location and alert status data are transmitted to the agency through a landline at a predetermined interval, usually once a day.

GPS signal reception. Signals from a minimum of three GPS satellites are required to calculate location, and the greater the number of satellite signals received, the more accurate the location will be. As with cellular coverage, officials from all 10 criminal justice agencies we met with stated that there are areas in their jurisdictions where their OTSs lose or have compromised GPS signal reception. The draft guide provides information on factors that can affect GPS signal reception and cause inaccurate location data—often referred to as GPS drift—to help agencies understand the limitations of OTSs. Specifically, the draft guide notes that structures, foliage, cloud cover, and natural land formations such as canyons can block GPS signals. In addition, buildings or bodies of water can create a phenomenon known as multipath, where the GPS signal is reflected off one or more surfaces prior to reaching the tracking device. Because GPS calculations usually assume that a signal follows a straight line to the tracking device, multipath reflections can significantly affect the accuracy of the location data. The draft guide also provides information on OTS features that can help mitigate GPS signal reception issues that agencies can consider and test when making equipment selection decisions. For example, OTSs with antennas that can track more satellite signals will be less subject to drift and will have greater location accuracy.
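The consecutive-point filtering approach that one agency used to reduce false alerts from drifted location data can be sketched in a few lines. This is a simplified illustration only, not drawn from the draft guide or any vendor's system; the function names, circular zone geometry, and three-point threshold are all assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude fixes."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of 6,371 km

def exclusion_zone_alert(fixes, zone_center, zone_radius_m, consecutive_required=3):
    """Raise an exclusion zone alert only after N consecutive fixes fall inside
    the zone, so that a single drifted or multipath-reflected point, or a brief
    drive past the zone, does not trigger an alert on its own."""
    streak = 0
    for lat, lon in fixes:
        if haversine_m(lat, lon, zone_center[0], zone_center[1]) <= zone_radius_m:
            streak += 1
            if streak >= consecutive_required:
                return True
        else:
            streak = 0  # any fix outside the zone resets the count
    return False
```

A real OTS would also weigh fix quality (number of satellites tracked, reported accuracy) before counting a point toward the streak, and, as the report notes, a longer streak requirement trades fewer false alerts against slower detection of genuine violations.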
Table 1 provides further information on different OTS features discussed in the draft guide that can help mitigate GPS signal reception issues. We provided a draft of this report to DOJ, DHS, and AOUSC for their review and comment. None of the agencies provided written comments. DHS and AOUSC provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Attorney General, the Director of the Administrative Office of the U.S. Courts, the Secretary of the Department of Commerce, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The National Institute of Justice’s (NIJ) draft standard addresses common operational and circumvention detection needs. Table 2 summarizes some of the operational and circumvention requirements in the draft standard. The agencies’ requirements were sometimes more or less rigorous than those in the standard. Furthermore, in some instances, agencies did not define a performance requirement for a specific operational or circumvention detection need. David C. Maurer, (202) 512-8777 or maurerd@gao.gov. In addition to the contact named above, Joseph P. Cruz (Assistant Director), Pamela Aker, David Alexander, Willie Commons III, Susan Czachor, Dominick Dale, Eric Hauswirth, Heather May, Linda Miller, and Michael Tropauer made key contributions to this report.
OTS is an electronic monitoring technology consisting of hardware, such as an ankle bracelet, used for collecting Global Positioning System (GPS) signals to determine an individual's location, and software for analyzing data collected from the hardware device. While demand for GPS-based electronic monitoring devices has increased, there are currently no standards that OTS devices are required to meet. In 2009, NIJ initiated development of a voluntary OTS standard and companion guide, which is expected to be published no later than March 2016. GAO was asked to review NIJ's approach for developing the OTS standard. This report examines the extent to which (1) NIJ collaborated with stakeholders in developing the standard, and (2) the standard and guide address stakeholder needs and challenges. GAO analyzed NIJ's draft OTS standard, companion guide, and standard development process. To obtain perspectives on the standard development process and OTS needs and challenges, GAO interviewed stakeholders including NIJ officials, practitioners and experts who developed the standard, criminal justice and victims' associations, manufacturers, and officials from a nongeneralizable sample of 10 criminal justice agencies that employ OTS. GAO selected the 10 criminal justice agencies based upon a combination of factors, including ensuring a range of federal, state, and local jurisdictions, among other things. The National Institute of Justice (NIJ) collaborated with a variety of criminal justice and technical experts to develop a draft standard for offender tracking systems (OTS), but earlier involvement of manufacturers could have expedited its development. For example, the committee that developed the draft standard included practitioners spanning all levels of government and program areas such as pretrial, probation, and parole services and technical experts with backgrounds in developing test methods for performance standards. 
NIJ invited manufacturers to provide input through a workshop held in May 2011 and two subsequent public comment periods. GAO found that earlier and ongoing involvement of OTS manufacturers could have better informed and facilitated development of the OTS standard by, for example, providing insights on OTS capabilities and limitations at the outset. Coordination has improved since 2012, and manufacturers' major concerns have been addressed.

Figure: Global Positioning System (GPS) Offender Tracking System

NIJ's draft OTS standard and guide address many common stakeholder needs and challenges. The draft standard includes requirements for common operational and circumvention detection needs. For example, requirements for location accuracy and the ability to provide alerts when an offender tries to remove the device or is at a prohibited location are included in the standard. In addition, the draft guide provides information and guidance related to challenges identified by the criminal justice agencies GAO met with as well as other considerations for implementing an OTS program. These challenges include misconceptions among the public and victims that OTS allows agencies to prevent bad behavior before it happens; developing appropriate protocols to respond to OTS alerts, such as those for tampering with the tracking device; and workload issues, such as whether there is sufficient staff or resources to respond to OTS alerts 24 hours a day, 7 days a week. In recognition of the range of agencies, resources, and objectives of offender tracking, the guide provides information and guidance, and does not offer “one size fits all” solutions.
USPS faces a dire financial situation and does not have sufficient revenues to cover its expenses, putting its mission of providing prompt, reliable, and efficient universal services to the public at risk. USPS continues to incur operating deficits that are unsustainable, has not made required payments of $11.1 billion to prefund retiree health benefit liabilities, and has reached its $15 billion borrowing limit. Moreover, USPS lacks liquidity to maintain its financial solvency or finance needed capital investment. As presented in table 1, since fiscal year 2006, USPS has achieved about $15 billion in savings and reduced its workforce by about 168,000, while also experiencing a 25 percent decline in total mail volume and net losses totaling $40 billion. As a result of significant declines in volume and revenue, USPS reported that it took unprecedented actions to reduce its costs by $6.1 billion in fiscal year 2009. Also, in fiscal year 2009, a cash shortfall necessitated congressional action to reduce USPS’s mandated payment to prefund retiree health benefits from $5.4 billion to $1.4 billion. In 2011, USPS’s $5.5 billion required retiree health benefit payment was delayed until August 1, 2012. USPS missed that payment as well as the $5.6 billion that was due by September 30, 2012. USPS continues to face significant decreases in mail volume and revenues as online communication and e-commerce expand. While remaining among USPS’s most profitable products, both First-Class Mail and Standard Mail volumes have declined in recent years as illustrated in figure 1. First-Class Mail—which is highly profitable and generates the majority of the revenues used to cover overhead costs—declined 33 percent since it peaked in fiscal year 2001, and USPS projects a continued decline through fiscal year 2020. Standard Mail (primarily advertising) has declined 23 percent since it peaked in fiscal year 2007, and USPS projects that it will remain roughly flat through fiscal year 2020. 
Standard Mail is profitable overall, but it takes about three pieces of Standard Mail, on average, to equal the profit from the average piece of First-Class Mail. First-Class Mail and Standard Mail also face competition from electronic alternatives, as many businesses and consumers have moved to electronic payments over the past decade in lieu of using the mail to pay bills. USPS reported that for the first time, in fiscal year 2010, fewer than 50 percent of household bills were paid by mail. In addition to lost mail volume and revenue, USPS also has incurred debt, workers’ compensation, and unfunded benefit liabilities, such as pension and retiree health benefits, that totaled $96 billion at the end of fiscal year 2012. Table 2 shows the amounts of these liabilities over the last 6 fiscal years. One of these liabilities, USPS’s debt to the U.S. Treasury, increased over this period from $4 billion to its statutory limit of $15 billion. Thus, USPS can no longer borrow to maintain its financial solvency or finance needed capital investment. USPS continues to incur unsustainable operating deficits. In this regard, the USPS Board of Governors recently directed postal management to accelerate restructuring efforts to achieve greater savings. These selected USPS liabilities increased from 83 percent of revenues in fiscal year 2007 to 147 percent of revenues in fiscal year 2012 as illustrated in figure 2. This trend demonstrates how USPS liabilities have become a large and growing financial burden. USPS’s dire financial condition makes paying for these liabilities highly challenging. In addition to reaching its limit in borrowing authority in fiscal year 2012, USPS did not make required prefunding payments of $11.1 billion for fiscal year 2011 and 2012 retiree health benefits. At the end of fiscal year 2012, USPS had $48 billion in unfunded retiree health benefit liabilities. Looking forward, USPS has warned that it suffers from a severe lack of liquidity. 
As USPS has reported, “Even with some regulatory and legislative changes, our ability to generate sufficient cash flows from current and future management actions to increase efficiency, reduce costs, and generate revenue may not be sufficient to meet all of our financial obligations.” For this reason, USPS has stated that it continues to lack the financial resources to make its annual retiree health benefit prefunding payment. USPS has also reported that in the short term, should circumstances leave it with insufficient liquidity, it may need to prioritize payments to its employees and suppliers ahead of those to the federal government. For example, near the end of fiscal year 2011, in order to maintain its liquidity USPS temporarily halted its regular contributions for the Federal Employees Retirement System (FERS) that are supposed to cover the cost of benefits being earned by current employees. However, USPS has since made up those missed FERS payments. USPS’s statements about its liquidity raise the issue of whether USPS will need additional financial help to remain solvent while it restructures and, more fundamentally, whether it can remain financially self-sustainable in the long term. USPS has also raised the concern that its ability to negotiate labor contracts is essential to maintaining financial stability and that failure to do so could have significant adverse consequences on its ability to meet its financial obligations. Most USPS employees are covered by collective bargaining agreements with four major labor unions which have established salary increases, cost-of-living adjustments, and the share of health insurance premiums paid by employees and USPS. When USPS and its unions are unable to agree, binding arbitration by a third-party panel is used to establish agreement. There is no statutory requirement for USPS’s financial condition to be considered in arbitration. 
In 2010, we reported that the time has come to reexamine USPS’s 40-year-old structure for collective bargaining, noting that wages and benefits comprise 80 percent of its costs at a time of escalating losses and a dramatically changed competitive environment. We also reported that Congress should consider revising the statutory framework for collective bargaining to ensure that USPS’s financial condition be considered in binding arbitration. USPS has several initiatives to reduce costs and increase its revenues to curtail future net losses. In February 2012, USPS announced a 5-year business plan with the goal of achieving $22.5 billion in annual cost savings by the end of fiscal year 2016. USPS has begun implementing this plan, which includes initiatives to save:

- $9 billion in mail processing, retail, and delivery operations, including consolidation of the mail processing network, and restructuring retail and delivery operations;
- $5 billion in compensation and benefits and non-personnel initiatives; and
- $8.5 billion through proposed legislative changes, such as moving to a 5-day delivery schedule and eliminating the obligation to prefund USPS’s retiree health benefits.

Simultaneously, USPS’s 5-year plan would further reduce the overall size of the postal workforce by roughly 155,000 career employees, with many of those reductions expected to result from attrition. According to the plan, half of USPS’s career employees are currently eligible for full or early retirement. Reducing its workforce is vital because, as noted, compensation and benefits costs continue to generate about 80 percent of USPS’s expenses. Compensation alone (primarily wages) exceeded $36 billion in fiscal year 2012, or close to half of its costs. Compensation costs decreased by $542 million in fiscal year 2012 as USPS offered separation incentives to postmasters and mail handlers to encourage more attrition.
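As a quick arithmetic check, the three savings components listed in USPS's 5-year plan account for its full annual target. The sketch below is illustrative only; the figures come from the plan summary above.

```python
# Components of USPS's 5-year plan savings target, in billions of dollars,
# as listed in the plan summary above.
operations = 9.0     # mail processing, retail, and delivery operations
compensation = 5.0   # compensation, benefits, and non-personnel initiatives
legislative = 8.5    # proposed legislative changes (5-day delivery, prefunding relief)

total = operations + compensation + legislative
print(f"${total} billion")  # prints "$22.5 billion", the plan's stated annual target
```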
This fiscal year, separation incentives were offered to employees represented by the American Postal Workers Union (e.g., mail processing and retail clerks) to encourage further attrition as processing and retail operations are redesigned and consolidated to more closely correspond with workload.

To accelerate implementation of its plan, in early February 2013, USPS announced plans to transition to a new delivery schedule by early August 2013 that would limit its delivery of mail on Saturdays to mail addressed to Post Office Boxes and to packages. USPS’s operational plan for the new delivery schedule anticipates a combination of employee reassignment and attrition to generate an expected annual cost savings of about $2 billion once its plan is fully implemented. Over the past several years, USPS has advocated shifting to a 5-day delivery schedule for both mail and packages. According to USPS, however, recent strong growth in package delivery—as we will discuss in more detail below—and projections for continued strong package growth throughout the coming decade led to a revised approach to maintain package delivery 6 days per week.

Another key area of potential savings included in the 5-year plan focused on reducing compensation and benefit costs. USPS’s largest benefit payments in fiscal year 2012 included:

- $7.8 billion in current-year health insurance premiums for employees, retirees, and their survivors (USPS’s health benefit payments would have been $13.4 billion if USPS had paid the required $5.6 billion retiree health prefunding payment);
- $3.0 billion in FERS pension funding contributions;
- $1.8 billion in social security contributions;
- $1.4 billion in workers’ compensation payments; and
- $1.0 billion in Thrift Savings Plan contributions.
USPS has proposed administering its own health care plan for its employees and retirees and withdrawing from the Federal Employee Health Benefits (FEHB) program so that it can better manage its costs and achieve significant savings, which USPS has estimated could be over $7 billion annually. About $5.5 billion of the estimated savings would come from eliminating the retiree health benefit prefunding payment and another $1.5 billion would come from reducing health care costs. We are currently reviewing USPS’s proposal including its potential financial effects on participants and USPS. To increase revenue, USPS is working to increase use of shipping and package services. With the continued increase in e-commerce, USPS projects that shipping and package volume will grow by 7 percent in fiscal year 2013, after increasing 7.5 percent in fiscal year 2012. Revenue from these two product categories represented about 18 percent of USPS’s fiscal year 2012 operating revenue. However, USPS does not expect that continued growth in shipping and package services will fully offset the continued decline of revenue from First-Class Mail and other products. We recently reported that USPS is pursuing 55 initiatives to generate revenue. Forty-eight initiatives are extensions of existing lines of postal products and services, such as offering Post Office Box customers a suite of service enhancements (e.g., expanded lobby hours and earlier pickup times) at selected locations and increasing public awareness of the availability of postal services at retail stores. The other seven initiatives included four involving experimental postal products, such as prepaid postage on the sale of greeting cards, and three that were extensions of nonpostal services that are not directly related to mail delivery. 
USPS offers 12 nonpostal services, including Passport Photo Services and the sale of advertising to support change-of-address processing, which together generated a net income of $141 million in fiscal year 2011. USPS has also increased its use of negotiated service agreements that offer competitively priced contracts as well as promotions with temporary rate reductions that are targeted to retain mail volume. We are currently reviewing USPS’s use of negotiated service agreements.

As USPS attempts to reduce costs and increase revenue, its mission to provide universal service continues. USPS’s network serves more than 152 million residential and business delivery points. In May 2011, we reported that many of USPS’s delivery vehicles were reaching the end of their expected 24-year operational life and that USPS’s financial challenges pose a significant barrier to replacing or refurbishing its fleet. As a result, USPS’s approach has been to maintain the delivery fleet until USPS determines how to address longer term needs, but USPS has been increasingly incurring costs for unscheduled maintenance because of breakdowns. The eventual replacement of its vehicle delivery fleet represents yet another financial challenge facing USPS. We are currently reviewing USPS’s investments in capital assets.

We have issued a number of reports on strategies and options for USPS to improve its financial situation by optimizing its network and restructuring the funding of its pension and retiree health benefit liabilities. To assist Congress in addressing issues related to reducing USPS’s expenses, we have issued several reports analyzing USPS’s initiatives to optimize its mail processing, delivery, and retail networks. In April 2012, we issued a report related to USPS’s excess capacity in its network of 461 mail processing facilities. We found that USPS’s mail processing network exceeds what is needed for declining mail volume.
USPS proposed consolidating its mail processing network, a plan based on proposed changes to overnight delivery service standards for First-Class Mail and Periodicals. Such a change would have enabled USPS to reduce an excess of 35,000 positions and 3,000 pieces of mail equipment, among other things. We found, however, that stakeholder issues and other challenges could prevent USPS from implementing its plan for consolidating its mail processing network. Although some business mailers and Members of Congress expressed support for consolidating mail processing facilities, other mailers, Members of Congress, affected communities, and employee organizations raised concerns. Key issues raised by business mailers were that closing facilities could increase their transportation costs and decrease service. Employee associations were concerned that reducing service could result in a greater loss of mail volume and revenue that could worsen USPS’s financial condition. We reported that if Congress preferred to retain the current delivery service standards and associated network, decisions will need to be made about how USPS’s costs for providing these services will be paid.

In March 2011, we reported on USPS’s proposal to reduce costs by moving from a 6-day to a 5-day delivery schedule. USPS delivers to more than 152 million addresses. USPS also estimated that 5-day delivery would result in minimal mail volume decline. We found that the extent to which USPS can achieve cost savings from this change and mitigate volume and revenue loss depends on how well and how quickly USPS can realign its operations, workforce, and networks; maintain service quality; and communicate with stakeholders. USPS has spent considerable time and resources developing plans to facilitate this transition.
Nevertheless, risks and uncertainties remain, such as how quickly USPS can realign its workforce through attrition; how effectively it can modify certain finance systems; and how mailers will respond to this change in service.

In April 2012, we reported that USPS has taken several actions to restructure its retail network—which included almost 32,000 postal-managed facilities in fiscal year 2012—through reducing its workforce and its footprint while expanding retail alternatives. We also reported on concerns customers and other stakeholders have expressed regarding the impact of post office closures on communities, the adequacy of retail alternatives, and access to postal services, among others. We discussed challenges USPS faces, such as legal restrictions and resistance from some Members of Congress and the public, that have limited USPS’s ability to change its retail network by moving postal services to more nonpostal-operated locations (such as grocery stores), similar to what other nations have done. The report concluded that USPS cannot support its current level of services and operations from its current revenues. We noted that policy issues remain unresolved related to what level of retail services USPS should provide, how the cost of these services should be paid, and how USPS should optimize its retail network.

In November 2011, we reported that USPS had expanded access to its services through alternatives to post offices in support of its goals to improve service and financial performance and recommended that USPS develop and implement a plan with a timeline to guide efforts to modernize USPS’s retail network, and that addresses both traditional post offices and retail alternatives as well.
We added that the plan should also include: (1) criteria for ensuring the retail network continues to provide adequate access for customers as it is restructured; (2) procedures for obtaining reliable retail revenue and cost data to measure progress and inform future decision making; and (3) a method to assess whether USPS’s communications strategy is effectively reaching customers, particularly those customers in areas where post offices may close. In November 2012, we reported that although contract postal units (CPUs)—independent businesses compensated by USPS to sell most of the same products and services as post offices at the same price—have declined in number, they have supplemented post offices by providing additional locations and hours of service. More than 60 percent of CPUs are in urban areas where they can provide customers nearby alternatives when they face long lines at post offices. In fiscal year 2011, after compensating CPUs, USPS retained 87 cents of every dollar of CPU revenue. We found that limited interest from potential partners, competing demands on USPS staff resources, and changes to USPS’s retail network posed potential challenges to USPS’s use of CPUs. To assist Congress in addressing issues related to funding USPS’s liabilities, we have also issued several reports that address USPS’s liabilities, including its retiree health benefits, pension, and workers’ compensation. In December 2012, we reported that USPS’s deteriorating financial outlook will make it difficult to continue the current schedule for prefunding postal retiree health benefits in the short term, and possibly to fully fund the remaining $48 billion unfunded liability over the remaining decades of the statutorily required actuarial funding schedule. However, we also reported that deferring funding could increase costs for future ratepayers and increase the possibility that USPS may not be able to pay for some or all of its liability. 
We stated that failure to prefund these benefits is a potential concern. Making affordable prefunding payments would protect the viability of USPS by not saddling it with bills later on, when employees are already retired and no longer helping it generate revenue; it can also make the promised benefits more secure. Thus, as we have previously reported, we continue to believe that it is important for USPS to prefund these benefits to the maximum extent that its finances permit. We also recognize that without congressional or further USPS actions to align revenue and costs, USPS will not have the finances needed to make annual payments and reduce its long term retiree health unfunded liability. No funding approach will be viable unless USPS can make the required payments. We reported on options with regard to the FERS surplus, noting the degree of uncertainty inherent in this estimate and reporting on the implications of alternative approaches to accessing this surplus. The estimated FERS surplus decreased from 2011 to 2012, and at the end of fiscal year 2012, USPS had an estimated FERS surplus of $3.0 billion and an estimated CSRS deficit of $18.7 billion. In 2012, we reported on workers’ compensation benefits paid to both postal and nonpostal beneficiaries under the Federal Employees’ Compensation Act (FECA). USPS has large FECA program costs. At the time of their injury, 43 percent of FECA beneficiaries in 2010 were employed by USPS. FECA provides benefits to federal workers who sustained injuries or illnesses while performing federal duties, and benefits are not taxed or subject to age restrictions. Various proposals to modify FECA’s benefit levels have been advanced. At the request of Congress, we have provided information to assist it in making decisions about the FECA program. 
In summary, to improve its financial situation, USPS needs to reduce its expenses to close the gap between revenue and expenses, repay its outstanding debt, continue funding its retirement obligations, and increase capital for investment, such as replacing its aging vehicle fleet. In addition, as noted in prior reports, congressional action is needed to (1) modify USPS’s retiree health benefit payments in a fiscally responsible manner; (2) facilitate USPS’s ability to align costs with revenues based on changing workload and mail use; and (3) require that any binding arbitration resulting from collective bargaining take USPS’s financial condition into account. As we have continued to underscore, Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS’s financial viability. In previous reports, we have provided strategies and options, to both reduce costs and enhance revenues, that Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS’s ability to reduce costs and improve efficiency; we have also reported on implications for addressing USPS’s benefit liabilities. If Congress does not act soon, USPS could be forced to take more drastic actions that could have disruptive, negative effects on its employees, customers, and the availability of reliable and affordable postal services. Chairman Carper, Ranking Member Coburn, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information about this statement, please contact Lorelei St. James, Director, Physical Infrastructure, at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. 
In addition to the contact named above, Frank Todisco, Chief Actuary; Samer Abbas, Teresa Anderson, Barbara Bovbjerg, Kyle Browning, Colin Fallon, Imoni Hampton, Kenneth John, Kim McGatlin, Amelia Shachoy, Andrew Sherrill, and Crystal Wesco made important contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
USPS is in a serious financial crisis as its declining mail volume has not generated sufficient revenue to cover its expenses and financial obligations. First-Class Mail—which is highly profitable and generates the majority of the revenues used to cover overhead costs—has declined 33 percent since peaking in fiscal year 2001, and USPS projects a continued decline through fiscal year 2020. Declining mail volume is putting USPS's mission of providing prompt, reliable, and efficient universal services to the public at risk. This testimony discusses (1) USPS's financial condition, (2) initiatives to reduce costs and increase revenues, and (3) actions needed to improve USPS's financial situation. The testimony is based primarily on our past and ongoing work and our analysis of USPS's recent financial results. In previous reports, GAO has provided strategies and options that USPS and Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS's ability to reduce costs and improve efficiency. GAO has also stated that Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS's financial viability. The U.S. Postal Service (USPS) continues to incur unsustainable operating deficits, has not made required payments of $11.1 billion to prefund retiree health benefits, and has reached its $15 billion borrowing limit. Thus far, USPS has been able to operate within these constraints, but now faces a critical shortage of liquidity that threatens its financial solvency and ability to finance needed capital investment. USPS has experienced an almost 25 percent decline in total mail volume and net losses totaling $40 billion since fiscal year 2006. While USPS achieved about $15 billion in savings and reduced its workforce by about 168,000 over this period, its debt and unfunded benefit liabilities grew to $96 billion by the end of fiscal year 2012. 
USPS expects mail volume and revenue to continue decreasing as online bill communication and e-commerce expand. USPS has reported on several initiatives to reduce costs and increase its revenues to curtail future net losses. To reduce costs, USPS announced a 5-year business plan in February 2012 with the goal of achieving $22.5 billion in annual cost savings by the end of fiscal year 2016. USPS has begun implementing this plan, which includes making changes to its mail processing, retail, and delivery networks and redesigning its workforce in line with changing mail volume. To achieve greater savings, USPS's Board of Governors recently directed postal management to accelerate these efforts. To increase revenue, USPS is pursuing 55 initiatives. While USPS expects shipping and package services to continue to grow, such growth is not expected to fully offset declining mail volume. USPS needs to reduce its expenses to avoid even greater financial losses, repay its outstanding debt, continue funding its retirement obligations, and increase capital for investment, including replacing its aging vehicle fleet. Also, Congress needs to act to (1) modify USPS's retiree health benefit payments in a fiscally responsible manner; (2) facilitate USPS's ability to align costs with revenues based on changing workload and mail use; and (3) require that any binding arbitration resulting from collective bargaining take USPS's financial condition into account. No one action in itself will address USPS's financial condition; we have previously recommended a comprehensive package of actions. If Congress does not act soon, USPS could be forced to take more drastic actions that could have disruptive, negative effects on its employees, customers, and the availability of postal services. USPS also reported that it would prioritize payments to employees and suppliers ahead of those to the federal government.
In 2000, a report of the Surgeon General noted that tooth decay is the most common chronic childhood disease. Left untreated, the pain and infections caused by tooth decay may lead to problems in eating, speaking, and learning. Tooth decay is almost completely preventable, and the pain, dysfunction, or, on extremely rare occasions, death resulting from dental disease can be avoided (see fig. 1). Preventive dental care can make a significant difference in health outcomes and has been shown to be cost-effective. For example, a 2004 study found that average dental-related costs for low-income preschool children who had their first preventive dental visit by age 1 were less than one-half ($262 compared to $546) of average costs for children who received their first preventive visit at age 4 through 5. The American Academy of Pediatric Dentistry (AAPD) recommends that each child see a dentist when his or her first tooth erupts and no later than the child’s first birthday, with subsequent visits occurring at 6-month intervals or more frequently if recommended by a dentist. The early initial visit can establish a “dental home” for the child, defined by AAPD as the ongoing relationship with a dental provider who can ensure comprehensive and continuously accessible care. Comprehensive dental visits can include both clinical assessments, such as for tooth decay and sealants, and appropriate discussion and counseling for oral hygiene, injury prevention, and speech and language development, among other topics. Because resistance to tooth decay is determined in part by genetics, eating patterns, and oral hygiene, early prevention is important. Delaying the onset of tooth decay may also reduce the long-term risk of more serious decay by postponing exposure to caries risk factors until the child can better control his or her health behaviors. 
Recognizing the importance of good oral health, HHS in 1990 and again in 2000 established oral health goals as part of its Healthy People 2000 and 2010 initiatives. These include objectives related to oral health in children, for example, reducing the proportion of children with untreated tooth decay. One objective of Healthy People 2010 relates to the Medicaid population: to increase the proportion of low-income children and adolescents under the age of 19 who receive any preventive dental service in the past year, from 25 percent in 1996 to 66 percent in 2010. Medicaid (a joint federal-state program that provides health care coverage for low-income individuals and families; pregnant women; and aged, blind, and disabled people) provided health coverage for an estimated 20.1 million children aged 2 through 18 in federal fiscal year 2005. The states operate their Medicaid programs within broad federal requirements and may contract with managed care organizations to provide Medicaid benefits or use other forms of managed care, when approved by CMS. CMS estimates that as of June 30, 2006, about 65 percent of Medicaid beneficiaries received benefits through some form of managed care. State Medicaid programs must cover some services for certain populations under federal law. For instance, under Medicaid’s EPSDT benefit, states must provide dental screening, diagnostic, preventive, and related treatment services for all eligible Medicaid beneficiaries under age 21. Children in Medicaid aged 2 through 18 often experience dental disease and often do not receive needed dental care, and although receipt of dental care has improved somewhat in recent years, the extent of dental disease for most age groups has not declined. Information from NHANES surveys from 1999 through 2004 showed that about one in three children ages 2 through 18 in Medicaid had untreated tooth decay, and one in nine had untreated decay in three or more teeth. 
Compared to children with private health insurance, children in Medicaid were substantially more likely to have untreated tooth decay and to be in urgent need of dental care. MEPS surveys conducted in 2004 and 2005 found that almost two in three children in Medicaid aged 2 through 18 had not received dental care in the previous year and that one in eight never sees a dentist. Children in Medicaid were less likely to have received dental care than privately insured children, although they were more likely to have received care than children without health insurance. Children in Medicaid also fared poorly when compared to national benchmarks, as the percentage of children in Medicaid ages 2 through 18 who received any dental care— 37 percent—was far below the Healthy People 2010 target of having 66 percent of low-income children under age 19 receive a preventive dental service. MEPS data on Medicaid children who had received dental care—from 1996 through 1997 compared to 2004 through 2005—showed some improvement for children ages 2 through 18 in Medicaid. By contrast, comparisons of recent NHANES data to data from the late 1980s and 1990s suggest that the extent that children ages 2 through 18 in Medicaid experience dental disease has not decreased for most age groups. Dental disease is a common problem for children aged 2 through 18 enrolled in Medicaid, according to national survey data (see fig. 2). NHANES oral examinations conducted from 1999 through 2004 show that about three in five children (62 percent) in Medicaid had experienced tooth decay, and about one in three (33 percent) were found to have untreated tooth decay. Close to one in nine—about 11 percent—had untreated decay in three or more teeth, which is a sign of unmet need for dental care and, according to some oral health experts, can suggest a severe oral health problem. 
Projecting these proportions to 2005 enrollment levels, we estimate that 6.5 million children in Medicaid had untreated tooth decay, with 2.2 million children having untreated tooth decay involving three or more teeth. Compared with children with private health insurance, children in Medicaid were at much higher risk of tooth decay and experienced problems at rates more similar to those without any insurance. As shown in figure 3, the proportion of children in Medicaid with untreated tooth decay (33 percent) was nearly double the rate for children who had private insurance (17 percent) and was similar to the rate for uninsured children (35 percent). These children were also more than twice as likely to have untreated tooth decay in three or more teeth than their privately insured counterparts (11 percent for Medicaid children compared to 5 percent for children with private health insurance). These disparities were consistent across all age groups we examined. According to NHANES data, more than 5 percent of children in Medicaid aged 2 through 18 had urgent dental conditions, that is, conditions in need of care within 2 weeks for the relief of symptoms and stabilization of the condition. Such conditions include tooth fractures, oral lesions, chronic pain, and other conditions that are unlikely to resolve without professional intervention. On the basis of these data, we estimate that in 2005, 1.1 million children aged 2 through 18 in Medicaid had conditions that warranted seeing a dentist within 2 weeks. Compared to children who had private insurance, children in Medicaid were more than four times as likely to be in urgent need of dental care. The NHANES data suggest that the rates of untreated tooth decay for some Medicaid beneficiaries could be about three times more than national health benchmarks. 
For example, the NHANES data showed that 29 percent of children in Medicaid aged 2 through 5 had untreated decay, which compares unfavorably with the Healthy People 2010 target for untreated tooth decay of 9 percent of children aged 2 through 4. Most children in Medicaid do not visit the dentist regularly, according to 2004 and 2005 nationally representative MEPS data (see fig. 4). According to these data, nearly two in three children in Medicaid aged 2 through 18 had not received any dental care in the previous year. Projecting these proportions to 2005 enrollment levels, we estimate that 12.6 million children in Medicaid had not seen a dentist in the previous year. In reporting on trends in dental visits of the general population, AHRQ reported in 2007 that about 31 percent of poor children (family income less than or equal to the federal poverty level) and 34 percent of low-income children (family income above 100 percent through 200 percent of the federal poverty level) had a dental visit during the year. Survey data also showed that about one in eight children (13 percent) in Medicaid reportedly never see a dentist. MEPS survey data also show that many children in Medicaid were unable to access needed dental care. Survey participants reported that about 4 percent of children aged 2 through 18 in Medicaid were unable to get needed dental care in the previous year. Projecting this percentage to estimated 2005 enrollment levels, we estimate that 724,000 children aged 2 through 18 in Medicaid could not obtain needed care. Regardless of insurance status, most participants who said a child could not get needed dental care said they were unable to afford such care. However, 15 percent of children in Medicaid who had difficulty accessing needed dental care reportedly were unable to get care because the provider refused to accept their insurance plan, compared to only 2 percent of privately insured children. 
Children enrolled in Medicaid were less likely to have received dental care than privately insured children, but they were more likely to have received dental care than children without health insurance. (See fig. 5.) Survey data from 2004 through 2005 showed that about 37 percent of children in Medicaid aged 2 through 18 had visited the dentist in the previous year, compared with about 55 percent of children with private health insurance, and 26 percent of children without insurance. The percentage of children in Medicaid who received any dental care—37 percent—was far below the Healthy People 2010 target of having 66 percent of low-income children under age 19 receive a preventive dental service. The NHANES data from 1999 through 2004 also provide some information related to the receipt of dental care. The presence of dental sealants, a form of preventive care, is considered to be an indicator that a person has received dental care. About 28 percent of children in Medicaid had at least one dental sealant, according to 1999 through 2004 NHANES data. In contrast, about 40 percent of children with private insurance had a sealant. However, children in Medicaid were more likely to have sealants than children without health insurance (about 20 percent). While comparisons of past and more recent survey data suggest that a larger proportion of children in Medicaid had received dental care in recent surveys, the extent to which children in Medicaid experience dental disease has not decreased. A comparison of NHANES results from 1988 through 1994 with results from 1999 through 2004 showed that the rates of untreated tooth decay were largely unchanged for children in Medicaid aged 2 through 18: 31 percent of children had untreated tooth decay in 1988 through 1994, compared with 33 percent in 1999 through 2004 (see fig. 6). The proportion of children in Medicaid who experienced tooth decay increased from 56 percent in the earlier period to 62 percent in more recent years. 
This increase appears to be driven by younger children, as the 2 through 5 age group had substantially higher rates of dental disease in the more recent time period, 1999 through 2004. This preschool age group experienced a 32 percent rate of tooth decay in the 1988 through 1994 time period, compared to almost 40 percent experiencing tooth decay in 1999 through 2004 (a statistically significant change). Data for adolescents, by contrast, suggest declining rates of tooth decay. Almost 82 percent of adolescents aged 16 through 18 in Medicaid had experienced tooth decay in the earlier time period, compared to 75 percent in the latter time period (although this change was not statistically significant). These trends were similar for rates of untreated tooth decay, with the data suggesting rates going up for young children, and declining or remaining the same for older groups that are more likely to have permanent teeth. According to CDC, these trends are similar for the general population of children, for which tooth decay in permanent teeth has generally declined and untreated tooth decay has remained unchanged. CDC also found that tooth decay in preschool aged children in the general population had increased in primary teeth. At the same time, indicators of receipt of dental care, including the proportion of children who had received dental care in the past year and use of sealants, have shown some improvement. Two indicators of receipt of dental care showed improvement from earlier surveys: The percentage of children in Medicaid aged 2 through 18 who received dental care in the previous year increased from 31 percent in 1996 through 1997 to 37 percent in 2004 through 2005, according to MEPS data (see fig. 7). This change was statistically significant. Similarly, AHRQ reported that the percent of children with a dental visit increased between 1996 and 2004 for both poor children (28 percent to 31 percent) and low-income children (28 percent to 34 percent). 
The percentage of children aged 6 through 18 in Medicaid with at least one dental sealant increased nearly threefold, from 10 percent in 1988 through 1994 to 28 percent in 1999 through 2004, according to NHANES data, and these changes were statistically significant. The increase in receipt of sealants may be due in part to the increased use of dental sealants in recent years, as the percentage of uninsured and insured children with dental sealants doubled over the same time period. Adolescents aged 16 through 18 in Medicaid had the greatest increase in receipt of sealants relative to other age groups. The percentage of adolescents with dental sealants was about 6 percent in the earlier time period, and 33 percent more recently. The percentage of children in Medicaid who reportedly never see a dentist remained about the same between the two time periods, with about 14 percent in 1996 through 1997 who never saw a dentist, and 13 percent in 2004 through 2005, according to MEPS data. More information on our analysis of NHANES and MEPS for changes in dental disease and receipt of dental care for children in Medicaid over time, including confidence intervals and whether changes over time were statistically significant, can be found in appendixes I and II. The information provided by nationally representative surveys regarding the oral health of our nation’s low-income children in Medicaid raises serious concerns. Measures of access to dental care for this population, such as children’s dental visits, have improved somewhat in recent surveys, but remain far below national health goals. Of even greater concern are data that show that dental disease is prevalent among children in Medicaid, and is not decreasing. Millions of children in Medicaid are estimated to have dental disease in need of treatment; in many cases this need is urgent. 
Given this unacceptable condition, it is important that those involved in providing dental care to children in Medicaid—the federal government, states, providers, and others—address the need to improve the oral health condition of these children and to achieve national oral health goals. We provided a draft of this report for comment to HHS. HHS provided written comments, which we summarize below. The text of HHS’s letter, including comments from CMS, CDC, and AHRQ, is reprinted in appendix III. HHS also provided technical comments, which we incorporated as appropriate. In commenting on the draft, CMS acknowledged the challenge of providing dental services to children in Medicaid, as well as all children nationwide, and cited a number of activities undertaken by CMS in coordination with states, such as completing 17 focused dental reviews and forming an Oral Health Technical Advisory Group. CDC commented that trends in dental caries vary by age group and for primary versus permanent teeth. CDC also noted that beginning in 2005, trained health technologists conducted basic assessments of caries experience. We revised our report to further clarify the differing trends by age groups and to acknowledge the assessments performed by health technologists. We did not analyze the data by both age and dentition (primary versus permanent teeth) due to small sample sizes; we note that the trends for the youngest and oldest age groups in the Medicaid child population that we identified are consistent with those that CDC found in the general population by age and dentition. AHRQ commented that agency staff had completed a Chartbook, not cited in our report, that summarizes dental use, expenses, dental coverage, and changes from 1996 to 2004 for the general population, and indicated it was unclear why the same analytical approach was not followed for the determination of public coverage status. 
In technical comments, AHRQ noted that their reported findings are generally comparable to GAO’s findings. We revised our report to cite AHRQ’s findings on dental services for children and to further describe our methodology. Regarding our determination of public coverage status, we did not use AHRQ’s analytical approach that describes “public coverage” because the focus of this report was on children covered by Medicaid. AHRQ’s approach did not distinguish Medicaid from other types of public coverage. We are sending copies of this report to other interested congressional committees and to the Secretary of HHS. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. The National Health and Nutrition Examination Survey (NHANES), conducted multiple times since the early 1960s by the Department of Health and Human Services’ (HHS) National Center for Health Statistics of the Centers for Disease Control and Prevention (CDC), is designed to provide nationally representative estimates of the health and nutrition status of the noninstitutionalized civilian population of the United States. NHANES provides information on civilians of all ages. Prior to 1999, three periodic surveys were conducted. Since 1999, NHANES has been conducted annually. For this study, we examined data from 1999 through 2004 and data from 1988 through 1994. We did not analyze any NHANES data after 2004 because, beginning in 2005, NHANES surveys do not include examinations by dentists for tooth decay, dental sealants, and most other oral health conditions. 
Our analysis of NHANES data focused on the oral examination of children ages 2 through 18. As part of an overall physical examination, dental examiners inspect children’s mouths and collect data on the number and condition of teeth and the condition of gums. To analyze these data, we considered three categories of children, based on their health insurance status as reported by their parents or guardians on the interview section of the survey: children with Medicaid, children with private health insurance, and children without health insurance. These categories include more than 90 percent of children who were given dental examinations as part of NHANES. We do not present results for children with other forms of government health insurance, such as TRICARE or Medicare, and we do not present results for children whose parents or guardians provided no information on their health insurance status (about 1.5 percent of children fell into this category). For the 1999 through 2004 time period, the Medicaid category includes some children enrolled in the State Children’s Health Insurance Program (SCHIP); we estimate that about 85 percent of the children for that time period were enrolled in Medicaid with the remainder enrolled in SCHIP. To assess the reliability of NHANES data, we interviewed knowledgeable officials, reviewed relevant documentation, and compared the results of our analyses to published data. We determined that the NHANES data were sufficiently reliable for the purposes of our engagement. Using the NHANES data, we analyzed the percentage of children with untreated tooth decay, the percentage of children who had experienced tooth decay, the percentage of children with tooth decay in three or more teeth, and the percentage of children with dental sealants (see tables 1 through 5). We also analyzed the dental examiner’s recommendation for care as the basis for determining whether a child had an urgent need for dental care. 
For each of these measures, we estimated the percentage, with 95 percent confidence intervals (that is, there is a 95 percent probability that the actual number falls within the lower and upper limits of our estimates), of children in each of the three insurance categories using raw data and appropriate sampling weights. We also used standard errors to calculate whether changes from the 1988 through 1994 time period to the 1999 through 2004 time period were statistically significant at the 95 percent level. To estimate the number of children in the Medicaid category with a given condition, we multiplied the calculated percentage by an estimate of the 2005 average monthly enrollment of children ages 2 through 18 in Medicaid (20.1 million children). We estimated the 2005 average monthly enrollment of children ages 2 through 18 in Medicaid using CMS statistics, by age group, for children ages 1 through 18 (we reduced this number using Census data to account for children age 1). Our analysis of the NHANES data was conducted in accordance with generally accepted government auditing standards from December 2007 through September 2008. The Medical Expenditure Panel Survey (MEPS), administered by HHS’s Agency for Healthcare Research and Quality (AHRQ), collects data on the use of specific health services—frequency, cost, and payment. We analyzed results from the household component of the survey, which surveys families and individuals and their medical providers. Our analysis was based on data from surveys conducted in 1996 through 1997 and 2004 through 2005. We used the 1996 through 1997 data because they were the earliest available, and we used the 2004 through 2005 data because they were the most current available. 
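The NHANES projection just described, multiplying a weighted survey percentage by estimated Medicaid enrollment and carrying the 95 percent confidence bounds along, can be sketched in a few lines of Python. This is an illustrative sketch only: the standard error below is an assumed value for demonstration, not one of GAO's published estimates.

```python
Z95 = 1.96  # two-sided critical value for a 95 percent confidence interval

def project_to_enrollment(pct, se, enrollment):
    """Project a weighted survey percentage (with its standard error)
    onto an enrolled population: returns (estimate, ci_low, ci_high)."""
    low = max(pct - Z95 * se, 0.0)   # clamp the interval to valid
    high = min(pct + Z95 * se, 1.0)  # proportions, 0 to 1
    return (pct * enrollment, low * enrollment, high * enrollment)

# Estimated 2005 average monthly Medicaid enrollment, children ages 2-18.
MEDICAID_CHILDREN_2005 = 20.1e6

# About one in three (33 percent) had untreated tooth decay; the standard
# error of 0.015 is an assumption for illustration.
est, lo, hi = project_to_enrollment(0.33, 0.015, MEDICAID_CHILDREN_2005)
print(f"{est / 1e6:.1f} million affected "
      f"(95% CI {lo / 1e6:.1f}-{hi / 1e6:.1f} million)")
# -> 6.6 million affected (95% CI 6.0-7.2 million)
```

In practice the point estimates and standard errors would come from the survey's design-based variance estimation, not from assumed values as here.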
The household component of MEPS collects data from a sample of families and individuals in selected communities across the United States, drawn from a nationally representative subsample of households that participated in the prior year’s National Health Interview Survey (a survey conducted by the National Center for Health Statistics at the Centers for Disease Control and Prevention). The household interviews feature several rounds of interviewing covering 2 full calendar years. MEPS is continuously fielded; each year a new sample of households throughout the country is introduced into the study. MEPS collects information for each person in the household based on information provided by one adult member of the household. This information includes demographic characteristics, health conditions, health status, use of medical services, provider charges, access to care, satisfaction with care, health insurance coverage, income, and employment. We analyzed responses to questions on the use of dental care as well as responses to questions on the difficulty accessing needed dental care. As with the National Health and Nutrition Examination Survey (NHANES) data, we analyzed results from children aged 2 through 18 and divided children into three categories on the basis of their health insurance status. Similar to NHANES, the Medicaid category included children enrolled in the State Children’s Health Insurance Program (SCHIP) for the later time period (2004 through 2005 for MEPS). The privately insured category included children with private health insurance, some of whom had dental coverage and others who did not, while the uninsured category comprised children who had neither health insurance nor dental insurance. To determine the reliability of the MEPS data, we spoke with knowledgeable agency officials and reviewed related documentation and compared our results to published data. 
We determined that the MEPS data were sufficiently reliable for the purposes of our engagement. We analyzed data according to four different questions asked by the MEPS survey (see tables 6 through 9). The questions asked (1) whether the child had seen or talked to any dental provider in a given time period; (2) how often the child got a dental checkup; (3) whether the child had trouble accessing needed dental care; and (4) if the respondent answered yes to the third question, what the reasons were for having trouble accessing needed dental care. Using sampling weights, we estimated the percentage of children in each category as well as a lower and upper limit of this percentage, calculated at the 95 percent confidence interval. We also used standard errors to calculate whether changes from the 1996 through 1997 time period to the 2004 through 2005 time period were statistically significant at the 95 percent level. To estimate the number of children ages 2 through 18 in Medicaid not receiving dental care in the previous year, we calculated the percentage that had not received dental care in the previous year (62.6 percent) and applied this percentage to an estimate of the 2005 average monthly enrollment of children ages 2 through 18 in Medicaid (20.1 million children). We estimated the 2005 average monthly enrollment of children ages 2 through 18 in Medicaid using CMS statistics, by age group, for children ages 1 through 18 (we reduced this number using Census data to account for children age 1). Our analysis of the MEPS data was conducted in accordance with generally accepted government auditing standards from December 2007 through September 2008. In addition to the individual named above, Katherine M. Iritani, Assistant Director; Susannah Bloch; Alex Dworkowitz; Erin Henderson; Martha Kelly; Ba Lin; Elizabeth T. Morrison; Terry Saiki; Hemi Tewarson; and Suzanne Worth made key contributions to this report. 
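The significance testing described in this appendix, using standard errors to decide whether the change between two survey periods is statistically significant at the 95 percent level, can be sketched as a two-sided z-test on the difference of two independent estimates. The standard errors below are assumed for illustration; they are not the surveys' published values.

```python
import math

def change_is_significant(p1, se1, p2, se2, z_crit=1.96):
    """Two-sided test at the 95 percent level for the difference
    between two independent survey percentages, given their
    standard errors."""
    z = (p2 - p1) / math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(z) > z_crit

# Dental visit rate for children ages 2-18 in Medicaid: 31 percent in
# 1996-1997 vs. 37 percent in 2004-2005 (standard errors assumed).
print(change_is_significant(0.31, 0.015, 0.37, 0.015))  # small SEs: significant
print(change_is_significant(0.31, 0.05, 0.37, 0.05))    # large SEs: not significant
```

As the two calls illustrate, the same 6-percentage-point change can be significant or not depending on the precision of the underlying estimates, which is why the report notes significance separately for each comparison.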
Medicaid: Concerns Remain about Sufficiency of Data for Oversight of Children’s Dental Services. GAO-07-826T. Washington, D.C.: May 2, 2007.
Medicaid Managed Care: Access and Quality Requirements Specific to Low-Income and Other Special Needs Enrollees. GAO-05-44R. Washington, D.C.: December 8, 2004.
Medicaid and SCHIP: States Use Varying Approaches to Monitor Children’s Access to Care. GAO-03-222. Washington, D.C.: January 14, 2003.
Medicaid: Stronger Efforts Needed to Ensure Children’s Access to Health Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001.
Oral Health: Factors Contributing to Low Use of Dental Services by Low-Income Populations. GAO/HEHS-00-149. Washington, D.C.: September 11, 2000.
Oral Health: Dental Disease Is a Chronic Problem Among Low-Income Populations. GAO/HEHS-00-72. Washington, D.C.: April 12, 2000.
Medicaid Managed Care: Challenge of Holding Plans Accountable Requires Greater State Effort. GAO/HEHS-97-86. Washington, D.C.: May 16, 1997.
In recent years, concerns have been raised about the adequacy of dental care for low-income children. Attention to this subject became more acute due to the widely publicized case of Deamonte Driver, a 12-year-old boy who died as a result of an untreated infected tooth that led to a fatal brain infection. Deamonte had health coverage through Medicaid, a joint federal and state program that provides health care coverage, including dental care, for millions of low-income children. Deamonte had extensive dental disease and his family was unable to find a dentist to treat him. GAO was asked to examine the extent to which children in Medicaid experience dental disease, the extent to which they receive dental care, and how these conditions have changed over time. To examine these indicators of oral health, GAO analyzed data for children ages 2 through 18, by insurance status, from two nationally representative surveys conducted by the Department of Health and Human Services (HHS): the National Health and Nutrition Examination Survey (NHANES) and the Medical Expenditure Panel Survey (MEPS). GAO also interviewed officials from the Centers for Disease Control and Prevention, and dental associations and researchers. In commenting on a draft of the report, HHS acknowledged the challenge of providing dental services to children in Medicaid, and cited a number of studies and actions taken to address the issue. Dental disease remains a significant problem for children aged 2 through 18 in Medicaid. Nationally representative data from the 1999 through 2004 NHANES surveys--which collected information about oral health through direct examinations--indicate that about one in three children in Medicaid had untreated tooth decay, and one in nine had untreated decay in three or more teeth. Projected to 2005 enrollment levels, GAO estimates that 6.5 million children aged 2 through 18 in Medicaid had untreated tooth decay. 
Children in Medicaid remain at higher risk of dental disease compared to children with private health insurance; children in Medicaid were almost twice as likely to have untreated tooth decay. Receipt of dental care also remains a concern for children aged 2 through 18 in Medicaid. Nationally representative data from the 2004 through 2005 MEPS survey--which asks participants about the receipt of dental care for household members--indicate that only one in three children in Medicaid ages 2 through 18 had received dental care in the year prior to the survey. Similarly, about one in eight children reportedly never sees a dentist. More than half of children with private health insurance, by contrast, had received dental care in the prior year. Children in Medicaid also fared poorly when compared to national benchmarks, as the percentage of children in Medicaid who received any dental care--37 percent--was far below the Healthy People 2010 target of having 66 percent of low-income children under age 19 receive a preventive dental service. Survey data on Medicaid children's receipt of dental care showed some improvement; for example, use of sealants went up significantly between the 1988 through 1994 and 1999 through 2004 time periods. Rates of dental disease, however, did not decrease, although the data suggest the trends vary somewhat among different age groups. Younger children in Medicaid--those aged 2 through 5--had statistically significant higher rates of dental disease in the more recent time period as compared to earlier surveys. By contrast, data for Medicaid adolescents aged 16 through 18 show declining rates of tooth decay, although the change was not statistically significant.
Title II of the Export Enhancement Act of 1992 authorized the creation of an interagency body called the Trade Promotion Coordinating Committee (TPCC) to carry out various duties, including coordinating the export promotion and export financing activities of the U.S. government, ensuring better delivery of services to U.S. businesses, and preventing unnecessary duplication among federal export promotion and export financing programs. The Export Enhancement Act requires this committee to issue an annual report to Congress containing a government-wide strategic plan for federal trade promotion efforts and describing the plan’s implementation. As a result, the TPCC’s National Export Strategy, which articulates U.S. plans to increase exports, is generally issued annually. The President also created the Export Promotion Cabinet and directed it to develop and coordinate the implementation of the NEI, working with the existing TPCC. The Export Promotion Cabinet delivered a September 2010 report to the President with recommendations to implement the goals of the NEI. The report identified several recommendations that support the NEI’s priority of increasing exports by small businesses, including a recommendation to coordinate, expand, and leverage federal outreach resources to identify potential exporters. Some of the approximately 20 TPCC member agencies directly assist small businesses to export overseas, including the SBA, Commerce, Ex-Im, USDA, USTDA, and State. These six agencies are overseen by various congressional committees, including appropriation and authorizing committees (see table 1). In 1993, the TPCC recommended that three agencies—the SBA, Commerce, and Ex-Im—colocate their staff at a domestic network of selected U.S. Export Assistance Centers (USEACs). These “one-stop shops” were to provide coordinated export training, trade leads, export finance, and counseling to U.S. businesses interested in becoming exporters.
According to TPCC data, SBA’s funding for export promotion activities has increased substantially in recent years, although SBA’s funding remains a relatively small share of overall federal export promotion funding. For example, TPCC data showed that SBA received about $4 million for export promotion activities in fiscal year 2006 and $5.2 million in fiscal year 2010 and requested $6 million and $6.4 million, respectively, for fiscal years 2011 and 2012. While the fiscal year 2012 request represents an increase of 60 percent from its fiscal year 2006 funding levels, SBA’s funding represents less than 1 percent of the total export promotion funding for the six key agencies that support small business exporting. According to the TPCC, the six key U.S. agencies requested over $1 billion for trade promotion activities in fiscal year 2012, and SBA requested $6.4 million, as shown in table 2. TPCC member agencies may define trade promotion differently. For example, State’s budget includes funding for all State business and economic activities because, according to State, those activities could contribute to enhancing U.S. trade. Conversely, the budget amount for SBA only includes funding for OIT even though other SBA entities, such as the SBDCs, may devote substantial amounts of time to export promotion. Therefore, the reported budget amounts may not always reflect each agency’s total level of activity relating to export promotion or each agency’s actual contribution toward increasing U.S. exports. Furthermore, the export promotion amounts in table 2 do not differentiate activities directed toward small businesses from those directed to larger businesses. All of the agencies other than SBA work with businesses of all sizes, which creates difficulties when making comparisons with SBA about export promotion assistance specifically focused on small businesses. Within SBA, OIT has primary responsibility for export promotion. 
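The funding comparisons above follow directly from the TPCC figures. A minimal sketch of the arithmetic (the $1 billion total is treated as a round lower bound, since the report states the six agencies requested "over $1 billion"):

```python
# Arithmetic behind the SBA funding comparisons reported above.
sba_fy2006 = 4.0            # SBA export promotion funding, fiscal year 2006 ($ millions)
sba_fy2012_request = 6.4    # SBA request, fiscal year 2012 ($ millions)
six_agency_fy2012 = 1000.0  # six key agencies' fiscal year 2012 request: over $1 billion (lower bound)

increase = (sba_fy2012_request - sba_fy2006) / sba_fy2006
share = sba_fy2012_request / six_agency_fy2012

print(f"Increase over fiscal year 2006: {increase:.0%}")   # -> 60%
print(f"Share of six-agency total: {share:.2%}")           # at most 0.64%, i.e., under 1 percent
```

Because the six-agency total is a lower bound, SBA's actual share is at most the computed 0.64 percent, consistent with the "less than 1 percent" figure cited above.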
OIT’s field staff of Export Finance Specialists are colocated with Commerce in 19 USEACs throughout the United States, with one OIT staff member per USEAC. OIT field staff engage in a variety of activities but are primarily responsible for providing technical assistance, outreach, and training on SBA’s export finance programs. Separate from OIT, SBDCs are nonfederal partner entities partially funded by SBA that provide a wide range of business services, such as assisting small businesses with financial and marketing advice and tools, primarily through one-on-one counseling services. The over 900 SBDCs are organized into networks under 63 lead centers that largely correspond with state boundaries, except in California and Texas, which have multiple lead centers. In addition to providing general business services, SBDCs may also help businesses interested in exporting, particularly those that are new to exporting and need assistance with preparing their business to export. For example, SBDCs may assist a company in developing an international business plan. While most SBDCs provide export assistance as one of many services, SBA has designated some SBDCs as International Trade Centers that focus primarily on exporting. One person in each of SBA’s 68 District Offices is designated as a District International Trade Officer and provides basic export assistance as a collateral duty, but these officers spend most of their time on nonexport-related activities. The District International Trade Officers are managed by a separate SBA office, SBA’s Office of Field Operations, and they are not usually colocated at USEACs. SBA intended that District International Trade Officers only spend about 15 percent of their time on export promotion responsibilities. These responsibilities include organizing and attending outreach events and fielding export-related questions from small businesses directed to the District Office.
SBA’s increased activities in promoting small business exports take place within a complex, multiagency environment. The responsibility to provide export assistance to small businesses is shared by six key federal agencies, which engage in four major activities to promote small business exports. The NEI identifies these primary activities for U.S. agencies involved in small business promotion, which we classify under four general terms:
Outreach: Identify small businesses that can begin or expand exporting
Counseling and Training: Prepare small businesses to export
Trade Leads: Connect small businesses to export opportunities
Financing: Support small businesses once they have export sales
These activities are dispersed across the six key agencies, as shown in table 3 and explained in further detail below. All six of the key federal agencies and their partner entities that support small business exporting participate in outreach and education to help prepare small businesses to export successfully. Additionally, each agency and partner entity engages in at least one other primary activity to assist small businesses with exporting. In addition to conducting outreach, SBA provides counseling and training, primarily through SBDCs, and assists small businesses with export financing through OIT. All six of the federal agencies that support small business exports conduct outreach, which can include any activity in which agencies seek to inform small businesses or other partners, such as lenders, about the export promotion services offered by the federal government. For this reason, outreach can be difficult to separate from other activities that are part of the agencies’ export promotion effort. For example, a Commerce trade specialist could attend a trade show to meet with and counsel a number of existing clients, but there she would also have the opportunity to meet with small businesses to identify potential clients interested in exporting.
As another example, seminars and training sessions, which may be useful for existing exporters or other clients seeking to improve their operations, can also serve an outreach function because they often help introduce small businesses to exporting and help them identify whether they are ready to begin exporting. Another important outreach tool for export assistance agencies is the Export.gov portal. Designed as a “one-stop shop” for companies interested in exporting, Export.gov provides content from multiple TPCC agencies in a single website. In September 2010, Commerce, working with other agencies, revamped the portal to include two new service channels, named “Begin Exporting” and “Expand Your Exports,” that are designed to provide tailored information based on whether or not a company currently exports. Companies can use Export.gov to find webinars or in-person training or to find local resources that can help with export counseling or financing. Commerce intends to roll out a new version of Export.gov that allows for more tailored targeting of information in January 2013. SBA (through SBDCs), Commerce, and USDA all provide one-on-one counseling services to small businesses. Counseling is specific to the needs of each business and can cover a variety of topics relating to international trade and exporting, such as helping a business identify a target export market or discussing logistics for shipping exported goods. USDA’s services, many of which are delivered through nonfederal partner entities called State Regional Trade Groups, are targeted specifically at businesses that export agricultural products. SBDCs offer counseling to businesses of all types. As we have previously reported, export promotion counseling is often labor intensive and time consuming because agency staff must spend a significant amount of time working with a company before it is able to successfully export, particularly if the company has never exported before.
According to Commerce estimates, it can take 2 years or more from the time a company new to exporting begins to receive assistance until that business can make a successful export sale. Even companies with experience exporting may require over a year of preparation before being able to expand into a new market (see GAO, National Export Initiative: U.S. and Foreign Commercial Service Should Improve Performance and Resource Allocation Management, GAO-11-909 (Washington, D.C.: Sept. 29, 2011)). Training offerings also vary in length and depth to suit different client needs. For example, Commerce officials noted that Portland’s SBDC offers intensive export training through a 10-session course that takes place over 10 months. On the other hand, Commerce and the area’s District Export Council offer an Export University course that is specifically designed as a 1-day course. While SBA typically does not provide much assistance with trade leads—connecting small businesses with overseas buyers—a few of the SBDC counselors we met with said they sometimes try to help clients identify specific opportunities with overseas partners. Additionally, about 3 years ago, OIT started a matchmaker program in which OIT hosts or co-sponsors events. The matchmaker events are designed to help small businesses interested in exporting meet with export management companies, which are intermediaries that represent a company’s product overseas and reduce some of the risks associated with exporting by managing the logistics of the process. Unlike programs sponsored by other U.S. agencies, SBA does not directly connect companies with potential buyers through this matchmaking program. However, OIT’s matchmaking events could potentially result in more trade leads for small businesses because export management companies may be able to help identify foreign buyers. Other agencies connect clients with trade leads in a variety of ways.
For example, USDA’s Market Access Program provides funding that helps businesses attend trade shows, where they have opportunities to meet directly with potential buyers. Commerce leverages its international network of Commercial Officers to help U.S. businesses set up appointments with potential buyers abroad and to notify businesses when foreign buyers are looking to import U.S. goods. State Foreign Service Officers sometimes serve a similar role at overseas posts where Commerce does not have a presence. USTDA helps businesses identify export opportunities by providing financial assistance through grant and contract opportunities for U.S. businesses to conduct feasibility studies, pilot projects, and technical assistance, as well as hosting reverse trade missions, in which USTDA brings foreign buyers to the U.S. to help connect them with U.S. companies, both large and small. SBA, Ex-Im, and USDA offer various forms of financing assistance to small businesses, as outlined in table 4. SBA and Ex-Im both offer loan guarantees, which help businesses secure financing from private lenders, for working capital loans. Working capital loans may be used by businesses to finance activities that will help complete export sales. For example, a business might use a working capital loan to buy raw materials that it will use in manufacturing goods for export. SBA also offers guarantees on facilities development loans, which can be used to acquire or upgrade facilities and equipment used to produce goods or services involved in international trade. USDA offers loan guarantees to facilitate international sales of U.S. agricultural products, and Ex-Im provides direct loans to international buyers of U.S. exports, both of which allow U.S. companies to secure export orders and may result in larger orders. Ex-Im also offers insurance on export sales. 
SBA and Ex-Im both have finance specialists in domestic field offices who provide one-on-one technical assistance on export financing to small businesses and lenders. For example, they may counsel small businesses about different options for financing export sales. Agencies that provide loan guarantees and export-credit insurance have a financial impact that can far exceed their budget. Total SBA and Ex-Im financing for small businesses has increased in recent years, from just over $5 billion in fiscal year 2009 to nearly $7 billion in fiscal year 2011. In fiscal year 2011, Ex-Im approved 3,247 small business transactions, while SBA approved 1,547 small business loan guarantees. The majority of Ex-Im transactions, or over 2,600 transactions worth approximately $3.3 billion, were for export-credit insurance, while all SBA financing is for loan guarantees. SBA, Commerce, and Ex-Im collaborate on some export promotion activities in headquarters and at field locations, but some services overlap, which can be confusing for small businesses and may not be an optimal use of resources. Some SBDCs and Commerce staff offer similar counseling services, and SBA and Ex-Im offer similar financial products to small businesses. SBA has collaborated with other agencies to develop a joint strategy for increasing small business exports and to include collaborative efforts in performance evaluations. However, SBA and other agencies have not clearly defined agencies’ roles and responsibilities for export promotion, nor have they fully leveraged resources such as by regularly sharing client information, where possible. SBA export assistance efforts overlap with those of other agencies, primarily with the export counseling offered by Commerce and export financing products offered by Ex-Im; this overlap can create confusion for small businesses seeking export assistance. 
Overlap occurs when federal agencies have similar goals, engage in similar activities to achieve them, and target similar beneficiaries. When agencies provide overlapping services to the same clients, there are potential resource inefficiencies, and agencies could potentially focus their resources and strengths to better target clients and address identified needs. SBA’s SBDCs and Commerce both provide one-on-one export counseling to small businesses that are new to exporting or currently exporting. Export counseling is tailored to the needs of each company, so the issues covered vary by client, but SBDCs and Commerce may cover similar issues. For example, both offer strategic advice to help companies identify target markets for exporting, assist companies in understanding and ensuring they are compliant with exporting regulations, and develop seminars to teach small businesses about the fundamentals of exporting. Not all services provided by SBDCs and Commerce overlap. For example, SBDCs help companies develop business plans or conduct financial analysis, services that Commerce does not provide. Commerce also leverages its overseas offices to provide more in-depth fee-based services such as its Gold Key service, which incorporates market research, in-person counseling, and personalized appointments with potential partners overseas. SBA and Ex-Im offer similar export financing products to small businesses, which are delivered through some of the same lending institutions. SBA and Ex-Im export working capital loan programs have many similar features, as shown in table 5. SBA and Ex-Im officials noted, however, that each program has limitations for eligibility, so companies may be able to use only one agency’s product. For example, Ex-Im generally requires that more than 50 percent of the content of an exportable good guaranteed through its Working Capital Guarantee Program originate in the United States. 
SBA’s Export Working Capital Program has no similar content requirement, so companies using foreign materials may prefer SBA loan guarantees. On the other hand, SBA can only guarantee a maximum loan of $5 million, so companies seeking larger loan guarantees may prefer Ex-Im. Additionally, since both products are loan guarantees, customers must still receive the actual loan through a private lender. SBA and Ex-Im officials in the field noted that the agencies’ express loan programs also share some similarities. In 2012, Ex-Im introduced its Global Credit Express program as a pilot, which is designed to provide a relatively fast infusion of capital to small business exporters and is similar to SBA’s Export Express program, as outlined in table 6. Ex-Im’s Global Credit Express program is still smaller in terms of volume—according to an Ex-Im official, it had resulted in fewer than 10 loans as of November 2012—and some details of the program are evolving, such as the options for lending institutions to refer clients to Ex-Im. However, both products are available for a variety of business activities that will support a company’s export development. In addition to Ex-Im’s restrictions regarding U.S. content of exported goods, one major difference between the two products is that SBA’s Export Express is a guarantee for a loan provided by a private lender, while Ex-Im’s Global Credit Express is a direct loan from Ex-Im to a small business. Ex-Im officials also noted that Ex-Im charges significantly higher fees for Global Credit Express than SBA does for Export Express to ensure the programs do not compete. Additionally, SBA officials noted that there are substantial differences in the duration of each loan, with the Export Express program offering either a revolving line of credit with a maturity of up to 7 years or a term loan with a maturity of up to 25 years, while Global Credit Express has a maturity of up to 1 year. 
Overlapping services can be confusing for small businesses and may result in an inefficient use of government resources. Both agency officials and private sector representatives said overlapping services can make navigating the federal export assistance system difficult. According to SBA, SBDC, Commerce, and Ex-Im officials, small businesses typically do not know which services each agency provides or where to go for assistance. Private sector representatives agreed it is challenging for small businesses seeking export assistance to determine which federal entity would best serve their needs. They noted, for example, that export financing assistance is very important for small businesses to be competitive in international markets, but it can be difficult to understand the differences between federal loan programs for financing exports. Additionally, as we have noted in the past, overlapping federal efforts can result in an inefficient use of government resources. By addressing such inefficiencies, agencies could more effectively target government resources toward accomplishing the NEI goal. In prior work, we identified practices that can help enhance and sustain collaboration among federal agencies and thereby maximize performance and results, and we have recommended that agencies follow them. These collaborative practices include establishing joint strategies, reinforcing individual accountability for collaborative efforts, determining roles and responsibilities, and leveraging resources, as described in table 7. SBA and other agencies have developed a joint annual strategy at the headquarters level to work collaboratively toward the NEI goal of doubling U.S. exports by the end of 2014.
The 2011 National Export Strategy (Strategy), drafted by SBA, Commerce, Ex-Im, and other TPCC agencies, was the first time that the agencies developed common metrics for measuring the federal government’s export-promotion and trade-access impacts as a whole, rather than highlighting individual agencies’ successes. Joint performance measures include the number of small business exporters assisted by U.S. government finance programs, the value of exports supported by counseling, and the value of exports supported by financing assistance. The Strategy also discusses progress with regard to NEI priorities, including exports by small businesses and other areas such as trade access. While the Strategy does not identify areas of overlap in export promotion services across agencies, it distinguishes the different types of expertise and assistance needed by small businesses that have never exported (new-to-export) compared with those that have exporting experience and are expanding into new markets (new-to-market). The Strategy also notes efforts by SBA to track data on new-to-export companies and a Commerce initiative to target new-to-market companies. SBA also has agency-wide and OIT-specific plans that address agency goals. Both plans are generally consistent with the NEI goal. While the Strategy was created at the headquarters level, much of its implementation takes place in the field. The NEI created an overarching goal for the different trade agencies and guides the Strategy, but agency officials at the headquarters and field levels differed in their views about the NEI’s impact on collaboration. SBA and other agency officials at the headquarters level noted improvements in collaboration as a result of the NEI, such as interagency communication and coordination of events, which are discussed in the Strategy.
However, SBA staff and other agency staff we interviewed in some field locations noted that while the NEI has increased public awareness of federal export assistance activities, it has had limited effect on the extent to which the agencies collaborate. In its report that reviewed coordination among USEACs and federal and nonfederal partners with respect to NEI priorities, the Commerce Office of the Inspector General similarly noted that Commerce actions to implement the NEI have had a limited effect on the extent and quality of collaboration with such partners. SBA and other agencies have taken steps to implement the best practice of including collaborative efforts in performance evaluations of export assistance staff; however, the export promotion assistance entities vary in how they include collaboration efforts in their performance standards and measures. For example, OIT performance standards include staff’s participation in activities and events in conjunction with other export agencies. OIT officials also noted that beginning in fiscal year 2013, they will track the number of export credit insurance referrals made by OIT staff to Ex-Im staff. SBDCs’ performance measures focus on the clients and jobs for which SBDC staff provide assistance, but officials told us that SBA is beginning to track SBDCs’ collaborative efforts with other agencies. While Commerce staff’s performance evaluations note services provided by partner agencies, Commerce field staff told us that generally the key incentive for Commerce staff is to conduct services that help facilitate an export sale. Ex-Im’s performance metrics encourage collaboration because they allow for staff to count financial referrals to other agencies toward their own performance goals. 
An Ex-Im official noted that if an Ex-Im employee determines that an SBA financial product is more appropriate for a client and the referral results in a completed transaction, the Ex-Im staff may choose to count the sale of the SBA product toward individual performance goals. The NEI states that agencies should use such incentives to encourage employees to direct companies to the best option for financing even if the company is sent to another agency. SBA and other agencies have not developed guidance on roles and responsibilities to address overlapping counseling and financing functions. To implement this best practice in collaboration more effectively, SBA and other agencies providing similar export assistance to small businesses could clarify which export agency will serve certain export functions or types of clients. Officials of SBA and the other agencies have not formalized a process for determining how to direct clients with differing needs and levels of exporting experience to the most appropriate agency. First, SBA and Commerce have not clearly defined each agency’s role in counseling small business exporters. SBDC and Commerce officials we spoke with indicated that they try to limit overlap by focusing on the areas where each entity has relatively more experience. Headquarters SBA and Commerce staff stated that SBDC counselors are expected to work with new-to-export companies while Commerce trade specialists should focus on new-to-market companies. According to Commerce officials, Commerce prefers to work with new-to-market businesses or businesses looking to export more in a market where they already export because those businesses can quickly take advantage of Commerce’s extensive services and overseas resources. Commerce field staff are supposed to refer new-to-export businesses to SBDCs, where these businesses may benefit from an array of general business development services. 
According to Commerce officials, when Commerce staff focus on new-to-market businesses and send new-to-export businesses to SBDCs, the result is a more efficient use of Commerce resources than if Commerce staff were to focus on counseling new-to-export businesses. While some SBA and Commerce officials explained the division of responsibilities for assisting new-to-export versus new-to-market businesses, we found that the division of responsibilities between the SBDCs and Commerce is not clearly defined in practice. Moreover, the agencies have not developed guidance that outlines a common understanding about where clients should be directed based on their export readiness. SBDC and Commerce field staff indicated that interagency roles and responsibilities for counseling new-to-export and new-to-market companies are unclear. SBDC and Commerce staff at all six field locations we visited said that they counsel both new-to-export and new-to-market businesses, and that both agencies at their location may provide the same type of counseling. For example, sometimes both SBDCs and Commerce specialists at some locations provide market research for clients. SBDC and Commerce staff in some locations also noted that they may counsel the same client, but they do not regularly discuss with one another what services they provide to clients, nor do they regularly share client information. Some Commerce field staff also said that they had little interaction with their SBDC counterparts. Additionally, some Commerce field staff said that they continue to work with new-to-export clients that could help them meet their individual performance targets for export successes. SBA and Commerce officials informed us that they had not developed formal agency or interagency guidance that directed SBDC counselors and Commerce staff to primarily counsel one type of client. Second, SBA and Ex-Im have similar roles and responsibilities for similar financial products for small businesses.
According to SBA and Ex-Im, the overlap in their financial products results from both agencies’ efforts to respond to lender preferences. Ex-Im officials said that many lenders prefer to work with only one agency and very few lenders use both agencies’ products, so clients may only be able to access one agency’s products through their regular bank. SBA designates Preferred Lenders for its Export Working Capital loan guarantee with authority to process these loans without prior SBA review, and the SBJA has a provision that makes lenders participating in Ex-Im’s similar Delegated Authority Program eligible to participate in SBA’s Preferred Lenders program. However, both SBA and Ex-Im field staff noted that lenders primarily prefer to work with only SBA or Ex-Im, due to the time and expertise needed to learn each agency’s complex requirements and to process each agency’s products. If a potential export financing client only meets the eligibility requirements for one agency’s product and the client’s lender does not work with that agency, the client would need to find a new lender to receive the agency’s loan guarantee. SBA and Ex-Im may be able to explore options to better align export financing products and to assist lenders in more easily adapting to the rules for both SBA and Ex-Im products. Third, the roles and responsibilities of SBA’s 68 District Offices and their relationship to other export promotion entities have also been unclear and are in transition. The Strategy envisioned that District International Trade Officers would lead Export Outreach Teams that would coordinate activities with Commerce at the local level. SBA officials noted that this had not happened because these district officers are still learning their responsibilities in assisting exporters, and their role is still evolving. 
Furthermore, some Commerce staff expressed uncertainty about the roles and responsibilities of the designated District International Trade Officers and had little or no interaction with them. We found the level of export promotion activities carried out by SBA district offices varied widely. For example, one District International Trade Officer with substantial personal experience with export-related issues was heavily involved in a wide variety of activities, including organizing events and providing one-on-one counseling to businesses interested in exporting. In contrast, other District International Trade Officers said that they had little or no experience with exporting and that their export-related activities are limited to serving as a point of contact within the district office and referring businesses to the appropriate federal entity. While SBA and other agencies take some steps to leverage interagency resources both in headquarters and in the field, they do not regularly share client information, where possible, which may result in less effective client services. One best practice in collaboration emphasizes that agencies should leverage resources—such as human, information technology, physical, and financial resources—to support the common outcome established by the agencies’ joint strategy. At the headquarters level, we found several instances showing that SBA and other agencies have taken steps to effectively leverage one another’s export-promotion resources, as the following examples illustrate: OIT chairs the TPCC’s Small Business Working Group, which coordinates interagency cooperation on small business export promotion as part of the NEI. Agencies discuss coordination of small business exporting issues through this working group, including the issue of streamlining initial client intake across agencies to help agencies provide more targeted assistance to companies. 
Agencies have also produced interagency outreach materials on federal export financing services through this working group. For example, the working group distributed a brochure on export financing and foreign investment finance programs available through SBA, Ex-Im, and USTDA. SBA and Commerce staff in headquarters coordinate client intake through the Export.gov website and are working to enhance the site to help companies obtain more targeted assistance from agencies. Currently, registrants of Export.gov self-identify their export readiness; a Commerce staff member downloads registrants’ information from Export.gov and sends the information from new-to-export companies to OIT in SBA headquarters and directs new-to-market companies to Export.gov resources. Commerce and SBA officials told us they are working together to develop a new version of Export.gov, expected to be completed by January 2013, that is intended to improve upon the current design to better direct registrants to the appropriate export promotion entity and nearest geographical locations. These officials told us that the new version of Export.gov will ask the registrant a series of questions to determine the company’s extent of business development and export readiness, and direct the company to the agencies or entities that could best assist it. At the field level, SBA and other agencies leverage resources by conducting joint outreach and training events. At all six locations we visited, SBA and other agencies invite one another to various events, including trade shows, road shows, and training events that inform small businesses about available federal services. For example, in one location, staff told us that at trade show events, SBA, SBDCs, Commerce, and Ex-Im may conduct joint seminars, where Commerce staff could discuss marketing and sales, while SBA and Ex-Im staff discuss financing. In another location, Ex-Im staff said they may include partners such as SBDCs in road shows they conduct.
Although we found that SBA and other agencies leverage interagency resources to some extent both in headquarters and in the field, we also found that SBA and other agencies could better leverage resources by sharing client information more consistently. In 2010, the Export Promotion Cabinet recommended strengthening interagency information sharing and coordination to implement the NEI. The extent to which SBA and other agencies share exporters’ information on a regular basis varies. Commerce and Ex-Im have an informal agreement to share nonbusiness confidential client information on a quarterly basis. Commerce shares the name and contact information for clients that have purchased certain Commerce products, such as a Gold Key Service, and successfully exported. Ex-Im shares a list of new export credit insurance clients with Commerce but does not include client contact information. By contrast, SBDC counselors generally cannot share specific client information with other entities unless they receive permission from the client, and OIT does not regularly share its client list with SBDCs, Commerce, or Ex-Im, nor does it regularly receive client lists from other entities. OIT officials noted that currently, OIT field staff may share information about clients with other agencies informally, such as by engaging in joint client phone calls with other agencies’ staff at the USEACs. Agency officials noted that information sharing is limited by certain privacy restrictions. SBA and other agencies’ officials told us they are currently reviewing the types of information that they could share with each other. Some SBA, SBDC, Commerce, and Ex-Im staff in the locations we visited told us that obtaining access to agencies’ client information would be beneficial. For example, such access could help increase their own clientele base and potentially provide small businesses with assistance in their area of expertise, as well as track the status of clients in the export life cycle. 
Commerce staff in headquarters and the field also noted that access to OIT client lists could improve Commerce’s ability to report export successes, thereby helping Commerce track its impact in helping increase small business exports. The SBJA expanded SBA’s presence in providing export assistance to small businesses by requiring an increase in the export training of SBDC staff and an increase in the number of OIT field staff. SBA has made progress toward certifying SBDC staff by its own established target date. However, SBA did not meet the OIT field staff levels within the deadline set by the SBJA, citing hiring and funding challenges. SBA’s most recent plan to increase OIT field staff does not provide funding information for the new positions or updated time frames for filling them. According to SBA, the SBJA is the most significant piece of small business legislation in over a decade because it provides resources to help small businesses continue to drive economic recovery and create jobs. Among other changes, the SBJA increased the maximum size of SBA’s export loan and loan guarantee amounts and elevated the importance of OIT within SBA by making it an independent office. In supporting NEI goals, the law also increased SBA’s staff to provide additional export counseling resources to promote small business exporting. The SBJA included the following requirements for SBA with regard to export promotion staffing and training: Small Business Development Centers training/certification. At 63 lead SBDCs, the SBJA required that five staff or 10 percent of staff, whichever is less, obtain certification in providing export assistance. SBA provides that certification can be obtained through an exam or a specific professional certification program. Office of International Trade field staff levels.
The SBJA required that by December 27, 2010, the number of OIT export financial specialists at USEACs would be at least the same as the number of these staff in 2003, which was 22, according to SBA. The law required that there should be at least 3 of these OIT staff in each of the 10 SBA regions by September 27, 2012, which would require a minimum of 30 staff at USEACs nationwide. In addition, SBJA stipulated that SBA should place priority in certain locations and then strategically assign staff to the USEACs based upon the needs of exporters. The export and trade certification program for SBDC staff is intended to greatly expand the number of qualified small business counselors available to help small businesses to engage in international trade and to provide consistency in the quality of assistance across the SBDC networks. In interpreting the SBJA requirement for certifying staff at the 63 lead SBDCs, SBA determined that the certification standard would be based on the total number of staff in each of the 63 SBDC networks and not merely on the number of staff at the lead SBDCs. The certification program would encompass staff at any of the over 900 SBDCs that provide services to small businesses. According to SBA, this interpretation of the certification requirement helps ensure that all of the 63 SBDC networks have a minimum number of qualified counseling and training staff available to provide export and trade assistance to their small business clients. Although the SBJA did not specify a time frame to meet this requirement, SBA established its own agency policy to complete the certification requirement by December 31, 2013. The Certified Global Business Professional program is an internationally recognized independent certification for advanced proficiency in global trade assistance, available through training offered by a third party, such as a community college.
According to SBDC officials, some staff already had experience providing export assistance, while others had less experience and acknowledged the need for additional training. Field staff at the locations we visited informed us that they had either obtained or were in the process of obtaining certification. The SBJA directed OIT to increase field staff in two phases, and to distribute staff regionally. In an attempt to meet the SBJA requirement of increasing its OIT field staff levels from 18 to 22 at USEACs by the end of 2010, SBA advertised 4 temporary positions in 4 specific locations but filled only 2 positions by June 2011 using SBJA funding that expired on September 30, 2012. The 4 locations were based on SBA’s analysis of locations that lacked OIT staff since 2003 and had exporters with the greatest needs. With the expiration of the SBJA funding, SBA stopped advertising the 2 remaining unfilled positions. Although SBA needed to increase its field presence to 30 OIT staff by September 27, 2012, SBA had hired only the 2 additional OIT staff and had a total of 19 OIT staff in USEACs. Furthermore, despite the SBJA’s requirement for 3 OIT staff to be placed in each of SBA’s regions, there are currently only 2 out of 10 SBA regions with 3 or more OIT staff, while the other 8 regions do not have the required staff levels. See figure 1 for a map of the current number of SBA OIT staff by SBA region. According to SBA officials, SBA encountered hiring challenges that hindered it from filling the four temporary, 13-month term positions it advertised. The main difficulties were that the positions offered were not permanent positions and required specialized trade finance expertise, which contributed to a shortage of qualified candidates. SBA officials said that there were no qualified applicants for the OIT position in one location and few interested and qualified candidates in the other locations.
Applicants for the temporary OIT positions that SBA advertised were required to have 1 year of specialized experience to minimally qualify for the position. SBA also considered qualifications such as knowledge of SBA’s trade finance programs; knowledge of advanced concepts, principles, and practices of international trade; and the ability to underwrite export trade finance transactions. Citing difficulties with finding qualified staff for the four advertised OIT positions, and not having continued funding to hire more staff, SBA never advertised for the additional eight OIT positions it would have had to fill, in addition to the original four, to meet the SBJA field staffing requirement. SBA officials noted that they still have the goal to hire additional OIT staff, and said they intend to implement the requirement to hire up to 30 OIT staff after fiscal year 2012, if SBA has sufficient funds available for that purpose. With the expiration of SBJA funding at the end of September 2012, SBA officials told us that they no longer have the funding to hire additional OIT staff. SBA received $26.5 million in SBJA funds to implement SBJA requirements, according to SBA officials. Since OIT positions required by the SBJA were not filled, SBA used the funds to meet other SBJA requirements, such as hiring staff in offices other than OIT and, within OIT, hiring an Assistant Administrator and preparing export-related reports, both as required by the SBJA. SBA developed a plan to increase its OIT staff levels to 30 staff and proposed—in a report submitted to Congress in September 2012—to place the staff in specific USEACs. This proposed allocation of staff does not match the one specified in the SBJA requirement. While the SBJA required 3 OIT field staff in each of SBA’s 10 regions, SBA’s proposal for staff allocation was a minimum of 2 staff in some SBA regions and 5 staff in 1 region. 
SBA explained that its staffing allocation proposal reflected optimal distribution to support exporters’ needs based on available full-year trade data from U.S. Census and Commerce. The SBA OIT field staffing plan noted that the underlying logic of its proposed allocation was that the more exporters there were within a state, the more opportunities SBA staff would have to directly interact with and assist them. SBA’s plan lacks some critical updated information. The plan listed the proposed USEAC locations and the SBA regions where OIT staff would be located, as well as the number of SBA exporters and percentage of small business exporters in each of SBA’s 10 regions. It also noted that the SBJA funding for the OIT positions would expire on September 30, 2012, but did not discuss how SBA intended to fund the future new positions to meet the required staffing levels. Although the report was submitted to Congress only weeks before the SBJA deadline for OIT’s staffing requirement, the plan did not discuss revised time frames, providing no information on when SBA expects to reach the level of OIT field presence required under the SBJA. Moreover, the plan explained SBA’s difficulties in attracting qualified candidates, but the plan did not discuss how SBA would overcome the hiring challenges or discuss the potential for leveraging the resources of other export promotion entities that could provide export assistance similar to that provided by OIT staff. Some OIT staff told us that they are currently covering OIT work for a large geographic area. With the current OIT staff levels and locations, these staff have been meeting the needs of small businesses within their office’s jurisdiction but outside the metropolitan areas where they are based.
They said they can do this by traveling to attend outreach events and by relying on referrals from staff of other export promotion entities with offices in various locations outside the OIT-served metropolitan areas. These other entities include Commerce, SBDC, or financial institutions that serve as lenders of SBA export loans. In light of the array of services provided by existing export assistance entities, SBA may be able to leverage other entities’ resources in fine-tuning its plan for hiring additional OIT field staff to meet the SBJA requirements, provided that SBA and other agencies clarify their roles and responsibilities and begin to exchange information on a regular basis. SBA’s increased responsibilities in the realm of export promotion have thrust it into an already crowded field of federal agencies providing small businesses with assistance. The challenges SBA now faces in limiting the extent to which its export promotion efforts overlap with those of other agencies highlight the need for SBA and other key export promotion agencies to further their collaboration efforts. SBA and the other agencies have collaborated on export promotion activities to some extent by developing strategies, conducting joint outreach events, and sharing some client information among agencies, but many challenges to effective collaboration remain. In particular, some of SBA’s OIT and SBDC activities overlap with those of Ex-Im and Commerce, respectively. Without clear definition of each entity’s roles and responsibilities, the overlapping export financing products offered by SBA and Ex-Im and the labor-intensive export counseling sessions provided by SBDCs and Commerce may cause confusion for small businesses and could result in duplication of efforts and inefficient use of government resources. 
Additionally, SBA and other federal agencies do not regularly or comprehensively share client information, which is a significant impediment to achieving effective interagency collaboration in providing export promotion assistance to small businesses. While some entities’ staff told us that they are limited in their ability to share client information, additional information sharing, where possible, would help improve client services and help agencies better track their impact in promoting U.S. exports. SBA has had mixed results in meeting SBJA requirements expanding its export promotion presence. On the positive side, SBA is well on its way to ensuring that a sufficient number of SBDC staff are better trained and prepared to assist small business exporters. On the negative side, however, SBA has fallen short of meeting the law’s specific requirements for increasing and placing OIT field staff, citing funding and hiring challenges. The most recent SBA report to Congress reiterates the agency’s intention to hire all the additional OIT staff required by the SBJA. However, SBA’s plan does not clearly identify the funding sources or time frames for hiring the additional staff, nor does it explain how it will address the hiring challenges it experienced previously. SBA needs to fill in those gaps in its OIT staffing plan, which presents an opportunity for the agency to step back and strategically reassess its plan and related resource allocation decisions in light of actions that could address the collaboration and information-sharing challenges identified in this report. We are recommending that the Administrator of the Small Business Administration take the following three actions: 1. 
To help small businesses understand and get the most benefit from the various export assistance products and services provided by different federal entities, and to efficiently use government resources, consult with Commerce and Ex-Im and more clearly define roles and responsibilities of export promotion entities’ export counseling and financing staff agencywide and at the local levels.

2. To improve collaboration and leverage available resources, consult with Commerce and Ex-Im and identify ways to increase, where possible, sharing of client information deemed useful for SBA, Commerce, and Ex-Im.

3. To more effectively implement SBA’s expansion of OIT field staff as required by the SBJA, update SBA’s plan for additional OIT staff to include funding sources and time frames, as well as possible efficiencies from clearly defining roles and responsibilities and leveraging other entities’ export assistance resources.

We provided a draft of this report to SBA, Commerce, Ex-Im, USDA, USTDA, and State. In its written comments on the draft, which are reprinted in appendix II, SBA concurred with our recommendations and noted that SBA would work to implement the recommendations. We also received comments from Commerce stating that Commerce generally agreed with our findings and noting developments that had occurred since we provided them our draft—including the subsequent issuance of the 2012 National Export Strategy. More specifically, in response to our recommendation on the need to clarify agency roles and responsibilities, SBA and Commerce provided us with a copy of the December 2012 Interagency Communiqué developed by TPCC agencies. The communiqué, which we included in appendix II, was intended to clarify roles and responsibilities and provide guidance on referring U.S. businesses seeking export assistance to federal, state, and nonfederal resources according to each firm’s export readiness and business needs.
We mentioned the communiqué in our report and noted that it did not include referral protocols for clients requiring trade financing products, which the communiqué said would be issued by the end of January 2013. The communiqué also notes that agencies intend to develop local Export Outreach Teams, to increase awareness of local international trade expertise and enhance communication and collaboration at the local level. Among other things, the Export Outreach Teams would develop referral protocols and initiate ongoing discussions of shared clients. Thus, the communiqué’s plans, when fully implemented, would begin to address two findings in this report: the need to clarify roles and responsibilities among SBA, Commerce, and Ex-Im and the need to identify ways to increase sharing of client information deemed useful for SBA, Commerce, and Ex-Im. Regarding our recommendation to increase sharing of client information where possible, SBA acknowledged its continued work with other agencies to integrate knowledge management within current legislative restrictions on information sharing, and noted its optimism about the potential for information technology to facilitate greater information sharing. SBA also noted that the SBDC program’s authorizing legislation prevents SBDCs from sharing specific client information outside of their network without prior written consent from the client, except under limited purposes. Furthermore, SBA agreed that it intends to respond to staffing requirements of the SBJA while acknowledging resource constraints in its next annual report to Congress. We also received comments from Ex-Im, USDA, and USTDA, clarifying information about their activities. We incorporated agencies’ technical comments throughout our report, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to appropriate congressional committees; the Administrator of the SBA; the Secretaries of Commerce, Agriculture, and State; the Chairman and President of Ex-Im; and the Director of USTDA. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or yagerl@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of this report were to (1) describe Small Business Administration’s (SBA) role within federal export promotion efforts; (2) assess the extent to which SBA collaborates with other agencies in its export promotion activities; and (3) assess the extent to which SBA is meeting Small Business Jobs Act (SBJA) requirements to expand export promotion training and staffing. In our review, we defined “export promotion activities” as programs and services conducted by federal agencies that involve direct contact with U.S. exporters and have export promotion as their stated goal. Such activities include providing small businesses with export counseling, training, and financial assistance; they do not include activities such as advocacy, commercial diplomacy, and policy development and negotiations. Our review, therefore, covers six key agencies—SBA, Department of Commerce (Commerce), the U.S. Export-Import Bank (Ex-Im), the Department of Agriculture (USDA), the U.S. Trade and Development Agency (USTDA), and the Department of State (State)— with a particular emphasis on SBA’s activities. We reported data from the Trade Promotion Coordinating Committee (TPCC) Secretariat on agencies’ requested budget authority for export promotion activities for fiscal year 2012, which was the most recent data available. 
We also analyzed TPCC data on SBA’s budget authority for export promotion activities for fiscal years 2006 through 2012. We determined that the data were sufficiently reliable for our purpose of illustrating SBA’s relative share of the overall federal export promotion budget. To address all of our objectives, we interviewed agency officials representing the key export promotion entities in headquarters and in six selected field locations—Chicago, Dallas, Irvine (California), Miami, New York, and Portland (Oregon). We selected these locations because they had at least two export promotion entities’ staff colocated, had the presence of Small Business Development Centers (SBDC) with staff providing export assistance, and ranked among locations with the highest export potential according to Commerce data. At some of these selected locations, we also met with representatives of small businesses that utilized federal government export assistance. The results of our interviews with officials at these six locations are not generalizable to agency officials’ views at all U.S. locations. To describe SBA’s export promotion activities within federal export promotion efforts, we analyzed government-wide initiatives, strategies, and TPCC’s and agencies’ documents and data. Our description of SBA and other agency activities is intended to be illustrative of the types of activities agencies engage in to support small business exports and is not exhaustive of all activities undertaken by each of the agencies. To assess the extent to which SBA collaborates with other agencies in its export promotion activities, we focused our assessment on coordination between SBA entities with Commerce and Ex-Im, the key agencies that provide similar export promotion activities to similar clients and that work together at the headquarters and field level, including at five of the selected field locations that feature all three agencies colocated in the same city. 
We analyzed government-wide initiatives and strategies, as well as TPCC’s and agencies’ documents, including the National Export Initiative and National Export Strategy. Additionally, we compared export promotion activities of Commerce and Ex-Im that are similar to SBA’s activities, based on our analysis of information provided by agency officials. We also used the results of our discussions with agencies’ headquarters and field staff in six selected locations to assess the level of interagency collaboration. We assessed interagency coordination primarily against selected practices for enhancing and sustaining collaboration we previously identified. We selected four elements of collaboration best practices—establishing mutually reinforcing or joint strategies, reinforcing individual accountability for collaboration efforts, agreeing on roles and responsibilities, and leveraging resources to identify and address needs—because they allowed us to highlight the most critical and relevant elements of collaboration among export agencies at both the headquarters and field level. These elements also relate to issues identified in the 2011 National Export Strategy—for example, the need for SBA, Commerce, and Ex-Im each to target clients and focus their strengths to achieve the common NEI goal to double U.S. exports. To assess the extent to which SBA is meeting SBJA requirements to expand export promotion training and staffing, we analyzed the SBJA and identified the specific export promotion requirements applicable to increasing the level of training of SBDC staff and increasing OIT staff numbers. We did not examine the other requirements under the SBJA. We analyzed SBA and SBDC documents and interviewed agency officials in headquarters and field locations to determine the implementation status of the SBJA requirements.
We determined the extent to which SBDCs have met the certification requirement through an exam with the assistance of available training, but we did not assess the content or effectiveness of the certification, exam, or training. Furthermore, we reviewed SBA OIT’s September 2012 report to Congress, which, according to SBA, contained the most current implementation status and plans for the SBJA requirements. We conducted this performance audit from February 2012 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Adam Cowles, Assistant Director; Lina Khan; Victoria Lin; and Kara Marshall made key contributions to this report. The team benefited from the expert advice and assistance of David Dayton, David Dornisch, and Grace Lui.
In January 2010, the President announced the goal of doubling U.S. exports over 5 years. The President's plan in the National Export Initiative included prioritizing exports by small businesses and called for improved coordination among agencies involved in federal export promotion activities. Recently, Congress has also directed SBA to expand its export counseling and financing activities. This report (1) describes SBA's role within federal export promotion efforts, (2) assesses the extent to which SBA collaborates with other agencies in its export promotion activities, and (3) assesses the extent to which SBA is meeting requirements under the Small Business Jobs Act of 2010 to expand export promotion training and staffing. GAO analyzed agencies' documents and interviewed agency officials, including those in six selected field office locations serving areas with high export potential and where staff from at least two agencies were colocated. The Small Business Administration (SBA) and five other key agencies provide a variety of export promotion services to small businesses. In addition to outreach, which all six agencies conduct, SBA's primary activities include counseling and training, provided mainly through nonfederal partner entities called Small Business Development Centers, and export financing, provided through SBA's Office of International Trade (OIT). While SBA collaborates to some extent with other key agencies on its export promotion activities, additional collaboration could enhance agency efforts and reduce overlap. SBA, Department of Commerce (Commerce), and the U.S. Export-Import Bank (Ex-Im) coordinate some export promotion activities at headquarters and at field locations, but some services overlap. For example, Small Business Development Centers and Commerce both assist companies new to exporting as well as more experienced exporters, despite intentions to divide responsibilities for those types of firms. 
Additionally, OIT and Ex-Im offer similar financial products for small businesses, such as export working capital loan guarantees. Overlapping services may cause confusion for small businesses and result in inefficient use of government resources. SBA and other agencies developed a joint strategy to increase small business exports and, to varying degrees, the agencies have included collaborative efforts in the performance evaluations of staff with export promotion responsibilities. However, SBA and other agencies have not clearly defined roles and responsibilities, and efforts to leverage resources have not included regularly sharing client information where possible. Such sharing could help agencies improve client services and clarify each agency's impact in promoting U.S. exports. Enhancing collaboration could help agencies ensure they are working toward the goal of increasing exports by small businesses in a way that maximizes limited resources and mitigates overlap. SBA has made some progress in increasing export training of Small Business Development Center counselors but has experienced challenges in meeting increased OIT staffing requirements under the Small Business Jobs Act of 2010. The law required a certain number or percentage of staff working for the 63 Small Business Development Center networks to obtain export counseling certification. As of the end of September 2012, 73 percent of the networks had met this requirement, for which SBA set a 2013 deadline. To meet another requirement under the law, SBA needed to increase its OIT field staff, who primarily provide export financing assistance, from 18 to 30 by the end of September 2012. However, SBA had advertised for only four temporary positions and filled two of them. SBA officials noted challenges in finding qualified candidates and a lack of continued funding for additional OIT field positions.
In a recent report to Congress, SBA stated its plans to hire the additional OIT staff but did not include funding plans or updated time frames to fill the positions. Furthermore, the plan did not discuss how SBA would overcome the hiring challenges or discuss the potential to leverage resources of other export promotion entities that also provide export assistance. GAO recommends that SBA (1) consult with Commerce and Ex-Im and clearly define export entities' roles and responsibilities; (2) consult with Commerce and Ex-Im and identify ways to increase, where possible, sharing of client information; and (3) update its plan for meeting mandated staffing requirements to include funding sources and time frames, as well as possible efficiencies from improved collaboration. SBA agreed with our recommendations and noted it is taking steps to address them. We also received technical comments from other key export promotion agencies, which we incorporated, as appropriate.
Since 1989, when the first drug court program was established, the number of drug court programs has increased substantially. In addition, DCPO’s oversight responsibilities and funding to support the planning, implementation, and enhancement of these programs have increased. As shown in figure 1, the number of operating drug court programs has more than tripled since our prior report from about 250 in 1997 to almost 800 in 2001 based on information available as of December 31, 2001. The number of operating programs that received DCPO funding, and thus were subject to its oversight, has also grown—from over 150 in fiscal year 1997 to over 560 through fiscal year 2001. As shown in figure 2, the number of drug court programs started by calendar year since our prior report has also increased. Although the number of drug court programs started in 2001 dropped, over 450 additional programs have been identified as being planned based on information available as of December 31, 2001. Based on information available as of December 31, 2001, drug court programs were operating in 48 states, the District of Columbia, and Puerto Rico. Only New Hampshire and Vermont had no operating drug court programs. Six states (California, Florida, Louisiana, Missouri, New York, and Ohio) accounted for over 40 percent of the programs. Appendix II provides information on the number of operating drug court programs in each state. Although there are basic elements common to many drug court programs, the programs vary in terms of approaches used, participant eligibility and program requirements, type of treatment provided, sanctions and rewards, and other practices. Drug court programs also target various populations (adults, juveniles, families, and Native American tribes). 
Appendix III provides details on the number of drug court programs by targeted population, and appendix IV provides details on the drug court programs by jurisdiction and the types of funding, if any, the programs have received from DCPO. Federal funding for drug court programs has also continued to increase. As shown in table 1, congressional appropriations for the implementation of DOJ’s drug court program have increased from about $12 million in fiscal year 1995 to $50 million in fiscal years 2001 and 2002. Since fiscal year 1995, Congress has appropriated about $267 million in Violent Crime Act related funding to DOJ for the federal drug court program. DCPO funding in direct support of drug court programs has increased from an average of about $9 million in fiscal years 1995 and 1996 to an average of about $31 million for fiscal years 1997 through 2001. Between fiscal years 1995 and 2001, DCPO awarded about $174.5 million in grants to fund the planning, implementation, and enhancement of drug court programs. About $21.5 million in technical assistance, training, and evaluation grants were awarded. About $19.6 million was obligated for management and administration purposes and to fund nongrant technical assistance, training, and evaluation efforts. Since the inception of the DCPO drug court program, a total of $3 million in prior-year recoveries has been realized. About $4.5 million through fiscal year 2001 had not been obligated. Congress appropriated an additional $50 million for fiscal year 2002. At the time of our review, DCPO was in the process of administering the fiscal year 2002 grant award program. Appendix V provides details on the number, amount, and types of grants DCPO awarded since the implementation of the federal drug court program. Since 1998, DCPO implementation and enhancement grantees have been required to collect, and starting in 1999, to submit to DCPO, among other things, performance and outcome data on program participants.
DCPO collects these data semiannually using a Drug Court Grantee Data Collection Survey. This survey was designed by DCPO to ensure that grantees were collecting critical information about their drug court programs and to assist in the national evaluation of drug court programs. In addition, DOJ intended to use the information to respond to inquiries regarding the effectiveness of drug court programs. However, due to various factors, DCPO has not sufficiently managed the collection and utilization of these data. As a result, DOJ cannot provide Congress, drug court program stakeholders, and others with reliable information on the performance and impact of federally funded drug court programs. Various factors contributed to insufficiencies in DOJ’s drug court program data collection effort. These factors included (1) inability of DOJ to readily identify the universe of DCPO-funded drug court programs, including those subject to DCPO’s data collection reporting requirements; (2) inability of DOJ to accurately determine the number of drug court programs that responded to DCPO’s semiannual data collection survey; (3) inefficiencies in the administration of DCPO’s semiannual data collection effort; (4) the elimination of post-program impact questions from the scope of DCPO’s data collection survey effort; and (5) the insufficient use of the Drug Court Clearinghouse. DOJ’s grant management information system, among other things, tracks the number and dollar amount of grants the agency has awarded to state and local jurisdictions and Native American tribes to plan, implement, and enhance drug court programs. This system, however, is unable to readily identify the actual number of drug court programs DCPO has funded. 
Specifically, the system does not contain a unique drug court program identifier, does not track grants awarded to a single grantee but used for more than one drug court program, and contains data entry errors that affect the reliability of data on the type of grants awarded. For example, at the time of our review, the system contained some incorrectly assigned grant numbers, did not always identify the type of grant awarded, and incorrectly identified several grantees as receiving a planning, implementation, and enhancement grant in fiscal year 2000. These factors made it difficult for DCPO to readily produce an accurate universe of the drug court programs that had received DCPO funding and were subject to DCPO’s data collection reporting requirement. Although DOJ has been able to provide information from which an estimate of the universe of DCPO-funded drug court programs can be derived, the accuracy of this information is questionable because DCPO has relied on the Drug Court Clearinghouse to determine the number of DCPO-funded drug court programs and their program implementation dates. One of the Drug Court Clearinghouse’s functions has been to identify DCPO-funded drug court programs. However, the Drug Court Clearinghouse has been tasked only since 1998 with following up with a segment of DCPO grantees to determine their implementation dates. Thus, the information provided to DCPO on the universe of DCPO-funded drug court programs is at best an estimate and not a precise count of DCPO drug court program grantees. Noting that its current grant information system was not intended to readily identify and track the number of DCPO-funded drug court programs, DCPO officials said that they plan to develop a new management information system that will enable DOJ to do so.
Without an accurate universe of DCPO-funded drug court programs, DCPO is unable to readily determine the actual number of programs or participants it has funded or, as discussed below, the drug court programs that should have responded to its semiannual data collection survey. According to DCPO officials, grantee response rates to DCPO’s semiannual survey have declined since DCPO began administering the survey in 1998. As shown in figure 3, the information in DCPO’s database indicated that grantee response rates declined from about 78 percent for the first survey reporting period (July to Dec. 1998) to about 32 percent for the July to December 2000 reporting period. However, results from our follow-up structured interviews with a representative sample of the identifiable universe of drug court programs that were DCPO grantees during the 2000 reporting periods revealed that DCPO did not have an accurate account of grantees’ compliance with its semiannual data collection survey. Based on our structured interviews, we estimate that the response rate to the DCPO data collection survey for the January to June 2000 reporting period was about 60 percent in contrast to the 39 percent response rate DCPO reported. Similarly, the response rate to the DCPO survey for the July to December 2000 reporting period was about 61 percent in contrast to the 32 percent response rate DCPO reported. The remaining programs did not respond or were uncertain as to whether they responded to DCPO’s data collection survey for each of the reporting periods in 2000. DOJ officials said that some of the surveys they did not receive may have been mailed to an incorrect office within DOJ. DCPO officials acknowledged that this type of error could be mitigated if DCPO routinely followed up with the drug court programs from which they did not receive responses. 
Furthermore, based on our follow-up structured interviews with a representative sample of DCPO-funded drug court programs that were listed as nonrespondents in DCPO’s database, we estimate that about 61 percent had actually responded to DCPO’s survey for the January to June 2000 reporting period. About two-thirds of these programs could produce evidence that they responded. For the July to December 2000 reporting period, we estimate that about 51 percent of the DCPO-funded drug court programs that were listed as nonrespondents in DCPO’s database had actually responded to the survey. About two-thirds of these programs could produce evidence that they responded. The requirement for grantees to submit DCPO’s semiannual survey is outlined in DOJ’s grant award notification letter that drug court program grantees receive at the beginning of their grant period. In addition, the survey is made available in the grantee application kit as well as on DCPO’s website. However, other than these steps, DCPO has not consistently notified its drug court program grantees of the semiannual reporting requirements nor has it routinely forwarded the survey to grantees. At the time of our review, DCPO had taken limited action to improve grantees’ compliance with the data collection survey requirements. DCPO officials said that they generally had not followed up with drug court program grantees that did not respond to the survey and had not taken action towards the grantees that did not respond to the semiannual data collection reporting requirement. Results from our follow-up structured interviews showed that DCPO had not followed up to request completed surveys from about 70 percent of the drug court program grantees that were nonrespondents during the January to June 2000 reporting period and from about 76 percent of the nonrespondents for the July to December 2000 reporting period. DCPO has had other difficulties managing its data collection effort. 
Specifically, (1) DCPO inadvertently instructed drug court program grantees not to respond to questions about program participants’ criminal recidivism while in the program; (2) confusion existed between DCPO and its contractor, which was assigned responsibility for the semiannual data collection effort, over who would administer DCPO’s data collection survey during various reporting periods; and (3) some grantees were using different versions of DOJ’s survey instruments to respond to the semiannual data collection reporting requirement. The overall success of a drug court program depends on whether defendants in the program stay off drugs and do not commit more crimes after they complete the program. In our 1997 report we recommended that drug court programs funded by discretionary grants administered by DOJ collect and maintain follow-up data on program participants’ criminal recidivism and, to the extent feasible, follow-up data on drug use relapse. In 1998, DCPO required its implementation and enhancement grantees to collect and provide performance and outcome data on program participants, including data on participants’ criminal recidivism and substance abuse relapse after they leave the program. However, in 2000, DCPO revised its survey and eliminated the questions that were intended to collect post-program outcome data. The DCPO Director said that DCPO’s decision was based on, among other things, drug court program grantees indicating that they were not able to provide post-program outcome data and that they lacked sufficient resources to collect such data. DCPO, however, was unable to produce specific evidence from grantees (i.e., written correspondence) that cited difficulties with providing post-program outcome data. The Director said that difficulties have generally been conveyed by grantees in person or through telephone conversations, or are evidenced by the lack of responses to the post-program questions on the survey.
Contrary to DCPO’s position, evidence exists that supports the feasibility of collecting post-program performance and outcome data. During our 1997 survey of the drug court programs, 53 percent of the respondents said that they maintained follow-up data on participants’ rearrest or conviction for a nondrug crime. Thirty-three percent said that they maintained follow-up data on participants’ substance abuse relapse. Recent information collected from DCPO grantees continues to support the feasibility of collecting post-program performance and outcome data. The results of structured interviews we conducted in 2001 with a representative sample of DCPO-funded drug court programs showed that an estimated two-thirds of the DCPO-funded drug court programs maintained criminal recidivism data on participants after they left the program. About 84 percent of these programs maintained such data for 6 months or more. Of the remaining one-third that did not maintain post-program recidivism data, it would be feasible for about 63 percent to provide such data. These estimates suggest that about 86 percent of DCPO-funded drug court programs would be able to provide post-program recidivism data if requested. The results of structured interviews we conducted in 2001 with a representative sample of DCPO-funded drug court programs also showed that about one-third of the DCPO-funded drug court programs maintained substance abuse relapse data on participants after they left the program. About 84 percent of these programs maintained such data for 6 months or more. Of the estimated two-thirds that did not maintain post-program substance abuse relapse data, it would be feasible for about 30 percent to provide such data. These estimates suggest that about 50 percent of DCPO-funded drug court programs would be able to provide post-program substance abuse data if requested.
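The combined estimates above are essentially weighted averages: the share of programs already maintaining post-program data, plus the feasible share of the remainder. The sketch below is an illustration of that arithmetic using the rounded figures quoted in the text, not the report's own computation (which used unrounded sample data, so its published percentages differ slightly).

```python
# Illustrative arithmetic behind the combined feasibility estimates,
# using the rounded survey figures quoted in the text.

def combined_share(maintaining, feasible_among_rest):
    """Estimated share of programs that could provide post-program data:
    those already maintaining it, plus the feasible fraction of the rest."""
    return maintaining + (1 - maintaining) * feasible_among_rest

# Recidivism: two-thirds maintain data; about 63% of the rest could feasibly do so.
recidivism = combined_share(2 / 3, 0.63)   # roughly 0.88 (the report estimates about 86 percent)

# Substance abuse relapse: one-third maintain data; about 30% of the rest could.
relapse = combined_share(1 / 3, 0.30)      # roughly 0.53 (the report estimates about 50 percent)

print(f"recidivism: {recidivism:.0%}, relapse: {relapse:.0%}")
```

The small gaps between these rounded results and the report's published estimates reflect rounding of the underlying sample proportions.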
According to survey results collected by the Drug Court Clearinghouse in 2000 and 2001, a significant number of the drug court programs were able to provide post-program outcome data. For example, about 47 percent of the DCPO-funded adult drug court programs that responded to the Drug Court Clearinghouse’s 2000 operational survey reported that they maintained some type of follow-up data on program participants after they have left the program. Of these drug court programs, about 92 percent said that they maintained follow-up data on recidivism and about 45 percent said that they maintained follow-up data on drug usage. Of the DCPO-funded adult and juvenile drug court programs operating for at least a year that responded to the Drug Court Clearinghouse’s annual survey that was published in 2001, about 56 percent were able to provide follow-up data on program graduates’ recidivism and about 55 percent were able to provide follow-up data on program graduates’ drug use relapse. Operating under a cooperative agreement with DCPO, the Drug Court Clearinghouse has successfully collected performance and outcome data through an annual survey of all operating adult, juvenile, family, and tribal drug court programs, including those funded by DCPO. In addition, as previously noted, the Drug Court Clearinghouse has generally administered an operational survey to adult drug court programs every 3 years, including those funded by DCPO. The Drug Court Clearinghouse annually disseminates the results from its annual survey and has periodically published comprehensive drug court survey reports that provide detailed operational, demographic, and outcome data on the adult drug court programs identified through its data collection efforts. Although funded by DOJ, the Drug Court Clearinghouse has not been required to primarily collect and report separately on the universe of DCPO-funded programs. 
In addition, no comprehensive or representative report has been produced by DCPO or the Drug Court Clearinghouse that focuses primarily on the performance and outcome of DCPO-funded drug court programs. Instead, DCPO instructed the Drug Court Clearinghouse, in July 2001, to eliminate recidivism data from its survey publications. Although the Drug Court Clearinghouse has developed and implemented survey instruments to periodically collect and disseminate recidivism and relapse data, the DCPO Director had concerns with the quality of the self-reported data collected and the inconsistent time frames for which post-program data were being collected by drug court programs. In response to recommendations in our 1997 report, DOJ undertook, through NIJ, an effort to conduct a two-phase national impact evaluation focusing on 14 selected DCPO-funded drug court programs. This effort was intended to include post-program data within its scope and to involve the use of nonparticipant comparison groups. However, various administrative and research factors hampered DOJ’s ability to complete the NIJ-sponsored national impact evaluation, which was originally to be completed by June 30, 2001. As a result, DOJ fell short of its objective, discontinued this effort, and is considering an alternative study that, if implemented, is not expected to provide information on the impact of federally funded drug court programs until 2007. Unless DOJ takes interim steps to evaluate the impact of drug court programs, the Congress, the public, and other drug court stakeholders will not have sufficient information in the near term to assess the overall impact of federally funded drug court programs. The overall objective of the NIJ-sponsored national evaluation was to study the impact of DCPO-funded drug court programs using comparison groups and examining, among other things, criminal recidivism and drug use relapse.
This effort was to be undertaken in two phases and to include the collection of post-program outcome data. The objectives for phase I, for which NIJ awarded a grant to RAND in August 1998, were to (1) develop a conceptual framework for evaluating the 14 DCPO-funded drug court programs, (2) provide a description of the implementation of each program, (3) determine the feasibility of including each of these 14 drug court programs in a national impact evaluation, and (4) develop a viable design strategy for evaluating program impact and the success of the 14 drug court programs. The design strategy was to be presented in the form of a written proposal for a supplemental noncompetitive phase II grant. The actual impact evaluation and an assessment of the success of the drug court programs were to be completed during phase II of the study using a design strategy resulting from phase I. NIJ’s two-phase national impact evaluation was originally planned for completion by June 30, 2001. Phase I was awarded for up to 24 months and was scheduled to conclude no later than June 30, 2000. However, phase I was not completed until September 2001—15 months after the original project due date. Phase II, which NIJ expected to award after the satisfactory submission of a viable design strategy for completing an impact evaluation, has since been discontinued. Various administrative and research factors contributed to delays in the completion of phase I and DOJ’s subsequent decision to discontinue the evaluation. The factors included (1) DCPO’s delay in notifying its grantees of RAND’s plans to conduct site visits; (2) RAND’s lateness in meeting task milestones; (3) NIJ’s multiple grant extensions to RAND that extended the timeframe for completing phase I and further delayed NIJ’s subsequent decision to discontinue phase II; and (4) the inability of the phase I efforts to produce a viable design strategy that was to be used to complete a national impact evaluation in phase II.
Phase I of the NIJ-sponsored study was initially hampered by DCPO’s delay in notifying its grantees of plans to conduct the national impact evaluation. In November 1998, DCPO agreed to write a letter notifying its grantees of RAND’s plan to conduct the national evaluation. The notification letters were not sent until March 1999. As a result, drug court program site visits, which RAND had originally planned to complete by February 1999, were not completed until July 1999. Although RAND completed most of the tasks associated with the national evaluation phase I objectives, it was generally late in meeting task milestones. The conceptual framework for the evaluation of 14 DCPO-funded drug court programs, which RAND was originally scheduled to complete by September 1999, was submitted to NIJ in May 2000—8 months after the original task milestone. This timeframe, according to RAND, was affected by the delay in DOJ’s initiation of site visits. NIJ officials said that RAND also did not deliver a complete description and analysis of drug court implementation issues, which was also due in September 1999; NIJ did not receive this analysis until the first draft of RAND’s report in March 2001. The feasibility study, which was originally scheduled to be completed by RAND in September 1999, was provided to NIJ in November 1999. This study informed NIJ of RAND’s concerns with the evaluability of some of the 14 selected DCPO sites. The viable design strategy proposal for evaluating program impact at each of the 14 drug court programs, which RAND was originally expected to complete by May 1999, was not completed. In addition, as discussed below and detailed in appendix VI, RAND was consistently late in meeting the extended milestones for delivery of the final product for phase I.
Although RAND raised concerns in November 1999 regarding the feasibility of completing a national impact evaluation at some of the 14 selected DCPO sites, NIJ continued to grant multiple no-cost extensions that further extended the completion of phase I. The first no-cost grant extension called for phase I of the project to end by September 30, 2000; the second no-cost extension called for phase I to end by December 31, 2000; and the final extension authorized completion of phase I by May 31, 2001. Despite the multiple extensions and RAND’s repeated assurances that the phase I report was imminent, a final phase I report was not completed until September 18, 2001—21 months after the original milestone for completion of phase I. NIJ officials said that, in retrospect, they should have discontinued this effort sooner. Appendix VI provides additional details on the phase I delays in the NIJ-sponsored effort to complete a national impact evaluation. Phase I of the NIJ-sponsored national impact evaluation did not produce a viable design strategy that would enable an impact evaluation to be completed during phase II using the selected DCPO-funded drug court programs. RAND did offer an alternative approach. However, this approach did not address the original objective—to conduct a national impact evaluation. During its feasibility study, RAND rated the evaluability of the 14 program sites as follows: 4 sites as poor or neutral/poor, 5 as neutral, and 5 as neutral/good or good. In response, NIJ and DCPO asked RAND to consider completing the evaluation using those DCPO-funded program sites that were deemed somewhat feasible. RAND, however, was not receptive to this suggestion and did not produce a viable design strategy based on the 14 DCPO-funded programs or the subset of DCPO-funded programs that were deemed feasible to use in phase II to evaluate the impact of federally funded drug court programs.
As a result, DOJ continues to lack a design strategy for conducting a national impact evaluation that would enable it to address the impact of federally funded drug court programs in the near term. To address the need for the completion of a national impact evaluation, DCPO and NIJ are considering plans to complete a longitudinal study of drug-involved offenders in up to 10 drug court program jurisdictions. The DCPO Director said that the study would be done at a national level, and the scope would include comparison groups and the collection of individual-level and post-program recidivism data. DOJ expects that this project, which is in its formative stage, if implemented, will take up to 4 years to complete—with results likely in 2007. We recognize that it would take time to design and implement a rigorous longitudinal evaluation study and that, if properly implemented, such an effort should better enable DOJ to provide information on the overall impact of federally funded drug court programs. However, its 2007 completion timeframe will not enable DOJ to provide the Congress and other stakeholders with near-term information on the overall impact of federally funded drug court programs that has been lacking for nearly a decade. Despite a significant increase in the number of drug court programs funded by DCPO since 1997 that are required to collect and maintain performance and outcome data, DOJ continues to lack vital information on the overall impact of federally funded drug court programs. Furthermore, the agency’s alternative plan for addressing the impact of federally funded drug court programs will not offer near-term answers on the overall impact of these programs. Improvements in DCPO’s management of the collection and utilization of performance and outcome data from federally funded drug court programs are needed.
Additionally, more immediate steps from NIJ and DCPO to carry out a methodologically sound national impact evaluation could better enable DOJ to provide Congress and other drug court program stakeholders with more timely information on the overall impact of federally funded drug court programs. Until DOJ takes such actions, the Congress, public, and other stakeholders will continue to lack sufficient information to (1) measure long-term program benefits, if any; (2) assess the impact of federally funded drug court programs on the criminal behavior of substance abuse offenders; or (3) assess whether drug court programs are an effective use of federal funds. To improve the Department of Justice’s collection of data on the performance and impact of federally funded drug court programs, we recommend that the Attorney General (1) develop and implement a management information system that is able to track and readily identify the universe of drug court programs funded by DCPO; (2) take steps to ensure and sustain an adequate grantee response rate to DCPO’s data collection efforts by improving efforts to notify and remind grantees of their reporting requirements; (3) take corrective action towards grantees who do not comply with DOJ’s data collection reporting requirements; (4) reinstate the collection of post-program data in DCPO’s data collection effort, selectively spot-checking grantee responses to ensure accurate reporting; (5) analyze performance and outcome data collected from grantees and report annually on the results; and (6) consolidate the multiple DOJ-funded drug court program-related data collection efforts to better ensure that the primary focus is on the collection and reporting of data on DCPO-funded drug court programs.
To better ensure that needed information on the impact of federally funded drug court programs is made available to the Congress, public, and other drug court stakeholders as early as possible, we also recommend that the Attorney General take immediate steps to accelerate the funding and implementation of a methodologically sound national impact evaluation and to consider ways to reduce the time needed to provide information on the overall impact of federally funded drug court programs. Furthermore, we recommend that steps be taken to implement appropriate oversight of this evaluation effort to ensure that it is well designed and executed, and remains on schedule. We requested comments on a draft of this report from the Attorney General. We also requested comments from RAND on a section of the draft report pertaining to its efforts to complete phase I of NIJ’s national evaluation effort. On April 3, 2002, DOJ provided written comments on the draft report (see app. VII). The Assistant Attorney General for the Office of Justice Programs noted that we made several valuable recommendations for improving the collection of data on the performance and impact of federally funded drug court programs and outlined steps DOJ is considering to address two of the six recommendations.
However, concerning the remaining four recommendations for improving DOJ's data collection effort, DOJ does not specifically outline any plans (1) for taking corrective action towards grantees who do not comply with DCPO's data collection reporting requirements; (2) to reinstate the collection of post-program data in DCPO's data collection effort, despite the evidence cited in our report supporting the feasibility of collecting post-program data; (3) to analyze and report results on the performance and outcome of DCPO grantees; and (4) to consolidate the multiple DOJ-funded drug court program-related data collection efforts to ensure that the primary focus of any future efforts is on the collection and reporting of data on DCPO-funded programs. Although DOJ points out in its comments that a number of individual program evaluation studies have been completed, no national impact evaluation of these programs has been done to date. We continue to believe that until post-program follow-up data on program participants are collected across a broad range of programs and also included within the scope of future program and impact evaluations (including nonprogram participant data), it will not be possible to reach firm conclusions about whether drug court programs are an effective use of federal funds or whether different types of drug court program structures funded by DCPO work better than others. Also, unless these results are compared with those on the impact of other criminal justice programs, it will not be clear whether drug court programs are more or less effective than other criminal justice programs. These limitations have prevented firm conclusions from being drawn on the overall impact of federally funded drug court programs.
With respect to our recommendations for improving DOJ’s drug court program-related impact evaluation efforts, DOJ, in its comments, outlines steps it is taking to complete a multisite impact evaluation and its plans to monitor the progress of this effort and to provide interim information during various intervals. As discussed on page 18 of this report, this effort is intended to be done at a national level, and the scope is to include comparison groups and the collection of individual-level and post-program recidivism data. On April 1, 2002, RAND provided written comments on the segment of the draft report relating to DOJ’s efforts to complete a national impact evaluation (see app. VIII). In its comments, RAND, as we do in our report, acknowledges the need for improvements in the data collection infrastructure for DCPO-funded drug court programs. RAND notes its rationale for why it views the deliverables associated with phase I of the NIJ-sponsored national impact evaluation as being timely and notes that researchers generally have discretion to revise timelines and scopes of work, with the agreement of the client. However, as we point out in our report (pp. 17-18 and app. VI), RAND requested several no-cost extensions to complete the deliverables for various task milestones and did not produce a viable design strategy for addressing the impact of DCPO-funded drug court programs. In addition, NIJ officials said that RAND also did not deliver a complete description and analysis of drug court implementation issues to NIJ until it received the first draft of RAND’s report in March 2001. The deliverable RAND refers to in its comment letter was a paper that RAND had prepared for the National Institute on Drug Abuse, which NIJ never considered to be a product under the grant to evaluate the impact of DCPO-funded drug court programs. As we also pointed out in our report (p. 17 and app. 
VI), NIJ was not amenable to RAND changing the scope or methodology of the national impact evaluation effort. In addition, RAND commented that a “simple” evaluation design was expected. NIJ’s original objective, however, never called for a simple evaluation design, but rather a viable design strategy involving the use of comparison groups and the collection of post-program data. We conducted our work at DOJ headquarters in Washington, D.C., between March 2001 and February 2002 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will provide copies of this report to the Attorney General, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact Daniel C. Harris or me at (202) 512-2758 or at ekstrandl@gao.gov. Key contributors to this report are acknowledged in appendix IX. Our overall objective for this review was to assess how well the Department of Justice (DOJ) has implemented efforts to collect performance and impact data on federally funded drug court programs. We specifically focused on DOJ’s (1) Drug Courts Program Office’s (DCPO) efforts to collect performance and outcome data from federally funded drug court programs and (2) National Institute of Justice’s (NIJ) efforts to complete a national impact evaluation of federally funded drug court programs. While there are drug court programs that receive funds from other federal sources, our review focused on those programs receiving federal funds from DCPO, which is DOJ’s component responsible for administering the federal drug court program under the Violent Crime Act. 
The scope of our work was limited to (1) identifying the processes DCPO used to implement its semiannual data collection effort; (2) determining DCPO grantees' compliance with semiannual data collection and reporting requirements; (3) determining what action, if any, DCPO has taken to monitor and ensure grantee compliance with the data collection reporting requirements; (4) identifying factors and barriers that may have contributed to a grantee's nonresponse and to delays in and the subsequent discontinuation of the NIJ-sponsored national evaluation of DCPO-funded programs; and (5) identifying improvements that may be warranted in DOJ's data collection efforts. To assess how well DCPO has implemented efforts to collect performance and outcome data from federally funded drug court programs, we (1) interviewed appropriate DOJ officials and other drug court program stakeholders and practitioners; (2) reviewed DCPO program guidelines to determine the drug court program grantee data collection and reporting requirements; (3) analyzed recent survey data collected by DCPO and the Drug Court Clearinghouse and Technical Assistance Project (Drug Court Clearinghouse) to obtain information on the number of drug court programs that have been able to provide outcome data; and (4) conducted structured interviews with a statistically valid probability sample of DCPO-funded drug court programs to determine (a) the programs' ability to comply with DCPO's data collection requirements, (b) whether the programs had complied with the data collection requirements, and (c) for those programs that did not comply with the data collection requirements, why they did not comply and what action, if any, DCPO had taken. For our structured interviews, we selected a stratified, random sample of 112 DCPO-funded drug court programs from a total of 315 drug court programs identified by DOJ as DCPO grantees in 2000. 
We stratified our sample into two groups based on whether the programs were listed in DCPO's database as respondents or nonrespondents to the required DCPO semiannual data collection survey in year 2000. To validate the accuracy of the list provided by DCPO, we compared the listing of 315 drug court programs identified as required to comply during a year 2000 reporting period with information on drug court program-related grant awards made by DCPO, provided by OJP's Office of the Comptroller, to determine whether each program was a DCPO grantee during the year 2000 reporting period. We defined a respondent as any drug court program grantee that was identified in DCPO's database as having responded to the DCPO survey during each applicable year 2000 reporting period. We defined a nonrespondent as a drug court program grantee that was identified in DCPO's database as not having responded to the DCPO survey during any applicable year 2000 reporting period. We used a structured data collection instrument to interview grantees; we interviewed 73 nonrespondents and 39 respondents. All results were weighted to represent the total population of drug court programs operating under a DCPO grant in year 2000. All statistical samples are subject to sampling errors. Measures of sampling error are defined by two elements: the width of the confidence intervals around the estimate (sometimes called the precision of the estimate) and the confidence level at which the intervals are computed. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our sample results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn.
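The weighting step described above can be sketched in a few lines of Python. This is an illustrative sketch only, not GAO's actual computation: the per-stratum population counts and the interview answers are assumed, since the report gives only the 315-grantee total and the 73 and 39 completed interviews per stratum.

```python
# Illustrative sketch (NOT GAO's actual computation) of weighting results
# from a stratified random sample up to the full grantee population.
# Stratum population sizes (N_h) and "yes" counts are assumed; the report
# gives only the 315-grantee total and the 73/39 interview counts.

strata = {
    # stratum: (population N_h, interviews n_h, interviewees answering "yes")
    "nonrespondents": (190, 73, 30),  # N_h and yes-count assumed
    "respondents":    (125, 39, 25),  # N_h and yes-count assumed
}

total_population = sum(N for N, _, _ in strata.values())  # 315 grantees

# Each interviewee stands in for N_h / n_h grantees in its stratum.
weighted_yes = sum(N / n * yes for N, n, yes in strata.values())
weighted_share = weighted_yes / total_population

print(f"Weighted population estimate: {weighted_share:.1%} of grantees")
```

Because the two strata are sampled at different rates, the weighted estimate can differ noticeably from the raw 55 of 112 interviewees answering "yes"; that difference is exactly what the weighting corrects for.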
As a result, we are 95 percent confident that each of the confidence intervals based on the structured interviews will include the true value in the study population. All percentage estimates from the structured interviews have sampling errors of plus or minus 10 percentage points or less unless otherwise noted. For example, if a percentage estimate is 60 percent and the 95 percent confidence interval is plus or minus 10 percentage points, we have 95 percent confidence that the true value in the population falls between 50 percent and 70 percent. We performed limited verification of the drug court programs in our sample that were identified as nonrespondents in DCPO's database to determine whether they were actually DCPO grantees in 2000. Data obtained from the drug court programs were self-reported and, except for evidence obtained to confirm grantee compliance with DCPO's year 2000 reporting requirements, we generally did not validate the responses. We also did not fully verify the accuracy of the total number, or universe, of drug court programs provided to us by DCPO and the Drug Court Clearinghouse. To assess DOJ's efforts to complete a national impact evaluation of federally funded drug court programs, we interviewed officials from (1) NIJ, who were responsible for DOJ's national evaluation effort; (2) DCPO, who were responsible for administering the federal drug court program under the Violent Crime Act; and (3) RAND, who were awarded the NIJ grant to complete phase I of the national evaluation effort.
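The arithmetic behind the 60 percent plus-or-minus-10-points example can be sketched with the standard normal approximation for a proportion. The effective sample size and critical value below are assumptions for illustration; GAO's actual sampling errors would also reflect the stratified design and weights.

```python
import math

# Normal-approximation sketch of the margin of error in the report's
# example: a 60 percent estimate with +/- 10 percentage points at the
# 95 percent confidence level. The effective sample size n is an
# assumption chosen for illustration; it is not from the report.

p = 0.60   # point estimate (proportion) from the sample
n = 96     # assumed effective sample size
z = 1.96   # standard normal critical value for 95 percent confidence

margin = z * math.sqrt(p * (1 - p) / n)   # half-width of the interval
lower, upper = p - margin, p + margin

print(f"{p:.0%} +/- {margin:.1%}  ->  95% CI: [{lower:.1%}, {upper:.1%}]")
```

With these assumed inputs the margin works out to about 9.8 percentage points, matching the report's "plus or minus 10 percentage points" illustration.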
To identify the various administrative and research factors that hampered the completion of DOJ's national impact evaluation, we (1) interviewed NIJ and RAND officials who were responsible for the research project; (2) reviewed project objectives, tasks, and milestones outlined in NIJ's original solicitation and the NIJ approved RAND proposal and grant award; (3) reviewed correspondence between NIJ and RAND from 1998-2001; and (4) reviewed various project documents, including (a) RAND's evaluability assessment, (b) progress reports submitted to NIJ, (c) RAND's requests for no-cost extensions, (d) NIJ grant adjustment notices, (e) RAND's phase I draft report, and (f) RAND's phase I final report. Additionally, we compared project task milestones included in the NIJ approved RAND proposal with the actual project task completion dates. To determine the universe and DCPO funding of drug court programs, we (a) interviewed appropriate DOJ officials and other drug court program stakeholders and practitioners; (b) reviewed and analyzed grant information obtained from DOJ's Office of Justice Programs grant management information system and DCPO; (c) reviewed and analyzed information on the universe of drug court programs maintained by the Drug Court Clearinghouse; and (d) reviewed congressional appropriations and DOJ press releases. We attempted to verify information on the universe of DCPO-funded drug court programs, but as the findings in our report note, we were unable to do so due to inefficiencies in DOJ's drug court-related grant information systems. We were able to validate and correct some of the information provided by the various sources noted above through a comparison of the various databases noted and the primary data we had collected from drug court programs during our 1997 review and during our year 2001 follow-up structured interviews with a stratified, random sample of DCPO-funded drug court programs. 
We conducted our work at DOJ headquarters in Washington, D.C., between March 2001 and February 2002 in accordance with generally accepted government auditing standards. Based on information available as of December 31, 2001, drug court programs were operating in 48 states, the District of Columbia, and Puerto Rico. New Hampshire and Vermont were the only states without an operating drug court program but both have programs being planned. Guam also has programs being planned. California, Florida, Louisiana, Missouri, New York, and Ohio account for 344, or almost 44 percent, of the 791 operating drug courts. Figure 4 shows the number of operating drug court programs in each jurisdiction. Populations targeted by U.S. drug court programs included adults, juveniles, families, and Native American tribes. Table 2 shows the breakdown by target population of operating and planned drug court programs. As Table 3 shows, drug court programs in the United States vary by target population and program status and have received various types of grants from the DOJ Drug Courts Program Office (DCPO). Table 4 shows the number and total amount of DCPO grants awarded to plan, implement, or enhance U.S. drug court programs from fiscal years 1995 through 2001. 
- NIJ issues solicitation for national evaluation of drug court programs.
- NIJ awards grant to RAND.
- RAND requests DCPO to write letters to 14 DCPO-funded sites regarding site visits for the national evaluation.
- RAND submits written progress report to NIJ (no problems or changes were noted).
- Scheduled milestone for completion of site visits.
- RAND informs NIJ that it was still awaiting DCPO introductory letter to 14 DCPO-funded sites.
- DCPO sends letter notifying 14 sites of the national evaluation.
- Scheduled milestone for completion of phase II design strategy.
- Written progress report submitted by RAND (no problems or changes were noted).
- Scheduled milestone for completion of conceptual framework.
- RAND provides evaluability assessment of 14 sites to NIJ, noting feasibility concerns.
- RAND requests conference with NIJ to discuss evaluability assessment.
- NIJ informs RAND that DCPO still wants impact evaluations on some of the 14 sites.
- RAND submits conceptual framework for 14 sites to NIJ.
- NIJ and DCPO review the conceptual framework.
- NIJ informs RAND that the report on the results of phase I must be submitted prior to the submission of a phase II proposal.
- DCPO requests findings from RAND.
- RAND requests guidance about conceptual framework paper.
- RAND requests the first no-cost extension, through September 30, 2000.
- NIJ informs RAND that phase I findings should be submitted in writing before RAND submits a proposal for phase II.
- RAND informs NIJ that a report on phase I findings would be completed by November 2000.
- RAND submits written progress report to NIJ noting its findings, an alternative strategy, and its request for a no-cost extension to bridge the time period between phase I and phase II.
- NIJ grants RAND its first no-cost extension, through September 30, 2000.
- DCPO and NIJ inquire about the status of the phase I draft report.
- NIJ reminds RAND of the original project requirements for an impact evaluation in phase II.
- RAND inquires about whether the phase I grant would be extended beyond September 30, 2000.
- NIJ asks RAND to complete the phase I report by September 30, 2000, and reiterates that any proposals for phase II should address original solicitation objectives.
- NIJ gives RAND the option to (1) let the phase I grant end and prepare the phase II proposal for a new grant or (2) extend the phase I project timeline to allow time for review of a phase II proposal.
- RAND requests second no-cost extension.
- NIJ grants no-cost extension to RAND, extending completion of phase I until December 31, 2000. NIJ also inquires about status of the draft and reminds RAND that the draft must be submitted before a phase II proposal is accepted. RAND agrees.
- RAND presents results from phase I at the American Society of Criminology conference, noting that the phase I report would be available by the end of December.
- In response to an NIJ inquiry, RAND informs NIJ that a phase I draft report would be completed by the end of January 2001 (NIJ did not extend the grant).
- In response to an NIJ inquiry, RAND informs NIJ that the phase I draft report would be completed in February 2001.
- Written progress report submitted by RAND noting that a draft report will be submitted to NIJ in February 2001 (no problems were noted).
- RAND informs NIJ that a draft phase I report will be completed in March 2001.
- NIJ grants third no-cost extension to RAND, extending completion of phase I until May 31, 2001, to allow for peer review of the forthcoming draft report.
- NIJ receives draft phase I report and submits it to peer reviewers.
- NIJ informs RAND that phase II plans are uncertain.
- NIJ sends peer review results to RAND and inquires as to when the final report could be expected.
- NIJ provides RAND with specific instructions to eliminate the alternative phase II proposal from the final phase I report, noting that RAND's alternative proposal was so different from the project objective that it would be inappropriate to continue the effort.
- RAND meets with NIJ to discuss phase I effort and completion of final report. RAND informs NIJ that the final report will be completed by the end of July 2001.
- Written progress report submitted by RAND (no problems or changes noted).

The following are GAO comments on DOJ's letter of April 3, 2002.

1. In his reviews, Dr. Belenko noted that the long-term post-program impact of drug courts on recidivism and other outcomes is less clear, pointing out that the measurement of post-program outcomes other than recidivism remains quite limited in the drug court evaluation literature. He also noted that the evaluations varied in quality, comprehensiveness, use of comparison groups, and types of measures used and that longer follow-up and better precision in equalizing the length of follow-up between experimental and comparison groups are needed.

2. Dr. Belenko noted that the evaluations reviewed were primarily process, as opposed to impact, evaluations. He also noted that a shortcoming of some of the drug court evaluations was a lack of specificity about data collection time frames, pointing out that several studies lacked a distinction between recidivism that occurs while an offender is under drug court supervision and recidivism occurring after program participation.

Charles Michael Johnson, Nettie Y. Mahone, Deborah L. Picozzi, Jerome T. Sandau, David P. Alexander, Douglas M. Sloane, and Shana B. Wallace made key contributions to this report.
In exchange for the possibility of dismissed charges or reduced sentences, defendants with substance abuse problems agree to be assigned to drug court programs. In drug courts, judges generally preside over the proceedings; monitor the progress of defendants; and prescribe sanctions and rewards in collaboration with prosecutors, defense attorneys, and treatment providers. Most decisions about drug court operations are left to local jurisdictions. Although programs funded by the Drug Courts Program Office (DCPO) must collect and provide performance measurement and outcome data, the Department of Justice (DOJ) has not effectively managed this effort because of (1) its inability to readily identify the universe of DCPO-funded drug court programs, including those subject to DCPO's data collection reporting requirements; (2) its inability to accurately determine the number of drug court programs responding to DCPO's semiannual data collection survey; (3) inefficiencies in the administration of DCPO's semiannual data collection effort; (4) the elimination of post-program impact questions from the data collection survey effort; and (5) the lack of use of the Drug Court Clearinghouse. Various administrative and research factors have also hampered DOJ's ability to complete the two-phase National Institute of Justice-sponsored national impact evaluation study. As a result, DOJ continues to lack vital information needed to determine the overall impact of federally funded programs and to assess whether drug court programs use federal funds effectively.
ACF, a major program office within HHS, is responsible for programs that promote the economic and social well-being of low-income and disadvantaged children, families, and their communities. Headed by the Assistant Secretary for Children and Families, ACF's seven program offices and various staff offices offer policy direction, information services, and funding through a variety of grants to third-party service providers such as state and local governments and nongovernmental organizations. Of the more than $45 billion provided for in fiscal year 2002, over 85 percent went to just five program areas administered by third parties: Temporary Assistance for Needy Families ($16.7 billion), Head Start ($6.5 billion), Foster Care and Adoption Assistance ($6.6 billion), Child Care ($4.8 billion), and Child Support Enforcement and Family Support ($4 billion). ACF's work is driven by the four GPRA goals indicated in its annual performance plan: (1) increase economic independence and productivity for families, (2) improve healthy development, safety, and well-being of children and youth, (3) increase the health and prosperity of communities and tribes, and (4) build a results-oriented organization. As we have previously reported, ACF linked these goals to its funding request by aggregating and consolidating program activities from multiple budget accounts and linking the associated funding requests to sets of performance goals, which it referred to as "objectives" of these four main goals. In fiscal year 2002, ACF's leadership also established nine key priorities to provide targeted opportunities for collaboration on mission-critical crosscutting activities. As figure 1 shows, ACF is headquartered in Washington, D.C., and has 10 regional offices within five broad geographic areas of the country known as hubs.
Regional offices contain about 50 percent of all ACF employees and are responsible for administering most of ACF's programs and ensuring that program and administrative funds are spent in accordance with ACF goals and initiatives. Headquarters is responsible for setting policy, budget formulation, strategic planning, and legislative affairs. Table 1 describes in further detail the responsibilities of key offices and positions as they pertain to strategic planning and budgeting as well as regional-headquarters operations and relations. As a subordinate unit in HHS, ACF is not an independent entity; its processes, activities, and goals must be seen in the context of the general strategic direction in which HHS is moving. For example, HHS' requirement that its agencies provide performance information with agency funding requests for fiscal year 2003 is an outgrowth of the GPRA planning process and recent attention to the need for timely and reliable performance information with which to evaluate programs. In keeping with its One Department initiative, HHS' desire to present a more standardized performance plan for fiscal year 2004 requires its constituent agencies to reduce their total number of performance measures by at least 5 percent while simultaneously increasing their outcome measures by at least 5 percent. The need to respond effectively to these and other priorities and initiatives has led to changes in ACF's work planning processes and/or how performance is evaluated in the regional offices we visited. Many factors affect the nature of ACF's budget and planning process. For example, much of ACF's funding, including, to some extent, how "discretionary" funds may be spent, is directed by statute. Because of the prescriptive nature of the funding requirements in the Head Start authorizing legislation, for instance, nearly 20 percent of the $338.5 million increase Head Start received in fiscal year 2002 was designated for teacher salary increases.
This limits the extent to which ACF controls how Head Start funds are spent. Further, over 70 percent of ACF's budget funds mandatory programs in which funding levels are determined by formula for disbursement or by eligibility rules, regardless of program performance. However, ACF officials told us that, as required by HHS, ACF has taken steps to connect resources and performance by linking the incremental request to key ACF priorities and goals. While this does not explicitly lead to performance-based budget decisions, linking funding requests to expected performance is, as we have previously reported, the first step in defining the performance consequences of budget decisions. As discussed later in this report, it is during budget execution, for mandatory and discretionary programs alike, that ACF's use of training and technical assistance (T/TA) and travel funds and use of staff resources currently show the strongest link between resources and results. To address the objectives in this report, we selected two regional offices (Region VI, Dallas, and Region IX, San Francisco) and three diverse programs (Head Start, Child Support Enforcement, and the Community Services Block Grant) that represent ACF's self-described best examples of how managers used performance information to inform the resource allocation process. We also obtained staff and management views on the challenges to further budget and performance integration. More detailed information on our scope and methodology, including fuller descriptions of the programs we studied, is in appendix III. A glossary follows that appendix. We conducted our work from January through May 2002 in accordance with generally accepted government auditing standards. Formulation of ACF's budget and its performance plan are closely related, but they are not fully integrated.
ACF’s budget and performance plan are based on joint budget and planning guidance issued by HHS in the spring, and the funding request is linked to ACF’s GPRA goals. However, formulation does not begin with evaluating past program performance to inform the upcoming year’s budget request and performance plan. Budget and planning become more closely aligned when the budget request and annual performance plan are sent to the HHS budget and planning staff for review. Finally, allocating resources based on performance is most integrated into day-to-day management during budget execution, which is largely decentralized to the regional offices. (OLAB and OPRE, ACF’s budget and planning offices, respectively, play small roles in this part of the process.) Figure 2 depicts a timeline for a typical ACF budget and planning cycle as well as the roles and responsibilities of the various key players at each stage of the process. ACF’s GPRA planning process follows the budget process and must be completed according to the budget timeline, but formulation does not begin with a formal look-back at program performance to help shape the upcoming year’s budget request and performance plan; thus, the processes are not yet completely integrated. However, ACF does link its funding requests for program activities to GPRA goals. OLAB and OPRE work together to review and clarify the HHS budget and planning guidance, and, as appropriate, distribute supplemental guidance throughout ACF. Also, officials describe frequent communication throughout the formulation process. This is in keeping with practices we have previously reported as those an agency can use to link performance information to the budget process. OLAB is responsible for developing headquarters’ salaries and expense (S&E) budgets with input from program offices. 
Meanwhile, program offices and regions develop program budgets and regional S&E budgets, respectively, with OLAB ensuring that these budgets align with assumptions outlined in HHS guidance. OLAB also ensures that ACF’s budget package as a whole supports ACF’s priorities and the department’s and OMB’s external monitoring and reporting requirements. OPRE oversees the preparation of ACF’s annual performance plan and provides guidance, analysis, and T/TA to the program units as they develop the plan’s substance. For example, in addition to HHS’ guidance—which includes a standardized format for the performance plan and a description of the types of information to be included in each section—OPRE provides a template that combines an example of an ACF program performance plan with a section-by-section explanation of HHS guidance, as well as tips on content. Figure 3 shows an excerpt from OPRE’s template, with shaded areas representing OPRE’s explanations and guidance. The alignment between budget and planning increases during the second phase of ACF budget formulation when HHS planning staff works with ACF to help ensure that the proposed plan and budget are consistent. As an example, senior HHS planning staff described an instance last year in which the ACF draft performance plan showed a particular discretionary program improving its performance by 5 percent a year, but HHS questioned the feasibility of the performance goal since ACF did not request additional funds for that program. ACF agreed to revisit this goal but, because it was set collaboratively with states, did not change it. HHS also requires its operating divisions to present their budgets to the Secretary’s Budget Council (SBC). The Council is chaired by the Assistant Secretary for Budget, Technology, and Finance, and made up of HHS assistant secretaries and other members of HHS senior leadership. 
The presentations provide an opportunity for each HHS operating division to present its budget and discuss its proposals for addressing the Secretary’s initiatives such as fatherhood and healthy marriages. Based on these presentations, the Council makes recommendations on HHS’ budget package. HHS budget staff refines these recommendations and presents a final budget package to the Secretary for his decision. HHS also uses the SBC presentations in the push to more closely relate budget formulation and program performance. Last year, for the first time, HHS required its agencies to present to the Council performance information on their programs. In this first effort, capturing the quality of program results without overwhelming the Secretary with information proved difficult. As a result, the presentation did not afford information robust enough to use at the program level—the level at which the Secretary makes decisions. In hopes of better informing the fiscal year 2004 budget process, and in light of OMB’s decision to publish PART ratings for selected federal programs in the 2004 budget, HHS officials told us that they required the operating divisions to score their 31 selected programs using PART. These scores, along with PART scores independently derived by HHS staff and, where available, OMB PART scores were included in the SBC budget presentations. When the Secretary received the SBC’s budget recommendations, PART assessments for 31 of HHS’s approximately 300 programs were also available. Officials hoped that structuring the information this way would make it easier for the Secretary to use. Budget and planning are more fully integrated during budget execution; that is, at the operational level in the regions where ACF programs are generally administered on a day-to-day basis. 
While the budget execution process varies among hubs and regions, all regions are required to develop and operate according to work plans that link program and agency goals and objectives with expected performance. Regions are expected to spend their funds in accordance with these plans, which are to articulate the activities and projects to be completed that year and how those projects connect to key ACF priorities and goals. Figure 4 depicts an excerpt from a hub work plan. In the offices we visited, ACF employs various strategies to help ensure that resource allocation is driven by program performance, thus strengthening the link between resources and goals. At the program level, ACF officials told us that training and technical assistance (T/TA) and salaries and expense funds are often allocated based on program performance and needs. Collaborative strategies are used at both the local and federal levels to more effectively address common goals and strengthen resource allocation decisions. Finally, managers and staff in both regions told us that organizing and allocating staff resources based on agency goals and program needs helps them feel connected to and responsible for the results their programs achieve as well as the national priorities towards which they are working. Dallas and San Francisco regional staff prioritize and allocate T/TA resources according to agency goals and program needs. For example, Dallas staff uses the Head Start Monitoring and Tracking System and the annual Child Support Enforcement (CSE) self-assessments and financial audits to identify grantees that have or are likely to have T/TA needs during the fiscal year. Based on these assessments, staff create and follow work plans that focus their efforts on those grantees throughout the year. Identifying problems early and working collaboratively helps address and correct issues promptly and constructively. These strategies pay off even when a grantee cannot be saved. 
Dallas officials told us that in Head Start they believe relationships forged with grantees are largely responsible for program managers’ ability to convince a grantee to relinquish its grant voluntarily—a process less costly and time-consuming than a forced termination. For example, last fiscal year ACF discovered that a Head Start grantee had overspent its prior year federal Head Start funds. ACF explained to the grantee that it was unable to provide additional funding for its Head Start program. Because the grantee could not run its program without additional funds, ACF recommended that the grantee consider relinquishing the grant. As a result of ACF’s recommendation, the grantee voluntarily relinquished its program due to financial mismanagement. Dallas officials estimate that the federal government can save as much as $50,000 in legal fees for each grant that is relinquished versus terminated. Further, ACF officials told us, Head Start children and families experience less disruption in service delivery when a grant is relinquished rather than terminated. San Francisco officials told us that they are beginning to use the new Grant Application and Budget Review Instrument (GABI) in conjunction with other information to compare actual grantee performance to performance targets. GABI’s national cost data help identify applicants with unusually high administrative costs, teacher/classroom ratios, and the like. ACF was better able to achieve CSE program goals by partnering with states to create a CSE national strategic plan based on common goals. ACF reports that states and ACF developed and agreed upon the plan’s four goals, related objectives, and indicators. These goals, objectives, and indicators are aligned with the CSE-related portion of the ACF GPRA performance plan. Because ACF and states define and measure the CSE program’s achievements with the same yardstick, they now work together towards a common purpose. 
Furthermore, the GPRA performance measures are the same as those used to determine each state’s CSE incentive payment. In theory, these payments reward states that meet the performance measures. As a result, states have an incentive to work towards the GPRA measures, and ACF can report on state and program performance and explicitly show what level of program performance was achieved nationally for the level of funding in a particular year. Even though a large percentage of funds is driven by formula or eligibility, strategies that leverage resources from a variety of sources and knowledge about grantees’ capacity to deliver services can lead to more informed resource allocation decisions during budget formulation and execution. For example, the administration planned to request funds in fiscal year 2002 for a new program for maternity group home services. ACF had internal discussions about the legislative authority under which the funds should be requested. Both the Runaway and Homeless Youth Act (RHY) and Temporary Assistance for Needy Families (TANF) programs were mentioned as possible candidates. Although at first TANF seemed the more natural choice, ACF ultimately requested the program funding under RHY based on information from regional officials about the state agencies and community providers in their regions as well as their ability to successfully administer these programs—information that headquarters staff may have been too far removed from program implementation to observe. Theoretically, this money will be better spent and program goals are more likely to be achieved than if the funds were appropriated through TANF. The Dallas and San Francisco regional offices also described several performance-informed resource allocation decisions that occurred during budget execution. For example, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 gave tribes the opportunity to run their own CSE programs. 
These programs are directly related to ACF’s first and third GPRA goals: to increase economic independence and productivity for families (goal 1) and to increase the health and prosperity of communities and tribes (goal 3). ACF has begun to direct resources to activities that will prepare tribes to fully support these programs. For example, Dallas officials told us that Dallas’ record for achieving results and its experience with CSE tribal demonstration projects and special improvement projects were part of the reason it received additional funds from headquarters in fiscal year 2001 to develop and pilot a training curriculum for new federal tribal child support specialists; the program was subsequently approved by the CSE program office for use nationwide. In fiscal year 2002, the West-Central Hub again received additional funds to assess tribal/state court relationships within the Hub and identify best practices to be shared nationwide. Regional staff works towards achieving ACF goals and moving the administration’s agenda forward by expending T/TA resources on assisting state and local programs that are already providing compatible services. For example, in Texas, Dallas staff helped the state incorporate the administration’s “Good Start, Grow Smart” early literacy initiative into the training curriculum for the CIRCLE initiative, an existing early literacy training program for Texas Head Start teachers. Similarly, San Francisco staff worked with Arizona state officials to tap into existing programs in Arizona aimed at increasing young fathers’ financial responsibility for their children and use these programs as a vehicle to support the administration’s initiatives to promote responsible fatherhood and healthy marriages. Arizona has implemented a program that helps couples learn relationship, communication, and listening skills to promote healthy marriages. 
Through participation in and representation on interagency councils, ACF seeks to use its resources more efficiently to achieve its goals that cut across HHS and other federal departments. For example, the San Francisco region participates in the Region IX Federal Regional Council (FRC), an interagency body that seeks to foster efficiency and effectiveness through intergovernmental and public/private partnerships to achieve administration goals and priorities. After determining that several of its federal agency members were planning community events to address the administration’s faith-based initiative, the FRC established a working group to share information and coordinate activities, and ACF engaged with other FRC members to organize a youth seminar on the topic. ACF also participated in FRC task forces to address economic development, social, health, and environmental issues in North Richmond and East Palo Alto, two low-income communities in the San Francisco region. These efforts have resulted in community improvements such as expanded Head Start services, employment of TANF recipients, and holiday toy and book drives for needy children. Region IX officials also participated in community meetings in both areas and provided information on ACF funding opportunities, programs, and services in North Richmond. In addition, San Francisco is involved in an FRC initiative focused on employment and economic development strategies in the San Joaquin Valley in support of the administration’s effort to move the welfare reform agenda forward. Similarly, the HHS Regional Managers’ Council helps HHS agencies and components in Region IX work together to achieve crosscutting goals. For example, the Centers for Medicare and Medicaid Services, ACF, and the Health Resources and Services Administration joined forces to address State Children’s Health Insurance Program (SCHIP) and Medicaid issues to increase enrollment among underserved minority children. 
ACF’s Office of Community Services (OCS) and its Regional Liaison initiative further illustrate the value of collaboration in achieving outcomes and goals. To promote the Community Services network—agencies that create, coordinate, and deliver programs and services to low-income Americans—as an asset to other regional activities and to address crosscutting needs at the local level, ACF’s OCS and Dallas piloted an OCS Regional Liaison in 1997. Based on the success of the pilot, OCS liaisons were designated in each region in 1998. As an example of the central role played by the liaisons, the lead liaison in Dallas was instrumental in developing a Head Start Early Alert System that was eventually implemented nationwide. OCS also seeks to broker support for community concerns and goals and to leverage network resources to help fulfill the administration’s initiative on community partnerships. For example, OCS has partnered with the Health Resources and Services Administration to improve services to low-income people by bringing together community health centers and community action agencies to address health concerns at the community level. To address awareness and access concerning public benefits for the aging and disabled community, OCS has partnered with the National Council on Aging and the faith-based community on education and outreach efforts. OCS, in partnership with the Community Services network, created “Results Oriented Management and Accountability” (ROMA). OCS describes ROMA as a goal-oriented framework that binds and holds accountable a local network of community action agencies in a standardized way while allowing them the flexibility to develop their own processes and outcomes consistent with local preferences and state objectives. ROMA is based on the six national performance goals related to the Community Services Block Grant (CSBG) program and balances family-, community-, and agency-level program outcomes. 
Although participation in ROMA itself is voluntary, the CSBG statute requires all states to participate in a performance measurement system by fiscal year 2001—either ROMA or an alternative system. OCS is trying to achieve full ROMA implementation in time for the fiscal year 2003 CSBG program reauthorization. Creating a flexible workforce that can work across program boundaries allows staff to work together to achieve outcomes and focus on total performance throughout a state rather than on individual program outcomes. Most of the ACF/San Francisco office is organized into “state teams” in which staff are responsible for multiple programs within a subset of states in the region rather than being responsible for a single program across the region. Officials told us that this organization allows them to shift duties as necessary when agency priorities shift. For example, an employee on the Arizona/Nevada team was able to shift from working with Nevada on child support issues to working with Arizona on the Child and Family Services Review, a labor-intensive effort, where more staff were needed. In other instances, staff were able to focus on ACF’s crosscutting priorities (e.g., strengthening marriage), which support the purposes of various programs, rather than on each individual program to meet the administration’s vision for ACF. On the California team, staff primarily responsible for the TANF and Child Care programs actively work together and support each other as needed. Dallas created “21st Century Specialists,” employees with multidisciplinary, broadly defined position descriptions that set general performance standards not tied to specific functions or programs, allowing them to carry out a variety of functions within and across programs. 
Dallas officials reported that, given the opportunity to explore and implement new ways to achieve goals, staff have begun to identify crosscutting opportunities and form natural partnerships among programs in order to achieve desired outcomes. While ACF has progressed in better aligning its resources with program goals and desired results, almost all managers and staff we spoke with recognized that strengthening the link between resources and results is a work in progress, and that many challenges still need to be addressed before ACF can more fully integrate budget and planning. ACF has identified several significant barriers to further linking resources and results, including the effects third-party providers have on its ability either to influence program outcomes or to collect and report program performance information; difficulties in determining a particular program’s effectiveness; and the organizational culture change required to support more results-oriented operations. ACF has begun to identify and implement mitigation strategies to address these issues. ACF conducts much of its work through “third parties”—states, localities, and other non-federal service providers—which often limits the extent to which ACF directly influences program outcomes. This is especially true since many ACF programs by law provide grantees flexibility in how federal funds may be spent. Although program activities must meet the general federal purposes of the program, ACF’s grantees are able to make funding choices that may not support the achievement of specific national performance goals or performance targets. Third-party issues can also affect ACF’s ability to report on program results promptly and consistently. For a number of major programs, ACF relies on state administrative data systems for performance information. In many cases, final reports are due 90 to 120 days or more after the federal fiscal year ends, creating a delay in available data. 
Moreover, reporting requirements in many programs are voluntary, giving grantees great flexibility in reporting. As previously discussed, ACF has successfully used collaborative strategies to get providers to buy into and work towards national priorities. ACF has worked to help its service providers develop an understanding of ACF’s GPRA responsibilities and the importance of consistent, prompt, and accurate performance data collection and reporting. ACF used ROMA to respond to the administration’s emphasis on results-based, client-focused accountability and enjoyed a 75 percent implementation rate by fiscal year 1999 even though participation in a performance measurement system was not required until fiscal year 2001. When data collection issues arose as one of the most significant barriers to full ROMA implementation, OCS pledged to use a significant portion of its technical assistance resources and administrative support activities to implement ROMA across the network, including helping grantees increase their capacity for data collection and reporting. ACF is also working with the HHS Data Council to assess unmet data needs for major programs, and, using the collaborative methods described in this report, is progressing in getting grantees to agree to consistent data definitions and reporting requirements for some programs. Since ACF is part of a network of federal, state, local, and nongovernmental efforts aimed at improving long-term health and social outcomes, attributing a particular outcome to any particular effort can be a great challenge. Further, because outcomes may not be known for many years, annually measuring the results of these collective investments, much less any one part of them, is often difficult and may not be particularly useful. To help mitigate these problems, ACF uses information from program evaluations and has also begun to identify intermediate outcomes and monitor progress towards them. 
For example, Head Start is currently undergoing a 6-year study intended to establish evidence of a link between outputs and outcomes for the Head Start program. The study will compare outcomes for Head Start children with those for non-Head Start children while controlling for socioeconomic factors, parenting practices, and demographics. It will then determine conditions that positively or negatively affected the outcomes. ACF and regional staff have also offered training to their employees to help them better understand and articulate the link between program outputs and outcomes, and to develop intermediate performance outcomes and targets necessary to show progress towards longer-term goals. ACF and HHS officials repeatedly told us that the culture change necessary to support and strengthen the linkages between resources and results takes time but is beginning to take root. Some managers and staff reported a noticeable difference over time in employees’ understanding of and ability to define measurable outcomes linked to agency goals and initiatives as well as a desire to hold employees accountable for achieving results. For example, goal-oriented, project-based work plans have become the standard in the regions we visited. Also, performance contracts for both managers and staff are now or soon will be tied to agency goals and initiatives, and are viewed as increasingly focused on outcomes. Managers and staff also report a clearer understanding of the difference between outputs and outcomes, and the use of outcome measures is becoming more common. For example, after providing the training described above on program outputs and outcomes, San Francisco managers reported noticeable improvement in the use and nature of outcomes described in unit workplans. In Dallas, employees are beginning to create their own performance goals—stepping stones to longer-term goals—for which they are held accountable each year. 
Regional managers told us that they have also begun to help program staff break down 5-year program outcomes into 1-year targets geared towards elements states and grantees can accomplish within the reporting time frame. We requested comments on a draft of this report from The Policy Exchange/Institute for Educational Leadership and the Department of Health and Human Services. The Policy Exchange agreed with the substance of the report and we incorporated its technical comments as appropriate. It also made suggestions for future GAO work in this area. HHS generally agreed with the substance of the report and submitted technical comments that were incorporated as appropriate. HHS disagreed with our use of the term “budget execution” to describe its regional offices’ role in resource allocation decisions, which it characterizes as “program implementation.” We view budget execution as a management function that is broader than those activities traditionally performed by a central budget office. A glossary of terms can be found at the end of this report. In addition, we provided drafts of the Dallas and San Francisco regional office appendixes to the appropriate regional officials for technical review and have incorporated their comments where appropriate. As agreed with your office, we are sending copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Please contact me at (202) 512-9573 or Denise Fantone, Assistant Director, at (202) 512-4997 if you or your staff have any questions about this report. Major contributors to this report were Amy Friedlander, Jackie Nowicki, Keith Slade, and James T. Whitcomb. 
The Administration for Children and Families’ (ACF) West-Central Hub is responsible for carrying out ACF programs and initiatives in the 11-state Hub area. When the Hub was created in 1996, the Hub Director created program teams in its two regional offices—Dallas and Denver—and assigned lead program responsibilities to each region based on its strengths. Thus, Dallas has the lead for the Developmental Disabilities, Runaway and Homeless Youth, Technology, and Early Head Start programs. Denver has the lead for Child Welfare, Child Care, and Head Start programs. Also, cross-Hub teams, using staff from both regions, coordinate crosscutting issues and provide a unified approach to meeting the needs of states and other grantees in the Hub. Figure 5 shows how the West-Central Hub is organized. The Regional Hub Director is located in Dallas and is responsible for providing leadership and guidance to all partners (for example, grantees and state and local governments) in the Hub. Dallas (Region VI) serves Arkansas, Louisiana, New Mexico, Oklahoma, and Texas. Denver (Region VIII) serves Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming. Dallas’ Office of Administration and Technology (ATO) “owns” the strategic planning process. ATO provides guidance to program staff with regard to approaching key activities and projects, measuring and monitoring performance, and achieving outcomes highlighted in the work plan. ATO also holds training and workshops throughout the year to help program units and staff understand regional goals and strategies. Two program offices in each region support and administer ACF grantees and programs in the Hub. Strategic planning in Dallas is a dynamic process that is sensitive to changing circumstances in the region and at headquarters. The Hub work plan is developed via a strategic planning process dependent on top-down guidance from senior leadership and bottom-up input from staff. 
Aligning regional staffing responsibilities with the goals in the work plan has encouraged innovation among staff and fostered clearer linkages between resources and results. The regional work plan and Dallas’ project management systems reinforce these linkages. Strategic planning in Dallas is an integrated and evolving process. When planning and implementing projects that support the work plan, Dallas leadership and staff continually reevaluate evolving priorities and circumstances affecting their work. As new needs, priorities, or project ideas surface during the year, key activities and associated resources are adjusted as necessary. ATO ensures that strategic planning is an inclusive process resulting in a work plan tied to ACF goals and priorities, built from the bottom up, and reflective of senior leadership’s guidance. The strategic planning process reflects program staff’s perspectives on the needs of grantees and the communities they serve and benefits from staff’s first-hand knowledge of strategies that have been successful. ATO encourages collaboration in developing the work plan by sponsoring work sessions and staff meetings throughout the year. Managers told us that working on activities and projects that contribute to national goals has become important to staff over time. An inclusive strategic planning process appears to help maintain a focus on outcomes by making the resulting day-to-day activities meaningful for staff. To assist staff with strategic planning, ATO developed Managing for Results: A Guide for Strategic Management Within the West-Central Hub. The guide focuses on the critical elements of successful strategic planning, successful implementation of the strategic plan, and monitoring, evaluation, and review of the strategic plan. Figure 6 shows the key elements of Dallas’ strategic planning process. Rethinking staffing in two key ways has encouraged employee innovation and strengthened the Hub’s focus on results, according to managers. 
New “21st Century Employee” positions have multidisciplinary, broadly defined position descriptions that allow staff to carry out a variety of functions within and across programs. These new positions have provided staff with the opportunity to take a crosscutting view among programs and identify opportunities to form natural partnerships in order to achieve outcomes. Also, linking employees’ day-to-day activities to goals and priorities through such instruments as outcome-based employee performance contracts has brought encouraging results to the region. For example, managers report that employees have a stronger sense of contribution and responsibility towards achieving program goals. Focusing on outcomes rather than processes and outputs creates an opportunity for individuals to exercise their creativity and run with new project ideas, and helps to hold staff accountable for results in the region. To help staff develop performance contracts clearly aligned with organizational goals, Dallas’ Employee Communication and Performance Management Team created a resource guide of results-based tools and techniques. Employee performance plans tie into the Regional Hub Administrator’s performance contract, which in turn is tied to the work plan. The guide helps employees distinguish between activities—the actions used to produce results—and accomplishments, which are the value-added results produced by the activities. The guide also illustrates how to measure and monitor performance and accomplishments included in performance contracts. Work plans and project management systems reinforce the link between resources (inputs) needed to complete projects (activities) aimed at achieving goals (outcomes). Annually, program offices create program plans on which the regional work plan is based. 
Built in a matrix format, the work plan reinforces the linkage between regional goals and objectives, including GPRA goals, with outcome measures, performance indicators, and the key activities necessary to achieve regional goals and objectives. Also, the key activities are linked to timelines and status indicators noting, for example, when an activity has been completed. Key activities in the program plans crosswalk to the work plan, which tends to contain more broadly defined regional-level activities. Program units request funding for projects that contribute to the activities in the regional work plan and program plans, thus completing the resources-results linkage. Projects and their associated funding are tracked in Dallas’ project management system, the Results-Based Information Tracking System (RBITS). RBITS tracks how project funds are spent and also shows the connection between the project, regional goals, ACF goals, and HHS goals. RBITS is a real-time project management system that helps staff better link resources with results throughout the year. RBITS tracks budgeted vs. actual spending, including remaining funds on a project basis. These data are used to “find” leftover funds that can be shifted from completed projects to new projects or priorities. Also, RBITS historical spending data can provide baseline information for projecting future project cost estimates. For example, the cost of a 6-month technical assistance project in Austin can be reasonably estimated with RBITS historical data. RBITS projects are coded in various ways (for example, by HHS goal, ACF goal, key priorities, staff person, date) allowing ATO to generate various reports from the database; RBITS reports are accessible to everyone working in the region. 
Figure 7 shows portions of the fiscal year 2002 West-Central Hub regional work plan (also called the key priorities matrix) and Dallas Office of Child Support Enforcement (OCSE) fiscal year 2002 program plan for the fatherhood and healthy marriage initiatives. The shaded “key task” in the program plan crosswalks to the shaded “key activity” in the regional work plan. The shaded area of the RBITS report in figure 8 on the adjacent page illustrates examples of projects that support the key tasks and activities described above, and shows how the Dallas Regional Office tracks spending associated with these activities. The Pacific Hub office, located in San Francisco, comprises the Administration for Children and Families’ (ACF) Region 9 (San Francisco) and Region 10 (Seattle). The Hub Director oversees overall Hub operations and is directly responsible for overseeing the Region 9 office. The Hub Director has no line authority over the Regional Administrator, who runs the Seattle office. Thus, the Hub Director relies on cooperation with the Seattle office to effect change in Region 10. Region 9 is organized into three units: a Program Support Unit (PSU), a Self-Sufficiency Unit (SSU), and a Children and Youth Development Unit (CYDU). The Quality Assurance Team (QAT) in PSU coordinates the development of work plans, provides program technical support, and gives technical assistance to states on sampling plans and data validation. SSU and CYDU provide program and financial management services, technical administration, and technical assistance to the states and grantees responsible for administering ACF grant programs. Figure 9 shows the Region 9 organizational structure. Officials told us that the goal of strategic planning in the region is to create processes that link resources to results while engaging, informing, and educating staff about the value of focusing on program outcomes. 
To this end, they have embarked on several efforts: (1) organizing staff into state teams to allow a more integrated approach to service delivery, (2) developing regional work plans that link activities to priorities and goals and focus on outcomes, and (3) issuing accomplishment reports linked to the regional work plan. They said that, as a result, Region 9 is poised to use strategic planning as a management tool to improve results and allocate resources. Region 9 officials told us that they have reorganized to create state teams—rather than program-focused teams—to allow better integration, more efficient use of resources, and better customer service. They said that these teams allow them to be more flexible and to more easily recognize and take advantage of the natural program linkages. In turn, staff can help grantees take advantage of these linkages in their own programs. For example, Region 9 staff helped Arizona use Head Start programs as vehicles to strengthen the role of young fathers in their children’s lives—something that would traditionally be viewed as a Child Support Enforcement (CSE) program goal. State-based teams allow Region 9 to shift focus as agency priorities change and to concentrate on ACF priorities (e.g., strengthening marriage) rather than on specific programs, in keeping with the administration’s vision for ACF. The reorganization also helped the office to continue providing service despite increases in workload and reductions in staffing levels at the regional office—from more than 100 in the early 1990s to approximately 65. Lastly, managers report that their staff are now able to provide a single point of contact for grantees in a state, which is particularly important for Indian tribes. Region 9 officials told us that, over time, they have tried to guide the work planning process and the work plans themselves to link more closely to the Government Performance and Results Act (GPRA), involve staff at all levels, and focus more directly on outcomes. 
Officials said that the early work plans (pre-fiscal year 1998) were simply a list of strategies to be achieved, organized by major ACF priorities. Over time, these work plans have become a way of showing how the region plans to allocate resources to specific activities to achieve GPRA goals and the Secretary’s crosscutting initiatives. More recently, the Hub has also adopted this approach. The plans have also begun to include expected outcomes by which the Hub and region can measure the extent to which they have achieved their stated goals. Table 2 describes some key elements of the work planning process and work plans in fiscal years 1998 to 2002. A senior planning official described the following progression of San Francisco’s work plans and work planning process. Prior to fiscal year 1998, work planning in Region 9 consisted of individual, activity-based unit work plans. In fiscal year 1998, in an attempt to reduce their workload, program staff were not required to participate in creating work plans. Instead, QAT compiled a regional work plan and linked the activities to the seven key priorities ACF had at the time. The region found that the centralized process not only reflected the region’s work less accurately but also weakened the connection staff felt between their work and program goals. Beginning in fiscal year 1999, work planning was turned back to the program units, but QAT provided a work plan template to help the units create more uniform plans focused on outcomes rather than outputs. Managers said this cultural shift was one of the most important changes in the region. To help people understand and articulate the difference between program outputs and outcomes, QAT provided voluntary training sessions.
Senior leadership views the staff’s ability to understand and articulate the difference between the two as a major breakthrough—one that was key to helping staff understand how their performance affects program performance and results, and an important step in holding people accountable for results. In fiscal year 2000, in keeping with the way headquarters program units and senior leadership plan and report, ACF required the regions to crosswalk their activities to ACF’s four GPRA goals. San Francisco was able to accomplish this because the seven key priorities—to which Region 9’s work plan was connected—clearly linked to the goals. Regional managers told us that this helped staff make the connection between their work and ACF’s larger GPRA goals in a way they had not been able to before. Also, on its own initiative, the Pacific Hub created a work plan (in addition to the regional work plan) to address crosscutting initiatives and to better leverage Hub resources. The region continued to strengthen its work plan in fiscal year 2001 by further developing an emphasis on outcomes, and by streamlining its work plans and reports. We observed that the fiscal year 2001 work plan also indicated, for each outcome, key activities to be completed by the region and by headquarters. Managers told us that they began to see staff change the way they thought about their work—the planning process was becoming more than just a process. Fiscal year 2002 was a transition year: ACF's new leadership created nine crosscutting priorities to which the work plans were to be linked. Region 9 included activities related to these priorities in its work planning. Managers view the crosscutting nature of the new priorities as another step forward in their previous efforts to design activities that use Hub resources rather than regional resources. For each activity, the 2002 Hub plan also began to flesh out costs, funding sources, and timelines for completion. 
Region 9 officials told us that accomplishment reports link to work plans and further involve staff in the strategic planning process, reminding staff of how their work relates to program outcomes and agency goals. They said that initially the process included mostly upper management, with varying involvement or participation from staff, but that staff have increasingly participated in reporting. For example, in past years, the Hub Director sent "accomplishment reports" to headquarters that summarized information on the achievements of the regional office. In fiscal year 2000, the regional work plan was amended to include a section for accomplishments specifically linked to strategic resources in the work plan, and the staff responsible for each achievement kept the plan up-to-date. Similar to the work plans, Region 9’s accomplishment reports are organized by initiatives and goals, and have become less process-oriented and more outcome-oriented over time. For example, the fiscal year 1998 accomplishment report to headquarters, the region's first, described the activities staff performed rather than the outcomes they achieved. The fiscal year 1999 accomplishment report began to focus on outcomes by using measures to quantify objectives. In fiscal year 2000, headquarters required that senior staff tie accomplishment reports to their own performance. Region 9 officials said that although accomplishment reports were not required for fiscal years 2001 and 2002, Region 9 provided them anyway and the Hub Director used that information to support her own fiscal year 2001 performance report; she is expected to do the same for fiscal year 2002. After 5 years of strategic planning efforts, Region 9 has progressed in institutionalizing the link between day-to-day activities and program outcomes. Under strong senior leadership, the region has begun to take the next step—using its work plan to manage more effectively.
The Pacific Hub participated in OPRE training in April 2002 to learn how to use the annual GPRA plan as a performance management tool. Specifically, the training was meant to help staff use the performance plan to more effectively target training and technical assistance resources, provide a framework for aligning the administration's key priorities with its mission and goals, and provide opportunities for cross-program collaboration. To this end, OPRE focused on models for linking inputs, activities, outputs, and outcomes as a tool for the regions to develop their work plans. The planned agenda for an upcoming video conference includes developing models on how to achieve the results in their work plans. To address the objectives in this report, we asked the Administration for Children and Families (ACF) to identify several regional offices and programs that officials felt best represented how managers used performance information to inform the resource allocation process. Using their suggestions as a guide, we then selected for inclusion in our study two regional offices (Region 6, Dallas, and Region 9, San Francisco) and three diverse programs (Head Start, Child Support Enforcement, and the Community Services Block Grant). Head Start, begun in 1965, is a $6.5 billion discretionary, federally administered categorical grant program whose primary goal is to promote the school readiness of children in low-income families. ACF administers the Head Start program through the Head Start Bureau and ACF’s regional offices nationwide. ACF awards grants directly to local agencies, which provide a wide range of program services—educational, medical, dental, nutrition, mental health, and social services—to low-income preschool children and their families. The approximately 1,400 service providers include public and private school systems, community action agencies and other private nonprofit organizations, local government agencies, and Indian tribes.
The program supports ACF Goal 2: to improve the healthy development, safety, and well-being of children and youth. The Child Support Enforcement (CSE) program was established in 1975 under Title IV-D of the Social Security Act. It is a mandatory federal program, administered or managed by the states, whose mission is to ensure that children are financially supported by both parents. State and local governments work toward establishing paternity and support orders, locating parents, and enforcing support orders. The Office of Child Support Enforcement (OCSE) is responsible for overseeing the program, which includes providing support to states. The CSE program received almost $4 billion in funding for fiscal year 2002. Collections reached $18.9 billion in fiscal year 2001, but OCSE reported that about $89 billion in child support was legally owed but unpaid at the end of fiscal year 2000. The federal government and the states share both the administrative costs of operating the program and any recovered costs and fees at the rate of 66 percent federal and 34 percent state. The $4 billion in CSE funding includes a $450 million incentives program. The Child Support Performance and Incentive Act of 1998 changed the basis for awarding incentives from cost-efficiency to rewarding achievement on five performance-based outcome measures. In fiscal year 2000, one-third of the incentive payments awarded to those states that met the performance standards were based on the new formula and the remaining two-thirds were based on the old formula. The phase-in will be completed by fiscal year 2002. CSE supports ACF Goal 1: to increase economic independence and productivity for families. The Office of Community Services (OCS) provides support and assistance to states and grantees that provide a range of human and economic development services and activities at the state and local levels.
Working through community action agencies (CAAs) and community development corporations, OCS programs seek to reduce poverty, revitalize low-income communities, and empower low-income individuals and families to become self-sufficient. The $650 million Community Services Block Grant is the primary community service program through which grantees receive OCS funds. To help focus on results, OCS relies on Results Oriented Management and Accountability (ROMA), a goal-oriented framework that holds CAAs accountable in a standardized way while allowing them the flexibility to develop their own processes and outcomes consistent with local preferences and state objectives. We reviewed budget and planning documents for the programs and regions in our study, including strategic plans, annual performance plans, performance reports, budgets, and work plans. We also reviewed a variety of reports for general background information on (1) recent administration initiatives, (2) GPRA implementation, (3) recent public administration literature, and (4) GAO reports on prior case studies and general management reviews. We also obtained staff and management views on the challenges to further budget and performance integration. We conducted structured interviews with agency budget, program, and planning officials in each region and program we studied. We also interviewed departmental budget and planning staff with ACF oversight responsibilities. Among other things, we asked about (1) roles and responsibilities, (2) how performance information was used in program, resource, and staffing decisions, (3) how planning and budgeting were related, and (4) challenges they faced to further budget and performance integration. The following bureau, offices, regions, and programs were included in our review.
The Department of Health and Human Services’ (HHS) Office of the Assistant Secretary for Budget, Technology and Finance, and the Office of the Assistant Secretary for Planning and Evaluation. ACF’s Office of Legislative Affairs and Budget, Office of Planning, Research, and Evaluation, and Office of Regional Operations. ACF’s Head Start Bureau (Head Start program); the Office of Community Services (Community Services Programs); and the Office of Child Support Enforcement (Child Support Enforcement program). ACF’s West-Central Hub, Dallas Regional Office in Texas, and the Pacific Hub, San Francisco Regional Office in California. Although we broadly summarize the views of these officials for reporting purposes, their observations may not necessarily be generalized across ACF. Regarding ACF’s responses about its specific budgeting and planning strategies and practices, where possible, we reviewed supporting documentation. However, we did not observe the actual implementation of these processes and therefore cannot independently verify that they function as indicated in the supporting documentation. We requested comments on a draft of this report from HHS and The Policy Exchange/Institute for Educational Leadership. These comments are discussed in the letter. In addition, we provided drafts of the Dallas and San Francisco Regional Office appendixes to regional officials for technical review and have incorporated their comments where appropriate. We conducted our work from January through May 2002 in accordance with generally accepted government auditing standards. A glossary can be found at the end of this report.
Encouraging a clearer and closer link between budgeting, planning, and performance is essential to improving federal management and instilling a greater focus on results. Through work at various levels within the organization, this report on the Administration for Children and Families (ACF)—and its two companion studies on the Nuclear Regulatory Commission (GAO-03-258) and the Veterans Health Administration (GAO-03-10)—records (1) efforts that managers considered successful in creating linkages between planning and performance information and resource choices and (2) the challenges managers face in creating these linkages. ACF's budget and performance planning processes are clearly related but are not fully integrated. Budget and planning align more closely after ACF sends the budget request and performance plan to the Department of Health and Human Services for review. In addition, unlike budget formulation, budget execution largely occurs in the regional offices, where resource allocation is often driven by program performance. Officials in ACF's Head Start, Child Support Enforcement, and Community Services Block Grant programs described three general ways in which decisions at the programmatic level are influenced by performance: (1) training and technical assistance money is often allocated based on needs and grantee performance, (2) partnerships and collaboration help ACF work with grantees toward common goals and further the administration's agenda, and (3) organizing and allocating regional staff around agency goals allow employees to link their day-to-day activities to longer-term results and outcomes. While ACF must overcome some difficult barriers to further budget and performance integration, it has begun to identify and implement mitigation strategies for some of these issues.
For example, ACF conducts much of its work through nonfederal service providers, which often limits the extent to which ACF can influence national performance goals and can seriously complicate data collection. To address this, ACF has successfully collaborated with providers to develop national performance goals and build data collection capacity. This has also raised awareness of the importance of collecting and reporting performance data uniformly. Since ACF programs are often only part of a network of long-term federal, state, and local efforts to address serious health and social concerns, attributing a particular outcome to a particular program can be difficult. ACF has addressed this issue by using program evaluations to help isolate the effects of a particular program, strengthening the link between outputs and outcomes, and identifying intermediate outputs and outcomes to help measure program performance. The organizational culture change necessary to support the linkages between resources and results takes time, but change is beginning to take root. Some managers and staff reported a noticeable difference in the use and understanding of outcomes versus outputs, and outcome-based performance agreements for managers and staff are becoming more common.
Our work has identified the need for improvements in the federal government’s approach to cybersecurity of its systems and those supporting the nation’s critical infrastructures and in protecting the privacy of PII. While previous administrations and agencies have acted to improve the protections over the information and information systems supporting federal operations and U.S. critical infrastructure, additional actions are needed. Federal agencies need to effectively implement risk-based entity- wide information security programs consistently over time. Since the first FISMA was enacted in 2002, agencies have been challenged to fully and effectively develop, document, and implement the entity-wide information security program required by FISMA to protect the information and information systems that support their operations and assets, including those provided or managed by another agency or contractor. For example, as of February 7, 2017, 19 of 23 federal agencies covered by the Chief Financial Officers Act (CFO Act) that had issued their required annual financial reports for fiscal year 2016 reported that information security control deficiencies were either a material weakness or significant deficiency in internal controls over financial reporting for fiscal year 2016. In addition, inspectors general at 20 of the 23 agencies identified information security as a major management challenge for their agencies. Further, in light of these challenges, we have identified a number of actions to assist agencies in implementing their information security programs. Enhance capabilities to effectively identify cyber threats to agency systems and information. A key activity for assessing cybersecurity risk and selecting appropriate mitigating controls is the identification of cyber threats to computer networks, systems, and information. 
In 2016, we reported on several factors that agencies identified as impairing their ability to identify these threats to a great or moderate extent. The impairments included an inability to recruit and retain personnel with the appropriate skills, rapidly changing threats, continuous changes in technology, and a lack of government-wide information sharing mechanisms. Addressing these impairments will enhance the ability of agencies to identify the threats to their systems and information and put them in a better position to select and implement appropriate countermeasures. Implement sustainable processes for securely configuring operating systems, applications, workstations, servers, and network devices. We routinely find that agencies do not enable key information security capabilities of their operating systems, applications, workstations, servers, and network devices. Agencies were not always aware of the insecure settings that introduced risk to their computing environments. Establishing strong configuration standards and implementing sustainable processes for monitoring and enabling configuration settings will strengthen the security posture of federal agencies. Patch vulnerable systems and replace unsupported software. Federal agencies consistently fail to apply critical security patches on their systems in a timely manner, sometimes doing so years after the patch becomes available. We also consistently identify instances where agencies use software that is no longer supported by their vendors. These shortcomings often place agency systems and information at significant risk of compromise, since many successful cyberattacks exploit known vulnerabilities associated with software products. Using vendor-supported and patched software will help to reduce this risk. Develop comprehensive security test and evaluation procedures and conduct examinations on a regular and recurring basis.
Federal agencies we reviewed often did not test or evaluate their information security controls in a comprehensive manner. The evaluations were sometimes based on interviews and document reviews, limited in scope, and did not identify many of the security vulnerabilities that our examinations identified. Conducting in-depth security evaluations that examine the effectiveness of security processes and technical controls is essential for effectively identifying system vulnerabilities that place agency systems and information at risk. Strengthen oversight of contractors providing IT services. As demonstrated by the OPM data breach of 2015, cyber attackers can sometimes gain entry to agency systems and information through the agency’s contractors or business partners. Accordingly, agencies need to assure that their contractors and partners are adequately protecting the agency’s information and systems. In August 2014, we reported that five of six selected agencies were inconsistent in overseeing the execution and review of security assessments that were intended to determine the effectiveness of contractor implementation of security controls, resulting in security lapses. In 2016, agency chief information security officers (CISOs) we surveyed reported that they were challenged to a large or moderate extent in overseeing their IT contractors and receiving security data from the contractors. This challenge diminished their ability to assess how well agency information maintained by the contractors is protected. Effectively overseeing and reviewing the security controls implemented by contractors and other parties is essential to ensuring that the agency’s information is properly safeguarded. We have several ongoing and planned audit engagements that will continue to assess the effectiveness of agency actions to implement information security programs. 
These engagements include in-depth assessments of information security programs at individual agencies, including OPM and the Centers for Disease Control and Prevention, as well as our biennial review of the adequacy of agencies’ information security policies and practices and their compliance with the provisions of FISMA. Also, on an annual basis, we evaluate information security controls over financial systems and information at seven agencies and incorporate the audit results of agency offices of inspector general during our annual audit of the consolidated financial statements of the federal government. In addition, we are currently conducting an assessment of the Federal Risk and Authorization Management Program (FedRAMP) and have plans to review cyber risk management practices and continuous monitoring programs at federal agencies. The federal government needs to improve its cyber incident detection, response, and mitigation capabilities. Even agencies or organizations with strong security can fall victim to information security incidents due to the existence of previously unknown vulnerabilities that are exploited by attackers to intrude into an agency’s information systems. Accordingly, agencies need to have effective mechanisms for detecting, responding to, and recovering from such incidents. We have previously identified various actions that could assist the federal government in building its capabilities for detecting, responding to, and recovering from security incidents. Expand capabilities, improve planning, and support wider adoption of the government-wide intrusion detection and prevention system. In January 2016, we reported that DHS’s National Cybersecurity Protection System (NCPS) had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information. In addition, adoption of these capabilities at federal agencies was limited.
We noted that expanding NCPS’s capabilities for detecting and preventing malicious traffic, defining requirements for future capabilities, and developing network routing guidance could increase assurance of the system’s effectiveness in detecting and preventing computer intrusions and support wider adoption by agencies. Improve cyber incident response practices at federal agencies. In April 2014 we reported that 24 major federal agencies did not consistently demonstrate that they had effectively responded to cyber incidents. For example, six agencies reviewed had not determined the impact of incidents or taken actions to address the underlying control weaknesses that allowed the incidents to occur, in part because they had not developed comprehensive policies, plans, and procedures for responding to security incidents, and had not tested their incident response capabilities. By developing comprehensive incident response policies, plans, and procedures for responding to incidents and effectively overseeing response activities, agencies will have increased assurance that they will effectively respond to cyber incidents. Update federal guidance on reporting data breaches and develop consistent responses to breaches of PII. As we reported in December 2013, eight agencies that we reviewed did not consistently implement policies and procedures for responding to breaches of PII. For example, none of the agencies had documented the evaluation of incidents and lessons learned. In addition, we noted that OMB guidance calling for agencies to report each PII-related incident— even those with inherently low risk to the individuals affected—within 1 hour of discovery may cause agencies to expend resources to meet reporting requirements that provide little value and divert time and attention from responding to breaches. 
We recommended that OMB update its guidance on federal agencies’ responses to a PII-related data breach and that the agencies we reviewed take steps to improve their response to data breaches involving PII. Updating guidance and consistently implementing breach response practices will improve the effectiveness of government-wide and agency data breach response programs. GAO routinely evaluates agencies’ intrusion detection, response, and mitigation activities during audits of agency information security controls and programs. We plan to continue to do so during ongoing and future engagements. In addition, the Cybersecurity Act of 2015 contains a provision for us to study and publish a report by December 2018 on the effectiveness of the approach and strategy of the federal government to secure agency information systems, including the intrusion detection and prevention capabilities and the government’s intrusion assessment plan. The federal government needs to expand its cyber workforce planning and training efforts. Ensuring that the government has a sufficient number of cybersecurity professionals with the right skills and that its overall workforce is aware of information security responsibilities remains an ongoing challenge. Enhance efforts for recruiting and retaining a qualified cybersecurity workforce. This has been a long-standing dilemma for the federal government. In 2013, agency chief information officers and experts we surveyed cited weaknesses in education, awareness, and workforce planning as a root cause hindering improvements in the nation’s cybersecurity posture. Several experts also noted that the cybersecurity workforce was inadequate, both in numbers and training. They cited challenges such as the lack of role-based qualification standards and difficulties in retaining cyber professionals.
In 2016, agency chief information security officers we surveyed cited difficulties related to having sufficient staff; recruiting, hiring, and retaining security personnel; and ensuring that security personnel have appropriate skills and expertise as posing challenges to their abilities to carry out their responsibilities effectively. Improve cybersecurity workforce planning activities at federal agencies. In November 2011, we reported that only five of eight selected agencies had developed workforce plans that addressed cybersecurity. Further, all eight agencies reported challenges with filling cybersecurity positions, and only three of the eight had a department-wide training program for their cybersecurity workforce. GAO has two current engagements to further review cybersecurity workforce issues in the federal government. The Homeland Security Cybersecurity Workforce Assessment Act of 2014 contains a provision for us to monitor, analyze, and report by December 2017 on the Department of Homeland Security’s implementation of the National Cybersecurity Workforce Measurement Initiative. In addition, the Cybersecurity Act of 2015 calls for us to monitor, analyze, and submit a report by December 2018 on the implementation of this initiative and the identification of cyber-related work roles of critical need by federal agencies. The federal government needs to expand efforts to strengthen cybersecurity of the nation’s critical infrastructures. U.S. critical infrastructures—such as financial institutions, energy production and transmission facilities, and communications networks—are vital to the nation’s security, economy, and public health and safety. Similar to federal systems, the systems supporting critical infrastructures face an evolving array of cyber-based threats.
To help secure infrastructure cyber assets—most of which are owned and operated by the private sector—federal policy and the National Infrastructure Protection Plan provide for a public-private partnership in which federal agencies support or assist their private sector partners in securing systems supporting critical infrastructure. We have identified the following actions that can assist agencies in performing these vital services. Develop metrics to assess the effectiveness of efforts promoting the NIST cybersecurity framework. In December 2015, we reported that NIST and other agencies had promoted the adoption of the Framework for Improving Critical Infrastructure Cybersecurity to critical infrastructure owners and operators and other organizations. Toward this end, DHS established the Critical Infrastructure Cyber Community Voluntary Program to encourage entities to adopt the framework. However, DHS had not developed metrics to measure the success of its activities and programs. In addition, DHS and the General Services Administration had not determined whether to develop tailored guidance for implementing the framework in the government facilities sector as other agencies had done for their respective sectors. DHS concurred with our recommendation to develop metrics, but has not indicated that it has taken action, and DHS and the General Services Administration concurred with our recommendation to determine whether tailored guidance was needed. Develop metrics to measure and report on the effectiveness of cyber risk mitigation activities and the cybersecurity posture of critical infrastructure sectors. In November 2015, we reported that all eight sector-specific agencies reviewed had determined the significance of cyber risk to the nation’s critical infrastructures and had taken actions to mitigate cyber risks and vulnerabilities for their respective sectors.
However, not all sector-specific agencies had metrics to measure and report on the effectiveness of all their activities to mitigate cyber risks or their sectors’ cybersecurity posture. We recommended that agencies lacking metrics develop them and determine how to overcome any challenges to reporting the results of their activities to mitigate cyber risks. Four of the agencies explicitly agreed with our recommendations and identified planned or on-going efforts to implement performance metrics; however, they have not yet provided metrics or reports of outcomes. GAO has several ongoing and planned engagements that will touch on the cybersecurity of national critical infrastructures. Among these engagements, our study of the “Internet of things” addresses the security and privacy implications of this phenomenon. In addition, the Cybersecurity Enhancement Act of 2014 contains a provision for us to assess the extent to which critical infrastructure sectors have adopted a voluntary cybersecurity framework to reduce cyber risks and the success of such a framework for protecting critical infrastructure against cyber threats. We also plan to review the cybersecurity of oil and gas pipeline control systems and the Department of Homeland Security’s efforts to share cyber information with federal and non-federal entities. The federal government needs to better oversee protection of PII. Regarding PII, advancements in technology, such as new search technology and data analytics software for searching and collecting information, have made it easier for individuals and organizations to correlate data and track it across large and numerous databases. In addition, lower data storage costs have made it less expensive to store vast amounts of data. Also, ubiquitous Internet and cellular connectivity make it easier to track individuals by allowing easy access to information pinpointing their locations. 
These advances—combined with the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals—have increased the risk of PII being exposed and compromised. Our work has identified the following actions that need to be taken to better protect the privacy of personal information. Protect the security and privacy of electronic health information. In August 2016, we reported that guidance for securing electronic health information issued by the Department of Health and Human Services (HHS) did not address all key controls called for by other federal cybersecurity guidance. In addition, this department’s oversight efforts did not always offer pertinent technical guidance and did not always follow up on corrective actions when investigative cases were closed. HHS generally concurred with the five recommendations we made and stated that it would take actions to implement them. Ensure privacy when face recognition systems are used. In May 2016, we reported that the Department of Justice had not been timely in publishing and updating privacy documentation for the Federal Bureau of Investigation’s (FBI) use of face recognition technology. Publishing such documents in a timely manner would better assure the public that the FBI is evaluating risks to privacy when implementing systems. Also, the FBI had taken limited steps to determine whether the face recognition system it was using was sufficiently accurate. We recommended that the department ensure required privacy-related documents are published and that the FBI test and review face recognition systems to ensure that they are sufficiently accurate. Of the six recommendations we made, the Department of Justice agreed with one, partially agreed with two, and disagreed with three. The agency has not yet provided information about the actions it has taken to address the recommendations.
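The kind of accuracy testing recommended here can be sketched briefly. The metric below (the fraction of searches whose returned candidate list contains the true identity) and all of the data are hypothetical; nothing in this sketch reflects the FBI's actual system or test procedures:

```python
# Hypothetical sketch of one way to measure face recognition search
# accuracy: the fraction of searches in which the true identity
# appears in the returned candidate list. All data are invented.

def hit_rate(trials):
    """trials: list of (true_id, candidate_list) pairs."""
    hits = sum(1 for true_id, candidates in trials if true_id in candidates)
    return hits / len(trials)

trials = [
    ("subj-1", ["subj-1", "subj-7"]),  # true match returned
    ("subj-2", ["subj-9", "subj-4"]),  # missed
    ("subj-3", ["subj-3"]),            # true match returned
    ("subj-4", ["subj-4", "subj-2"]),  # true match returned
]

print(hit_rate(trials))  # 0.75
```

A real evaluation would also vary candidate list sizes and draw on large, representative photo sets; the point of the sketch is only that "sufficiently accurate" can be made measurable.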
Protect the privacy of users’ data on state-based marketplaces. In March 2016, we reported on weaknesses in technical controls for the “data hub” that the Centers for Medicare and Medicaid Services (CMS) uses to exchange information between its health insurance marketplace and external partners. We also identified significant weaknesses in the controls in place at three selected state-based marketplaces established to carry out provisions of the Patient Protection and Affordable Care Act. We recommended that CMS define procedures for overseeing the security of state-based marketplaces and require continuous monitoring of state marketplace controls. HHS concurred with our recommendations and stated it has taken or plans to take actions to address these recommendations. GAO has several ongoing and planned reviews that address actions intended to protect the privacy of PII. For example, we are assessing agency efforts and government-wide initiatives to reduce or eliminate the use of Social Security numbers. In addition, the Cybersecurity Act of 2015 calls for us to review and report by December 2018 on agency policies and actions taken by the federal government to remove PII from shared cyber threat indicators or defensive measures. Further, the 21st Century Cures Act of 2016 requires us to review and report by December 2018 on the policies and activities of the Office of the National Coordinator for Health Information Technology to ensure appropriate matching to protect patient privacy and security with respect to electronic health records. Recent reports by the Cybersecurity Commission and CSIS identify topical areas and numerous recommendations for the new administration to consider as it develops and implements cybersecurity strategy and policy. In its study, the Commission focused on 10 cybersecurity topics including international issues, critical infrastructure, cybersecurity research and development, cybersecurity workforce, and the Internet of Things. 
CSIS addressed similar topics and identified five major issues related to international strategy, securing government agencies and critical infrastructure, cybersecurity research and workforce development, cybercrime, and defending cyberspace. Over the last several years, GAO has reviewed many of the areas covered by the Commission and CSIS reports. Our conclusions and recommendations are generally directed to specific agencies and may be more limited in scope than the recommendations of the Commission and CSIS. Nevertheless, several of our recommendations are generally consistent with or similar to recommendations made by the Commission and CSIS in the following areas: International cybersecurity strategy. In July 2010, we identified a number of challenges confronting U.S. involvement in global cybersecurity and governance. These include developing a comprehensive national strategy; ensuring international standards and policies do not pose unnecessary barriers to U.S. trade; participating in international cyber-incident response and appropriately sharing information without jeopardizing national security; investigating and prosecuting transnational cybercrime; and contending with differing laws and norms of behavior. We made five recommendations to the administration’s cybersecurity coordinator to address these challenges, to include developing a comprehensive national global cyberspace strategy and defining cyberspace norms. In their recent reports, the Commission and CSIS also identified actions for enhancing international cybersecurity strategy and policies and agreeing on norms of behavior with like-minded nations. Protecting cyber critical infrastructure. 
In November 2015, we reported that sector-specific agencies—federal agencies that are responsible for collaborating with their private sector counterparts in their assigned critical infrastructure sectors—were acting to address sector cyber risk by sharing information, supporting incident response activities, and providing technical assistance. However, they had not developed metrics to measure and improve the effectiveness of their cyber risk mitigation activities or their sectors’ cybersecurity posture. We recommended that the agencies develop performance metrics to monitor and improve the effectiveness of their cyber risk mitigation activities. In their recent reports, the Commission and CSIS also identified actions for enhancing the public-private partnership, including improving information sharing, incident response capabilities, and cyber risk management practices. Promoting use of the NIST cybersecurity framework. In December 2015, we reported that NIST had developed a set of voluntary standards and procedures for enhancing cybersecurity of critical infrastructure, known as the Framework for Improving Critical Infrastructure Cybersecurity. We also reported that although DHS had established a program dedicated to encouraging the framework’s adoption, it had not established metrics to assess the effectiveness of these efforts. We recommended that DHS develop metrics for measuring the effectiveness of efforts to promote and support the framework. Similarly, both the Commission and CSIS have recommended actions to promote and measure use of the framework. Prioritizing cybersecurity research and development (R&D). In June 2010, we reported that the federal government lacked a prioritized national R&D agenda and a data repository to track research and development projects and funding, as required by law.
We recommended that the Office of Science and Technology Policy (OSTP) take several steps, including developing a comprehensive national R&D agenda that identifies priorities for short-term, mid-term, and long-term complex R&D projects and is guided by input from the public and private sectors. Similarly, in its report, the Commission stated that OSTP, as part of an overall R&D agenda, should lead the development of an integrated government-private-sector cybersecurity roadmap for developing defensible systems. Expanding cybersecurity workforce capabilities. As discussed earlier in this statement, we have reported that ensuring that the government has a sufficient number of cybersecurity professionals with the right skills and that its overall workforce is aware of information security responsibilities remains an ongoing challenge. Consistent with this view, the Commission and CSIS have identified actions to address improving the nation’s cybersecurity workforce, including increasing the number of cybersecurity practitioners; implementing a range of education and training programs at the federal, state, and local levels; providing incentives for individuals to enter the workforce; and allocating additional funds at key departments for cybersecurity education and training programs. Combatting cybercrime. In June 2007, we identified a number of challenges impeding public and private entities’ efforts in mitigating cybercrime, including working in a borderless environment with laws of multiple jurisdictions. We stated that efforts to investigate and prosecute cybercrime are complicated by the multiplicity of laws and procedures that govern in the various nations and states where victims may be found, and the conflicting priorities and varying degrees of expertise of law enforcement authorities in those jurisdictions. In addition, laws used to address cybercrime differ across states and nations.
For example, an act that is illegal in the United States may be legal in another nation or not directly addressed in the other nation’s laws. Developing countries, for example, may lack cybercrime laws and enforcement procedures. In its recent report, CSIS stated that many countries still do not have adequate cybercrime laws and recommended that (1) countries that refuse to cooperate with law enforcement should be penalized in some way and (2) methods be found to address the concerns of countries not willing to sign an existing treaty addressing cybercrime. In summary, the dependence of the federal government and the nation’s critical infrastructure on computerized information systems and electronic data makes them potentially vulnerable to a wide and evolving array of cyber-based threats. Securing these systems and data is vital to the nation’s security, prosperity, and well-being. Nevertheless, security over these systems is inconsistent, and additional actions are needed to address ongoing cybersecurity and privacy challenges. Specifically, federal agencies need to address control deficiencies and fully implement organization-wide information security programs; cyber incident response and mitigation efforts need to be improved across the government; maintaining a qualified cybersecurity workforce needs to be a priority; efforts to bolster the cybersecurity of the nation’s critical infrastructure need to be strengthened; and the privacy of PII needs to be better protected. Several recommendations made by the Commission and CSIS are generally consistent with previous recommendations made by GAO and warrant close consideration. Chairwoman Comstock, Ranking Member Lipinski, and Members of the Subcommittee, this concludes my statement. I would be happy to respond to your questions. If you or your staff have any questions about this testimony, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Michael Gilmore, Nancy Glover, and Kush Malhotra.
GAO, Cybersecurity: DHS’s National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely, GAO-17-163 (Washington, D.C.: Feb. 1, 2017).
GAO, Federal Information Security: Actions Needed to Address Challenges, GAO-16-885T (Washington, D.C.: Sept. 19, 2016).
GAO, Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority, GAO-16-686 (Washington, D.C.: Aug. 26, 2016).
GAO, Electronic Health Information: HHS Needs to Strengthen Security and Privacy Guidance and Oversight, GAO-16-771 (Washington, D.C.: Aug. 26, 2016).
GAO, Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy, GAO-16-267 (Washington, D.C.: May 16, 2016) (Reissued August 3, 2016).
GAO, Information Security: Agencies Need to Improve Controls over Selected High-Impact Systems, GAO-16-501 (Washington, D.C.: May 18, 2016).
GAO, Healthcare.gov: Actions Needed to Enhance Information Security and Privacy Controls, GAO-16-265 (Washington, D.C.: Mar. 23, 2016).
GAO, Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System, GAO-16-294 (Washington, D.C.: Jan. 28, 2016).
GAO, Critical Infrastructure Protection: Measures Needed to Assess Agencies’ Promotion of the Cybersecurity Framework, GAO-16-152 (Washington, D.C.: Dec. 17, 2015).
GAO, Critical Infrastructure Protection: Sector-Specific Agencies Need to Better Measure Cybersecurity Progress, GAO-16-79 (Washington, D.C.: Nov. 19, 2015).
GAO, Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented, GAO-13-187 (Washington, D.C.: Feb. 14, 2013).
GAO, Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination, GAO-12-8 (Washington, D.C.: Nov. 29, 2011).
GAO, Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance, GAO-10-606 (Washington, D.C.: July 2, 2010).
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Cyber-based intrusions and attacks on federal systems and systems supporting our nation's critical infrastructure, such as communications and financial services, are evolving and becoming more sophisticated. GAO first designated information security as a government-wide high-risk area in 1997. This was expanded to include the protection of cyber critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. This statement (1) provides an overview of GAO's work related to cybersecurity of the federal government and the nation's critical infrastructure and (2) identifies areas of consistency between GAO recommendations and those recently made by the Cybersecurity Commission and CSIS. In preparing this statement, GAO relied on previously published work and its review of the two recent reports issued by the Commission and CSIS. GAO has consistently identified shortcomings in the federal government's approach to ensuring the security of federal information systems and cyber critical infrastructure as well as its approach to protecting the privacy of personally identifiable information (PII). While previous administrations and agencies have acted to improve the protections over federal and critical infrastructure information and information systems, the federal government needs to take the following actions to strengthen U.S. cybersecurity: Effectively implement risk-based entity-wide information security programs consistently over time. Among other things, agencies need to (1) implement sustainable processes for securely configuring operating systems, applications, workstations, servers, and network devices; (2) patch vulnerable systems and replace unsupported software; (3) develop comprehensive security test and evaluation procedures and conduct examinations on a regular and recurring basis; and (4) strengthen oversight of contractors providing IT services.
Improve its cyber incident detection, response, and mitigation capabilities. The Department of Homeland Security needs to expand the capabilities and support wider adoption of its government-wide intrusion detection and prevention system. In addition, the federal government needs to improve cyber incident response practices, update guidance on reporting data breaches, and develop consistent responses to breaches of PII. Expand its cyber workforce planning and training efforts. The federal government needs to (1) enhance efforts for recruiting and retaining a qualified cybersecurity workforce and (2) improve cybersecurity workforce planning activities. Expand efforts to strengthen cybersecurity of the nation's critical infrastructures. The federal government needs to develop metrics to (1) assess the effectiveness of efforts promoting the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity and (2) measure and report on the effectiveness of cyber risk mitigation activities and the cybersecurity posture of critical infrastructure sectors. Better oversee protection of personally identifiable information. The federal government needs to (1) protect the security and privacy of electronic health information, (2) ensure privacy when face recognition systems are used, and (3) protect the privacy of users' data on state-based health insurance marketplaces. Several recommendations made by the Commission on Enhancing National Cybersecurity (Cybersecurity Commission) and the Center for Strategic & International Studies (CSIS) are generally consistent with or similar to GAO's recommendations in several areas, including establishing an international cybersecurity strategy, protecting cyber critical infrastructure, promoting use of the NIST cybersecurity framework, prioritizing cybersecurity research, and expanding cybersecurity workforces.
Over the past several years, GAO has made about 2,500 recommendations to federal agencies to enhance their information security programs and controls. As of February 2017, about 1,000 recommendations had not been implemented.
The use of information technology (IT) to electronically collect, store, retrieve, and transfer clinical, administrative, and financial health information has great potential to help improve the quality and efficiency of health care and is important to improving the performance of the U.S. health care system. Historically, patient health information has been scattered across paper records kept by many different caregivers in many different locations, making it difficult for a clinician to access all of a patient’s health information at the time of care. Lacking access to these critical data, a clinician may be challenged to make the most informed decisions on treatment options, potentially putting the patient’s health at greater risk. The use of electronic health records can help provide this access and improve clinical decisions. Electronic health records are particularly crucial for optimizing the health care provided to military personnel and veterans. While in military status and later as veterans, many DOD and VA patients tend to be highly mobile and may have health records residing at multiple medical facilities within and outside the United States. Making such records electronic can help ensure that complete health care information is available for most military service members and veterans at the time and place of care, no matter where it originates. Key to making health care information electronically available is interoperability—that is, the ability to share data among health care providers. Interoperability enables different information systems or components to exchange information and to use the information that has been exchanged. This capability is important because it allows patients’ electronic health information to move with them from provider to provider, regardless of where the information originated. 
If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the information required for optimal care. (Paper-based health records—if available—also provide necessary information, but unlike electronic health records, paper records do not provide decision support capabilities, such as automatic alerts about a particular patient’s health, or other advantages of automation.) Interoperability can be achieved at different levels. At the highest level, electronic data are computable (that is, in a format that a computer can understand and act on to, for example, provide alerts to clinicians on drug allergies). At a lower level, electronic data are structured and viewable, but not computable. The value of data at this level is that they are structured so that data of interest to users are easier to find. At still a lower level, electronic data are unstructured and viewable, but not computable. With unstructured electronic data, a user would have to find needed or relevant information by searching uncategorized data. Beyond these, paper records also can be considered interoperable (at the lowest level) because they allow data to be shared, read, and interpreted by human beings. Figure 1 shows the distinctions between the various levels of interoperability and examples of the types of data that can be shared at each level. According to DOD and VA officials, not all data require the same level of interoperability. For example, in their initial efforts to implement computable data, DOD and VA focused on outpatient pharmacy and drug allergy data because clinicians gave priority to the need for automated alerts to help medical personnel avoid administering inappropriate drugs to patients.
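The distinctions among these levels can be illustrated in code. The record layouts, field names, and codes below are invented for illustration and do not represent actual DOD or VA data structures:

```python
# Sketch of the three electronic interoperability levels. The record
# layouts, field names, and codes are invented for illustration.

# Highest level: computable data, i.e., coded values a program can act
# on, such as generating a drug-allergy alert automatically.
computable_record = {
    "allergies": [{"code": "PCN", "display": "penicillin"}],
    "orders": [{"drug_code": "PCN", "display": "penicillin VK 500 mg"}],
}

def drug_allergy_alerts(record):
    """Return the ordered drugs that match a coded allergy."""
    allergy_codes = {a["code"] for a in record["allergies"]}
    return [o["display"] for o in record["orders"]
            if o["drug_code"] in allergy_codes]

# Middle level: structured, viewable data. Labeled fields make items
# easy to find, but there are no coded semantics to compute on.
structured_viewable = {
    "Allergies": "penicillin",
    "Medications": "penicillin VK 500 mg",
}

# Lowest electronic level: unstructured, viewable text. A program can
# only keyword-search it; a clinician must read and interpret it.
unstructured = "Pt reports PCN allergy. Started on penicillin VK 500 mg."

print(drug_allergy_alerts(computable_record))  # ['penicillin VK 500 mg']
```

Only the computable record supports the automated drug-allergy alert; the structured record is merely easier to search, and the free text must be read and interpreted by a clinician.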
On the other hand, for such narrative data as clinical notes, unstructured, viewable data may be sufficient. Achieving even a minimal level of electronic interoperability is valuable for potentially making all relevant information available to clinicians. Interoperability depends on adherence to common standards to promote the exchange of health information between participating agencies and with nonfederal entities in supporting quality and efficient health care. In the health IT field, standards govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology. Developing, coordinating, and agreeing on standards are only part of the processes involved in achieving interoperability for electronic health record systems or capabilities. In addition, specifications are needed for implementing the standards, as well as criteria and a process for verifying compliance with the standards. In April 2004, the President called for widespread adoption of interoperable electronic health records by 2014 and established the Office of the National Coordinator for Health Information Technology within the Department of Health and Human Services (HHS). This office has been tasked to, among other things, develop, maintain, and direct the implementation of a strategic plan to guide the nationwide implementation of interoperable health IT in both the public and private health care sectors. Under the direction of HHS (through the Office of the National Coordinator), three primary organizations were designated to play major roles in expanding the implementation of health IT: The American Health Information Community was created by the Secretary of HHS as a federal advisory body to make recommendations on how to accelerate the development and adoption of health IT, including advancing interoperability, identifying health IT standards, advancing a nationwide health information exchange, and protecting personal health information.
Formed in September 2005, the community is made up of representatives from both the public and private sectors, including high-level DOD and VA officials. The community determines specific health care areas of high priority and develops “use cases” for these areas, which provide the context in which standards would be applicable. The use cases convey how health care professionals would use such records and what standards would apply. Executive Order 13335, Incentives for the Use of Health Information Technology and Establishing the Position of the National Health Information Technology Coordinator (Washington, D.C.: Apr. 27, 2004). The Healthcare Information Technology Standards Panel, sponsored by the American National Standards Institute and funded by the Office of the National Coordinator, was established in October 2005 as a public-private partnership to identify competing standards for the use cases being developed by the American Health Information Community and to “harmonize” the standards. The panel also develops the interoperability specifications that are needed for implementing the standards. Interoperability specifications were developed for the seven use cases developed by the American Health Information Community in 2006 and 2007. The community also developed six use cases for 2008. The Healthcare Information Technology Standards Panel is made up of representatives from both the public and private sectors, including DOD and VA officials who serve as members and are actively working on several committees and groups within the panel. DOD and VA have been working to exchange patient health data electronically since 1998. As we have previously noted, their efforts included both short-term initiatives to share information in existing (legacy) systems, as well as a long-term initiative to develop modernized health information systems—replacing their legacy systems—that would be able to share data and, ultimately, use interoperable electronic health records.
In their short-term initiatives to share information from existing systems, the departments began from different positions. VA has one integrated medical information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which uses all electronic records and was developed in-house by VA clinicians and IT personnel. All VA medical facilities have access to all VistA information. In contrast, DOD uses multiple legacy medical information systems, all of which are commercial software products that are customized for specific uses. For example, the Composite Health Care System (CHCS), which was formerly DOD’s primary health information system, is still in use to capture pharmacy, radiology, and laboratory information. In addition, the Clinical Information System (CIS), a commercial health information system customized for DOD, is used to support inpatient treatment at military medical facilities. The departments’ short-term initiatives to share information in their existing systems have included the following projects: The Federal Health Information Exchange (FHIE), completed in 2004, enables DOD to electronically transfer service members’ electronic health information to VA when the members leave active duty. The Bidirectional Health Information Exchange (BHIE), also established in 2004, was aimed at allowing clinicians at both departments viewable access to records on shared patients (that is, those who receive care from both departments—for example, veterans may receive outpatient care from VA clinicians and be hospitalized at a military treatment facility). The interface also allows DOD sites to see previously inaccessible data at other DOD sites. As part of the long-term initiative, each of the departments aims to develop a modernized system in the context of a common health information architecture that would allow a two-way exchange of health information.
The common architecture is to include standardized, computable data; communications; security; and high-performance health information systems: DOD’s Armed Forces Health Longitudinal Technology Application (AHLTA) and VA’s HealtheVet. The departments’ modernized systems are to store information (in standardized, computable form) in separate data repositories: DOD’s Clinical Data Repository (CDR) and VA’s Health Data Repository (HDR). For the two-way exchange of health information, in September 2006 the departments implemented an interface named CHDR to link the two repositories. Beyond these initiatives, in January 2007, the departments announced their intention to jointly determine an approach for inpatient health records. On July 31, 2007, they awarded a contract for a feasibility study and exploration of alternatives. In December 2008, the contractor provided the departments with a recommended strategy for jointly developing an inpatient solution. In reporting on the departments’ progress toward developing fully interoperable electronic health records in July 2008, we highlighted several findings: DOD and VA had established and implemented mechanisms to achieve sharing of electronic health information at different levels of interoperability. As of June 2008, pharmacy and drug allergy data on about 18,300 shared patients were being exchanged at the highest level of interoperability—that is, in computable form, a standardized format that a computer application can act on (for example, to provide alerts to clinicians of drug allergies). Viewable data also were being shared, including, among other types, outpatient pharmacy data, allergy information, procedures, problem lists, vital signs, microbiology results, cytology reports, and chemistry and hematology reports.
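The mediating role an interface such as CHDR plays between the two repositories can be sketched as a translation through a shared standard vocabulary. The code systems and mappings below are invented; they are not the actual CDR, HDR, or CHDR designs:

```python
# Sketch of two-way exchange between repositories that use different
# local codes, mediated by a shared standard term (the role an
# interface such as CHDR plays between DOD's CDR and VA's HDR).
# All code systems and mappings here are invented.

CDR_TO_STANDARD = {"D0042": "penicillin"}   # DOD-local -> shared term
HDR_TO_STANDARD = {"VA-PCN": "penicillin"}  # VA-local  -> shared term
STANDARD_TO_CDR = {v: k for k, v in CDR_TO_STANDARD.items()}
STANDARD_TO_HDR = {v: k for k, v in HDR_TO_STANDARD.items()}

def exchange_cdr_to_hdr(local_code):
    """Translate a CDR allergy code into the HDR's local code."""
    standard = CDR_TO_STANDARD[local_code]  # normalize to shared term
    return STANDARD_TO_HDR[standard]        # re-encode for receiver

def exchange_hdr_to_cdr(local_code):
    """Translate an HDR allergy code into the CDR's local code."""
    standard = HDR_TO_STANDARD[local_code]
    return STANDARD_TO_CDR[standard]

print(exchange_cdr_to_hdr("D0042"))   # VA-PCN
print(exchange_hdr_to_cdr("VA-PCN"))  # D0042
```

Because each side re-encodes through the shared standard term, either repository can act on data originating in the other, which is what makes the exchanged data computable rather than merely viewable.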
However, the departments were not sharing all electronic health data, including, for example, immunization records and history, data on exposure to health hazards, and psychological health treatment and care records. Finally, although VA’s health information was all captured electronically, not all health data collected by DOD were electronic—many DOD medical facilities used paper-based health records. DOD and VA were participating in a number of initiatives led by the Office of the National Coordinator for Health Information Technology (within HHS), aimed at promoting the adoption of federal standards and broader use of electronic health records. The involvement of the departments in these initiatives was an important mechanism for aligning their electronic health records with emerging standards. The departments also had jointly published a common (agreed-to) set of interoperability standards called the Target DOD/VA Health Standards Profile. Updated annually, the profile was used for reviewing joint DOD/VA initiatives to ensure standards compliance. The departments anticipate such updates and revisions to the profile as additional federal standards emerge and are recognized and accepted by HHS. In addition, according to DOD officials, the department was taking steps to ensure that its modernized health information system, AHLTA, was compliant with standards by arranging for certification through the Certification Commission for Healthcare Information Technology. Specifically, version 3.3 of AHLTA was conditionally certified in April 2007 against 2006 outpatient electronic health record criteria established by the commission. DOD officials stated that AHLTA version 3.3 was installed at three DOD locations. The departments’ efforts to set up the DOD/VA Interagency Program Office were still in their early stages. Leadership positions in the office had not been permanently filled, staffing was not complete, and facilities to house the office had not been designated.
According to the Acting Director, DOD and VA had begun developing a charter for the office, but had not yet completed the document. Further, the implementation plan was in draft, and although it included schedules, milestones for several activities (such as implementing a capability to share immunization records) were not determined, even though all capabilities were to be achieved by September 2009. We pointed out that without a fully established program office and a finalized implementation plan with set milestones, the departments might be challenged in meeting the September 2009 date for achieving interoperable electronic health records and capabilities. As a result, we recommended that the Secretaries of Defense and Veterans Affairs give priority to fully establishing the interagency program office and finalizing the draft implementation plan. Both DOD and VA agreed with these recommendations. Since our July 2008 report and September 2008 testimony, DOD and VA have continued to make progress toward increased interoperability through ongoing initiatives and the activities documented in their plans for increasing information sharing. Also, the departments recently expanded the number of standards and specifications with which they expect their interoperability initiatives will comply. However, the departments’ plans lack the results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures that are characteristic of effective planning. As a result, the extent to which the departments’ progress can be assessed and reported in terms of results achieved is largely limited to reporting on activities completed and increases in interoperability over time. Consequently, it is unclear what health information sharing capabilities will be delivered by September 2009.
With regard to their ongoing initiatives, DOD and VA reported increases in data exchanged between the departments for their long-term initiative (CHDR) and their short-term initiative (BHIE). For example, between June and October 2008, the departments increased the number of shared patients for which computable outpatient pharmacy and drug allergy data were being exchanged through the CHDR initiative by about 2,700 (from about 18,300 to over 21,000). For the BHIE initiative, the departments continued to expand their information exchange by sharing viewable patient vital signs information in June 2008, and demonstrated the capability to exchange family history, social history, other history, and questionnaires data in September 2008. Since we last reported, DOD and VA also have made progress toward adopting additional health data interoperability standards that are newly recognized and accepted by the Secretary of HHS. The departments have identified these new standards, which relate to three use cases in the updated September 2008 Target Standards Profile. Specifically, the profile now includes Electronic Health Records Laboratory Results Reporting, Biosurveillance, and Consumer Empowerment use cases. According to DOD and VA officials, the adoption of recognized standards is a goal of both departments in order to comply with the provisions set forth in the National Defense Authorization Act for Fiscal Year 2008. In addition, DOD has reported progress toward certification of its health IT system in adhering to applicable standards. Department officials stated that AHLTA version 3.3 is now fully operational and certified at five DOD locations, having met certification criteria, including specific functionality, interoperability, and security requirements. According to DOD officials, this version of AHLTA is expected to be installed at the remaining locations by September 30, 2009. 
DOD and VA have also reported progress relative to two plans that contain objectives, initiatives, and activities related to further increasing health information sharing. Specifically, the departments have identified the November 2007 VA/DOD Joint Executive Council Strategic Plan for Fiscal Years 2008-2010 (known as the VA/DOD Joint Strategic Plan) and the September 2008 DOD/VA Information Interoperability Plan (Version 1.0) as defining their efforts to provide interoperable health records. The Joint Strategic Plan identified 39 activities related to information sharing that the departments planned to complete by September 30, 2008. The Information Interoperability Plan describes six objectives to be met by September 30, 2009. The departments reported that the 39 information sharing activities identified in the Joint Strategic Plan were completed on or ahead of schedule. For example, the departments completed a report on the analysis of alternatives and recommendations for the development of the joint inpatient electronic health record, and briefed the recommendations to the Health Executive Council and the Joint Executive Council. However, only 3 of the 39 activities in the Joint Strategic Plan were described in results-oriented (i.e., objective, quantifiable, and measurable) terms that are characteristic of effective planning and can be used as a basis to track and measure progress toward the delivery of new interoperable capabilities. For example, one of these three activities called for the departments to share viewable vital signs data, in real time and bidirectionally, for shared patients among all sites by June 30, 2008. In contrast, 36 activities lacked results-oriented performance measures, limiting the extent to which progress can be reported in terms of results achieved.
For example, one activity calls for the development of a plan for interagency sharing of essential health images, but does not provide details on measurable achievement of additional interoperable capabilities. Another activity calls for the review of national health IT standards, but does not provide a tangible deliverable to determine progress in achieving the goal. According to department officials, DOD and VA have activities underway to address the six interoperability objectives included in the Information Interoperability Plan. Among these objectives, one calls for DOD to deploy its inpatient solution at additional medical sites to expand sharing of inpatient discharge summaries. Department officials indicated that, as of December 2008, DOD is sharing patient discharge summaries at 50 percent of inpatient beds compared to their goal of 70 percent by September 30, 2009. However, this is the only one of six objectives in the Information Interoperability Plan with an associated results-oriented performance measure. None of the remaining five objectives are documented in terms that could allow the departments to measure and report their progress toward delivering new capabilities. Specifically, the objective for scanning medical documents calls for providing an initial capability. However, “initial capability” is not defined in quantifiable terms. As such, this objective cannot be used as a basis to effectively measure results-oriented performance. According to DOD and VA officials, their plans are relatively new and represent their initial efforts to articulate interoperability goals. However, while the departments’ plans identify interoperable capabilities to be implemented, the plans do not establish the results-oriented (i.e., objective, quantifiable, and measurable) goals and associated performance measures that are a necessary basis for effective management. 
Without establishing plans that include results-oriented goals, and then reporting progress using measures relative to those plans, the departments and their stakeholders do not have the comprehensive information that they need to effectively manage their progress toward achieving increased interoperability. The National Defense Authorization Act for Fiscal Year 2008 called for the establishment of an interagency program office and for the office to be accountable for implementing electronic health record systems or capabilities that allow for full interoperability of personal health care information between DOD and VA. Since we last reported, the departments have continued taking steps to set up the program office, although they have not yet fully executed their plan for doing so. As a result, the office is not yet in a position to be accountable for accelerating the departments’ efforts to achieve interoperability by the September 30, 2009, deadline. To address the requirements set forth in the Act, the departments identified in the September 2008 DOD/VA Information Interoperability Plan a schedule for standing up the interagency program office. Consistent with the plan, the departments have taken steps such as developing descriptions for key positions, including those of the Director and Deputy Director. Further, the departments have begun to hire personnel for program staff positions. Specifically, of 30 total program office positions, they have hired staff for 2 of 14 government positions and 6 of 16 contractor positions, and have actions under way to fill the remaining 22 positions. Also, since we reported in July 2008, the departments developed the program office organization structure document, which depicts the office’s organization. Further, in December 2008, DOD issued a Delegation of Authority Memorandum, signed by the Deputy Secretary of Defense, that formally recognizes the program office.
In January 2009, the departments approved a program office charter to describe, among other things, the mission and function of the office. Nonetheless, even with the actions taken, four of eight selected key activities that the departments identified in their plan to set up the program office remain incomplete, including filling the remaining 22 positions and those of the Director and Deputy Director (as shown in table 1). DOD and VA officials stated that the reason the departments have not completed the execution of their plan to fully set up an interagency program office is the longer than anticipated time needed to obtain approval from multiple DOD and VA offices for key program office documentation (for example, the delegation of authority memorandum and charter). They stated that this was because the departments’ leadership broadened the program office’s scope to include the sharing of personnel and benefits data in addition to health information. Our July 2008 report recommended that the departments give priority to establishing the program office by putting permanent leadership in place and hiring staff. Without completion of these and other key activities to set up the program office, the office is not yet positioned to be fully functional, or accountable, for fulfilling the departments’ interoperability plans. Coupled with the lack of results-oriented plans that establish program commitments in measurable terms, the absence of a fully operational interagency program office leaves DOD and VA without a clearly established approach for ensuring that their actions will achieve the desired purpose of the Act. In the more than 10 years since DOD and VA began collaborating to electronically share health information, the two departments have increased interoperability. Nevertheless, while the departments continue to make progress, the manner in which they report progress—by reporting increases in interoperability over time—has limitations.
These limitations are rooted in the departments’ plans, which identify interoperable capabilities to be implemented, but lack the results-oriented (i.e., objective, quantifiable, and measurable) goals and associated performance measures that are a necessary basis for effective management. Without establishing results-oriented goals and then reporting progress using measures relative to those goals, the departments and their stakeholders do not have the comprehensive picture that they need to effectively manage their progress toward achieving increased interoperability. Further constraining the departments’ management effectiveness is their slow pace in addressing our July 2008 recommendation related to setting up the interagency program office that Congress called for to function as a single point of accountability in the development and implementation of electronic health record capabilities. To better ensure that DOD and VA achieve interoperable electronic health record systems or capabilities, we recommend that the Secretaries of Defense and Veterans Affairs take the following two actions:
Develop results-oriented (i.e., objective, quantifiable, and measurable) goals and associated performance measures for the departments’ interoperability objectives and document these goals and measures in their interoperability plans.
Use results-oriented performance goals and measures as the basis for future assessments and reporting of interoperability progress.
In providing written comments on a draft of this report in a January 22, 2009 letter, the Assistant Secretary of Defense for Health Affairs concurred with our recommendations. In a January 17, 2009 letter, the Secretary of Veterans Affairs also concurred with our recommendations. (The departments’ comments are reproduced in app. II and app. III, respectively.)
DOD and VA stated that high priority will be given to the establishment and use of results-oriented (i.e., objective, quantifiable, and measurable) goals and associated performance measures for the departments’ interoperability objectives. If the recommendations are properly implemented, they should better position DOD and VA to effectively measure and report progress in achieving full interoperability. The departments also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of Defense and Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. To evaluate the Department of Defense’s (DOD) and the Department of Veterans Affairs’ (VA) progress toward developing electronic health record systems or capabilities that allow for full interoperability of personal health care information, we reviewed our previous work on DOD and VA efforts to develop health information systems, interoperable health records, and interoperability standards to be implemented in federal health care programs. To describe the departments’ efforts to ensure that their health records comply with applicable interoperability standards, we analyzed information gathered from DOD and VA documentation and interviews pertaining to the interoperability standards that the two departments have agreed to for exchanging health information via their health care information systems.
We reviewed documentation and interviewed agency officials from the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology to obtain information regarding the defined federal interoperability standards, implementation specifications, and certification criteria. Further, we interviewed responsible officials to obtain information regarding the steps taken by the departments to certify their electronic health record products. To evaluate DOD and VA plans toward developing electronic health record systems or capabilities, we obtained information from agency documentation and interviews with cognizant DOD and VA officials pertaining to the November 2007 VA/DOD Joint Executive Council Strategic Plan for Fiscal Years 2008-2010 and the September 2008 DOD/VA Information Interoperability Plan (Version 1.0), which together constitute the departments’ overall plans for achieving full interoperability of electronic health information. Additionally, we reviewed information gathered from agency documentation to identify interoperability objectives, milestones, and target dates. Further, we analyzed objectives and activities from these plans to determine whether DOD and VA had established results-oriented performance measures that enable the departments to assess progress toward achieving increased sharing capabilities and functionality of their electronic health information systems. To determine whether the interagency program office is fully operational and positioned to function as a single point of accountability for developing and implementing electronic health records, we analyzed DOD and VA documentation, including the schedule for setting up the office identified in the DOD/VA Information Interoperability Plan. Additionally, we interviewed responsible officials to determine the departments’ progress to date in setting up the interagency program office.
Further, we reviewed documentation and interviewed DOD and VA officials to determine the extent to which the departments have positioned the office to function as a single point of accountability for developing electronic health records. We conducted this performance audit at DOD sites and at the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology in the greater Washington, D.C., metropolitan area from August 2008 through January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributions to this report were made by Mark Bird, Assistant Director; Neil Doherty; Rebecca LaPaze; J. Michael Resser; Kelly Shaw; and Eric Trout.
Information Technology: DOD and VA Have Increased Their Sharing of Health Information, but Further Actions Are Needed. GAO-08-1158T. Washington, D.C.: September 24, 2008.
Electronic Health Records: DOD and VA Have Increased Their Sharing of Health Information, but More Work Remains. GAO-08-954. Washington, D.C.: July 28, 2008.
VA and DOD Health Care: Progress Made on Implementation of 2003 President’s Task Force Recommendations on Collaboration and Coordination, but More Remains to Be Done. GAO-08-495R. Washington, D.C.: April 30, 2008.
Health Information Technology: HHS Is Pursuing Efforts to Advance Nationwide Implementation, but Has Not Yet Completed a National Strategy. GAO-08-499T. Washington, D.C.: February 14, 2008.
Information Technology: VA and DOD Continue to Expand Sharing of Medical Information, but Still Lack Comprehensive Electronic Medical Records. GAO-08-207T.
Washington, D.C.: October 24, 2007.
Veterans Affairs: Progress Made in Centralizing Information Technology Management, but Challenges Persist. GAO-07-1246T. Washington, D.C.: September 19, 2007.
Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Remain Far from Having Comprehensive Electronic Medical Records. GAO-07-1108T. Washington, D.C.: July 18, 2007.
Health Information Technology: Efforts Continue but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-988T. Washington, D.C.: June 19, 2007.
Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Are Far from Comprehensive Electronic Medical Records. GAO-07-852T. Washington, D.C.: May 8, 2007.
DOD and VA Outpatient Pharmacy Data: Computable Data Are Exchanged for Some Shared Patients, but Additional Steps Could Facilitate Exchanging These Data for All Shared Patients. GAO-07-554R. Washington, D.C.: April 30, 2007.
Health Information Technology: Early Efforts Initiated but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-400T. Washington, D.C.: February 1, 2007.
Health Information Technology: Early Efforts Initiated, but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-238. Washington, D.C.: January 10, 2007.
Health Information Technology: HHS is Continuing Efforts to Define Its National Strategy. GAO-06-1071T. Washington, D.C.: September 1, 2006.
Information Technology: VA and DOD Face Challenges in Completing Key Efforts. GAO-06-905T. Washington, D.C.: June 22, 2006.
Health Information Technology: HHS Is Continuing Efforts to Define a National Strategy. GAO-06-346T. Washington, D.C.: March 15, 2006.
Computer-Based Patient Records: VA and DOD Made Progress, but Much Work Remains to Fully Share Medical Information. GAO-05-1051T. Washington, D.C.: September 28, 2005.
Health Information Technology: HHS Is Taking Steps to Develop a National Strategy. GAO-05-628.
Washington, D.C.: May 27, 2005.
Computer-Based Patient Records: VA and DOD Efforts to Exchange Health Data Could Benefit from Improved Planning and Project Management. GAO-04-687. Washington, D.C.: June 7, 2004.
Computer-Based Patient Records: Improved Planning and Project Management Are Critical to Achieving Two-Way VA-DOD Health Data Exchange. GAO-04-811T. Washington, D.C.: May 19, 2004.
Computer-Based Patient Records: Sound Planning and Project Management Are Needed to Achieve a Two-Way Exchange of VA and DOD Health Data. GAO-04-402T. Washington, D.C.: March 17, 2004.
Computer-Based Patient Records: Short-Term Progress Made, but Much Work Remains to Achieve a Two-Way Data Exchange Between VA and DOD Health Systems. GAO-04-271T. Washington, D.C.: November 19, 2003.
VA Information Technology: Management Making Important Progress in Addressing Key Challenges. GAO-02-1054T. Washington, D.C.: September 26, 2002.
Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. Washington, D.C.: June 12, 2002.
VA Information Technology: Progress Made, but Continued Management Attention Is Key to Achieving Results. GAO-02-369T. Washington, D.C.: March 13, 2002.
VA and Defense Health Care: Military Medical Surveillance Policies in Place, but Implementation Challenges Remain. GAO-02-478T. Washington, D.C.: February 27, 2002.
VA and Defense Health Care: Progress Made, but DOD Continues to Face Military Medical Surveillance System Challenges. GAO-02-377T. Washington, D.C.: January 24, 2002.
VA and Defense Health Care: Progress and Challenges DOD Faces in Executing a Military Medical Surveillance System. GAO-02-173T. Washington, D.C.: October 16, 2001.
Computer-Based Patient Records: Better Planning and Oversight by VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001.
Under the National Defense Authorization Act for Fiscal Year 2008, the Department of Defense (DOD) and the Department of Veterans Affairs (VA) are required to accelerate the exchange of health information between the departments and to develop systems or capabilities that allow for interoperability (generally, the ability of systems to exchange data) and that are compliant with federal standards. The Act also established a joint interagency program office to function as a single point of accountability for the effort, which is to implement such systems or capabilities by September 30, 2009. Further, the Act required that GAO report semiannually on the progress made in achieving these goals. For this second report, GAO evaluates the departments' progress and plans toward sharing electronic health information that complies with federal standards, and whether the interagency program office is positioned to function as a single point of accountability. To do so, GAO reviewed its past work, analyzed agency documentation, and conducted interviews. DOD and VA continue to increase health information sharing through ongoing initiatives and related activities. Specifically, the departments are now exchanging pharmacy and drug allergy data on over 21,000 shared patients, an increase of about 2,700 patients between June and October 2008. Further, they recently expanded the number of standards and specifications with which they expect their interoperability initiatives will comply. In addition, DOD reported that it received certification of its electronic health record system. Also, the departments have defined their plans to further increase their sharing of electronic health information. In particular, they have identified the Joint Executive Council Strategic Plan and the DOD/VA Information Interoperability Plan as the key documents defining their planned efforts to provide interoperable health records.
These plans identify various objectives and activities that, according to the departments, are aimed at increasing health information sharing and achieving full interoperability, as required by the National Defense Authorization Act for Fiscal Year 2008. However, neither plan identifies results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures that are characteristic of effective planning and can be used as a basis to track and assess progress toward the delivery of new interoperable capabilities. In the absence of results-oriented goals and performance measures, the departments are not positioned to adequately assess progress toward increasing interoperability. Instead, DOD and VA are limited to assessing progress in terms of activities completed and increases in data exchanged (e.g., the number of patients for which certain types of data are exchanged). The departments have continued to take steps to set up the interagency program office. For example, they have developed descriptions for key positions and agreed with GAO's July 2008 recommendation that they give priority to establishing permanent leadership and hiring staff. Also, the departments developed the program office organization structure document that depicts the office's organization and, in January 2009, the departments approved a program office charter to describe, among other things, the mission and function of the office. Nonetheless, DOD and VA have not yet fully executed their plan to set up the program office. For example, among other activities, they have not yet filled key positions for the Director and Deputy Director, or 22 of 30 other positions identified for the office. In the continued absence of a fully established program office, the departments will not be effectively positioned to ensure that interoperable electronic health records and capabilities are achieved by the required date.
According to Navy guidance, the Navy is required to project power from the sea and maintain assured access in the littoral regions, which for naval vessels refers specifically to the transition from the open ocean to more constricted, shallower waters close to shore—the littorals. “Anti-access” threats from mines, submarines, and surface forces jeopardize the Navy’s ability to assure access to the littorals. The LCS is being developed to address these missions. The LCS design concept consists of two distinct parts: the ship itself and the mission package it carries and deploys. For LCS, the ship is referred to as the “seaframe” and consists of the hull, command and control systems, launch and recovery systems, and certain core systems like the radar and gun. A core crew will be responsible for the seaframe’s basic functions. Operating with these systems alone offers some capability to perform general or inherent missions, such as support of special operations forces or maritime intercept operations. The LCS’s focused missions are mine warfare, antisubmarine warfare, and surface warfare. The majority of the capabilities for these missions will come from mission packages. These packages are intended to be modular in that they will be interchangeable on the seaframe. Each mission package consists of systems made up of manned and unmanned vehicles and the subsystems these vehicles use in their missions. Additional crew will be needed to operate these systems. Each mission package is envisioned as being self-contained and interchangeable, allowing tailoring of LCS to meet specific threats. Table 1 shows examples of LCS’s focused and inherent missions. The Navy characterizes the schedule for acquisition and deployment of LCS as aggressive. To meet this schedule, the Navy is pursuing an evolutionary acquisition strategy. Rather than initially delivering a full capability, the program is structured to deliver incremental capabilities to the warfighter.
To support this, LCS acquisition is broken into “flights” for the seaframe and “spirals” for mission packages in order to develop improvements while fielding technologies as they become available. The initial flight of ships, referred to as Flight 0, will serve two main purposes: provide a limited operational capability and provide input to the Flight 1 design through experimentation with operations and mission packages. Flight 1 will provide more complete capabilities but is not intended to serve as the sole design for the more than 50 LCS the Navy plans to ultimately buy. Further flights will likely round out these numbers. Flight 0 will consist of four ships of two different designs and will be procured in parallel with the first increment of mission packages—Spiral Alpha. Flight 0 ships are currently being designed, and construction on the first ship will begin in 2005. Due to the accelerated schedule, Spiral Alpha will consist primarily of existing technologies and systems. Spiral Bravo mission packages will be improvements upon these systems and are intended to be introduced with the Flight 1 ships. Figure 1 shows the two designs chosen by the Navy for Flight 0, one by Lockheed Martin and one by General Dynamics. The Navy and Lockheed Martin signed a contract for detailed design and construction of the first Flight 0 ship in December 2004, and the shipbuilder is expected to deliver the ship to the Navy in fiscal year 2007. The Navy will then begin testing and experimenting with the ship, using the first mission package—mine warfare. A date for any deployment with the fleet has not been determined. Detailed design and construction for the first General Dynamics design ship is scheduled to begin in fiscal year 2006 and delivery is scheduled for fiscal year 2008. The delivery of the first antisubmarine and surface warfare mission packages is aligned with the delivery of the second Flight 0 ship.
Figure 2 shows the Navy’s current acquisition timeline for Flight 0, Flight 1, and their mission packages. The development of Flight 1 will proceed concurrently with the design and construction of Flight 0. In early fiscal year 2006 the Navy will begin consideration of several preliminary designs for Flight 1. The Navy will choose designs for further development in fiscal year 2007. Selection of a design to start construction of the first Flight 1 ship will be in early fiscal year 2008. Flight 1 and future follow-on designs will be the basis for the LCS class of ships, which the Navy currently estimates could number between 50 and 60. Under the current acquisition strategy, detailed design and construction of the first Flight 1 ship will begin about 12 months after delivery of the first Flight 0 ship. The last two Flight 0 ships will not be available before detailed design and construction of Flight 1 begins. The second Flight 0 ship and the first mission packages for antisubmarine and surface warfare will be delivered just as detailed design and construction of Flight 1 is set to begin. Delivery of the first mission packages in Spiral Bravo will be aligned with delivery of the first Flight 1 ship. Recognizing that it lacks a number of key warfighting capabilities to operate in the littorals, the Navy began to develop the concept of LCS as a potential weapon system before it had completed formal requirements. Normally, a major acquisition program should include an examination of basic requirements and an analysis of potential solutions before a new system is decided upon. The Navy eventually conducted a requirements development process and analyzed a number of alternative solutions to a new ship but concluded that the LCS remained the best option. However, the Navy’s analysis of one area of littoral operations—the surface threats facing U.S. forces in littoral waters—did not include consideration of the potential impact of all threats the LCS is likely to face. 
The Navy has known about the capability gaps in the littorals for some time, particularly threats from mines and submarines in shallow waters. As we previously reported, the Navy has acknowledged that it lacks a number of key warfighting capabilities it needs for operations in the littoral environs. For example, it does not have a means for effectively breaching enemy sea mines in the surf zone or detecting and neutralizing enemy submarines in shallow water. The Navy has had programs under way to improve its capabilities in each of these areas for many years, such as systems designed to provide the fleet with mine detection and limited clearing capabilities, but progress has been slow. Additionally, the Navy has identified the threat of small boats, such as the kind that attacked the U.S.S. Cole in 2000, as a potential hindrance to operations in the littorals. The Navy has decided that the LCS is to accomplish these three critical littoral missions. After recognizing the need to address known capability gaps in the littorals, the Navy conducted a series of war games to test new concepts for surface combatant ships. One such concept, a very small surface combatant ship called Streetfighter, was incorporated into the Global 1999 war game. The concept was envisaged as a small, fast, stealthy, and reconfigurable ship with many characteristics similar to LCS. The Navy’s war-fighting assessment processes confirmed gaps in capabilities for mine warfare, shallow water antisubmarine warfare, and surface warfare against small boats. In July 2001, the Global 2001 war game further examined the concepts and potential benefits of modularity—such as using mission packages—and the use of unmanned vehicles for littoral missions. As a result of the war games, the Navy continued the process of analyzing a variety of new surface combatant ship concepts to address the threats in the littorals. 
In 2002, the Navy established an LCS program office as it began to further identify concepts and characteristics for a new surface combatant ship. In December 2001, the Naval War College was asked to develop and define characteristics that would be desirable in a littoral combat ship. The college used a series of workshops that included operational and technical experts from throughout the Navy to compare three types and sizes of surface combatant ships and describe desirable characteristics that such a ship should have. The experts examined such characteristics as speed, range, manning, and the ability to operate helicopters and unmanned vehicles. The workshop participants also concluded that a potential littoral ship should be capable of networking with other platforms and sensors; be useful across the spectrum of conflict; contribute to sustained forward naval presence; operate manned vertical lift aircraft as well as manned and unmanned vehicles; operate with optimized manning; have an open architecture and modularity; and have organic self-defense capabilities. The results of the Naval War College study, which was completed in July 2002, were used as a baseline for further developing the concepts for LCS. At this point the Navy’s analysis was focused on a single solution to address littoral capability gaps—a new warship along the lines of LCS. Between April 2002 and January 2004, the Navy conducted an analysis of multiple concepts to further define the concept that would address gaps in the littorals. The analysis began by examining five different ship concepts for LCS (later narrowed to three concepts in a subsequent stage) and provided the Navy with insight into the trade-offs between features such as size, speed, endurance, and self-defense needs. The analysis was performed by the Naval Surface Warfare Center, Dahlgren Division, and drew upon expertise throughout the Navy. 
The Office of the Secretary of Defense and the Joint Staff were concerned that the Navy’s focus on a single solution did not adequately consider other ways to address littoral capability gaps. Based on these concerns, in early 2004, the Navy was required to more fully consider other potential solutions. The publication of new guidance on joint capabilities development in June 2003 also led the Navy to expand its analysis beyond the single solution of the proposed new ship to include other potential solutions to littoral challenges. As part of its resulting analysis, the Navy defined littoral capability gaps, developed requirements to address those gaps, and identified and examined 11 nonmateriel and 3 materiel solutions across the joint forces that could be used to mitigate gaps in the littorals. Nonmateriel solutions refer to the use of different operational concepts or methods to meet requirements without buying new assets such as additional ships; materiel solutions are those that involve developing equipment or systems, such as ships and aircraft. The solutions were analyzed to determine the feasibility and risk in mitigating the gaps. The Navy’s assessment of feasibility centered on the extent to which each solution addressed the mine, antisubmarine, and surface capability gaps. The Navy’s assessment of risk centered on the impacts of each solution on (1) the success of potential operations in the littorals, (2) the sensitivity of diplomatic considerations, such as the military support of other nations, and (3) the financial considerations involved in choosing that solution. Two additional materiel solutions, centered on maritime patrol aircraft and modified DDG-51 destroyers, were added to the Navy’s analysis as a result of input from the Office of the Secretary of Defense’s Program Analysis and Evaluation office and the Acquisition, Technology and Logistics office. 
The Office of the Secretary of Defense and the Joint Staff also provided specific questions to the Navy for further clarification of the Navy’s ongoing analysis. With these additions, the Program Analysis and Evaluation office approved the Navy’s completed analysis as satisfactory to meet the requirements of a full analysis of alternatives for the LCS program. Table 2 shows the materiel and nonmateriel solutions presented in the Navy’s requirements analysis and the results of the Navy’s analysis of operational feasibility, as well as operational, diplomatic, and financial risk. Based on its analysis, the Navy concluded that none of the materiel and nonmateriel solutions it examined would be more operationally effective or cost-effective than the proposed LCS in performing the littoral missions. A number of factors were analyzed, including the feasibility of using other surface and non-surface force solutions and the risk associated with those options. Four nonmateriel solutions were considered to be partially feasible for mitigating the gaps in the littorals, while seven other solutions were considered not to be feasible. Partially feasible nonmateriel solutions included the use of maritime patrol aircraft, submarines, and a mix of air and sea assets from carrier and expeditionary strike groups. The most feasible solution considered using a combination of existing forces from carrier and expeditionary strike groups. However, the Navy determined that during a major combat operation, this solution would not be feasible because other mission objectives focused on directing operations onto shore would take a higher priority. Some of the materiel solutions included expanding existing forces, upgrading existing forces, or procuring a new class of platforms tailored for focused missions. 
Using a number of studies of threats and analyses of potential military operations in the littoral regions, the Navy developed requirements for the LCS that addressed the identified capability gaps and likely threats in the littorals. This analysis supported revised DOD and Joint Chiefs of Staff requirements for shipbuilding acquisition programs. The Navy identified capability gaps in the littorals by measuring the ability of the current and programmed joint forces to accomplish a number of tasks across a range of operating conditions and standards. The Navy concluded that based on completing the tasks in the littorals under the established measures of effectiveness, it lacked sufficient assets and technology to fully mitigate the gaps. For example, under mine warfare, the task of clearing routes for transit lanes covering a specific area within a 7-day period creates a capability gap because the Navy concluded that its force structure lacked enough assets (mine countermeasures ships, destroyers with remote mine-hunting systems, and the appropriate mine countermeasures helicopters) to fully mitigate the gap in the littorals within that operational timeline. Table 3 shows examples of tasks for each focused mission, the measures of effectiveness, and the capability gap that exists under the current and programmed force structure. We analyzed the requirements the Navy developed to address littoral capability gaps and used to support the LCS program, tracking each requirement in the mine, antisubmarine, and surface warfare areas back to the capability gaps and threats identified by the Navy in its requirements development process. We found no inconsistencies in the specific requirements for LCS illustrated in the documents required as part of the joint capabilities integration and development system. 
However, the requirements the Navy arrived at for LCS’s surface warfare capabilities were focused on small boats and did not include an analysis of the impact of larger surface threats in the littorals. The Navy focused the surface threat on swarms of small boats, characterized as Boston Whalers, capable of operating at high speeds and employing shoulder-mounted or crew-served weapons, such as light machine guns. These boats can conduct surprise, simultaneous, short-range attacks from or near shorelines. The Navy measured its current and programmed capabilities against defeating swarms of small boats in high numbers. For example, to determine the capability gaps and measures of effectiveness for escorting ships through choke points, the Navy measured its force structure against defeating large numbers of small boats. However, larger threats, such as missile-armed patrol boats and frigates, are also identified in the Navy’s LCS concept of operations and threat studies as threats that LCS may face in the littorals. Such vessels may be armed with medium-caliber guns, torpedoes, and antiship missiles. These threats could present additional risk to LCS operations. Some DOD and Navy officials have raised concerns about the extent to which the LCS may face larger threats than it is capable of defending against. Navy officials agreed that the surface threat was focused exclusively on swarms of small boats and told us that LCS is not intended to combat larger threats. The Navy found no capability gap with respect to the larger surface threat, because there is sufficient capability in the existing fleet to counter the threat. Further, Navy officials stated that if a larger surface threat were encountered, LCS would be able to call upon the assistance of other U.S. forces in the area, such as tactical aviation or larger surface warships. 
In a major combat operation, LCS squadrons would be able to draw upon the assistance of nearby Navy or joint forces in the face of a larger surface threat in the area. However, according to the LCS concept of operations, in addition to operating with other U.S. forces on a regular basis, LCS is intended to operate independently of those forces, depending on the type of mission and circumstance. When operating independently, such as during routine deployments to littoral waters, LCS may not be able to call upon assistance from larger U.S. forces. This may impede LCS operations, such as forcing the LCS to withdraw from an operating area, a situation contrary to the Navy’s goals. Since the Navy did not analyze the impact of larger surface threats on LCS operations, the extent of the risk and the impact on U.S. operations are not known. Although there are no formal criteria for developing a concept of operations, the Navy has developed both a broad concept and more detailed plans as to how the LCS and its mission systems will be used to meet requirements. The concept of operations also includes several challenges that, if not met, may increase the risk in actual LCS operations. However, the Navy has not yet fully considered the LCS concept of operations in the force structure and procurement plans for the MH-60 helicopter, which is critical to all LCS missions. The Navy has recognized these risks and is attempting to address them. However, if these efforts are not successful within the time constraints of the schedule, the Flight 0 ships may not provide the planned capability or the level of experimentation needed to inform the Flight 1 design. The Navy has developed a broad concept of operations document for LCS. 
Though there are no formal guidelines that describe how the concept of operations should be written or the level of detail it should contain, it is a high-level requirements document that describes how the user (in this case, the Navy) will use the weapon system to address mission needs. The concept of operations can also be used as guidance in developing testable system and software requirements specifications. In particular, the LCS concept of operations describes how the ship will contribute to U.S. Joint Force operations in countering threats in the littorals. These include mine warfare (detecting and neutralizing mines), antisubmarine warfare (detecting and engaging hostile submarines), and surface warfare (detecting, tracking, and engaging surface threats). In addition to these focused missions, the LCS concept of operations discusses how the LCS can perform inherent missions, such as support for special operations forces, maritime interception operations, and homeland defense-related missions. For example, the LCS concept of operations for maritime interception operations envisages using the ship’s core crew, supplemented by additional personnel for operations in higher threat areas, to provide boat crews and boarding teams to board suspect vessels, as well as using an embarked helicopter for assistance. The concept of operations is directed at Flight 0 but also provides a vision for follow-on ships. The document has also been used to build consensus among warfighters, the acquisition community, and the various industry teams involved in building LCS as to how the ship is intended to be used. The development process for the LCS concept of operations began with the Navy Warfare Development Command in late 2002 when it created the first version of the concept. The document described the projected threat context, capabilities, and operational employment of LCS to help industry with their designs. 
The Command based this version of the concept of operations on its experience with various pre-LCS studies and war games that employed fast, small ships with modular payloads. The Navy subsequently updated and expanded the concept of operations with new information that related to critical areas that impact, and are impacted by, LCS operations, including doctrine, training, and personnel. The Navy approved the LCS concept of operations in December 2004. The Navy is also continuing to refine concepts for how LCS and its mission systems will be used to address anti-access threats. These efforts include a Concept of Employment, which describes the way mission package systems are intended to be used to meet warfare requirements, and an analysis of performance data for individual systems in order to inform experiments on the actual operation of LCS mission systems. In addition, the Navy will incorporate lessons learned from Flight 0 operations into future versions of the LCS concept of operations. We compared the LCS concept of operations to the approved requirements for the ship and the capability gaps identified by the Navy and found that each of the capability gaps and LCS mission requirements was addressed in the concept of operations. For example, the requirements to address the mine warfare capability gap call for mines to be detected, identified, and neutralized. The concept of operations discusses how the LCS will address these requirements by using a combination of helicopters and unmanned vehicles to detect and identify mines, and either a helicopter or an explosive ordnance disposal detachment with unmanned underwater vehicles to neutralize mines. The LCS concept of operations includes several operational and logistical challenges that may increase the operational risk for LCS. One challenge is to reduce the number of sailors required to operate the ship’s critical mission systems. This challenge is exacerbated by the limited space on the ship. 
If this cannot be achieved, the Navy may have to make significant changes to the design or capability of follow-on ships. Another challenge is the logistics support required to meet the Navy’s goal of changing LCS mission packages within 4 days of arriving at an appropriate facility. A number of factors frame this challenge, including where packages are to be stored, how they are to be transported, and the proximity of LCS operating areas to ports required to swap mission packages. Any of these factors could increase the time required for a change in LCS mission packages once the decision has been made to do so. Other challenges include training; command, control, communications, computers, and intelligence; survivability; and the impact on the Navy’s force structure. The two versions of the MH-60 helicopter intended for use with LCS embody a number of these challenges. The helicopter is vital to each of the LCS’s focused missions as well as some of the ship’s inherent missions, such as maritime intercept operations. In order to operate a helicopter from LCS, a detachment of flight and maintenance personnel is required. The Navy’s current helicopter detachments on surface warships each number at least 20 people. When combined with the ship’s core mission crew, this number could exceed the capacity of LCS to house crews, thereby limiting the ability of LCS to operate other mission package systems and reducing the ship’s operational effectiveness. Additionally, the Navy’s plans for buying and fielding MH-60s do not yet include the quantities needed for the number of follow-on LCS ships the Navy intends to buy. Since the helicopter is critical to LCS’s concept of operations, the ship’s operations will be significantly limited if the helicopters are not bought and made available. 
To do this, the Navy needs to plan for the numbers of helicopters needed, modify its procurement plans, obtain the funds, build the helicopters, deliver them, conduct operational evaluations, and train the crews. The Navy recognizes these risk areas and has mitigation efforts underway in each area. For example, in the risk area of manning reduction, the Navy is using the “Sea Warrior” program to cross train sailors so that they are more able to multitask and perform a wider set of duties. The Navy is also conducting additional analysis to validate the maximum number of crewmembers needed and will make changes to crew accommodations if necessary. Further, the Navy is analyzing ways to reduce the size of helicopter detachments and is currently reevaluating its helicopter force structure and procurement plans to provide the MH-60s needed for LCS. In addition, the Navy has established an LCS risk management board to track and manage each of the risk areas as well as monitor the effectiveness of risk mitigation efforts. Table 4 lists the challenges for LCS and examples of Navy mitigation efforts. None of these challenges are insurmountable, given enough time and other resources to address them. However, if the Navy is unsuccessful in mitigating the risk areas by the time the first Flight 0 ships are delivered, LCS may be unable to meet even the limited mission capability planned for Flight 0. The Navy plans for a period of about 12 months between the time of delivery of the first Flight 0 ship and the start of construction for the first Flight 1 ship, provided the first Flight 0 ship is available on time. Further, only one mission package (mine warfare) will be available for testing and experimentation during that time. The last two Flight 0 ships will not be available before detailed design and construction of Flight 1 begins. 
The second Flight 0 ship and the first mission packages for antisubmarine and surface warfare will be delivered just before detailed design and construction of Flight 1 begins. Delays caused by any of the risk areas discussed above might further reduce the already limited time to adequately experiment with one Flight 0 ship in order to integrate lessons learned into planning and designing for Flight 1. A number of the technologies chosen for the LCS mission packages are not mature, increasing the risk that the first ships will be of limited utility and not allow sufficient time for experimentation to influence design for follow-on ships. Our work has shown that when key technologies are immature at the start of development, programs are at higher risk of being unable to deliver on schedule and within estimated costs. The remaining technologies are mature although some may require alterations to operate from LCS. Other issues beyond technology maturity could prevent some systems from being available in time for the first ship. Some technologies still in development face challenges going to production, while other mature technologies may not be available for LCS due to other Navy priorities. Challenges remain for technologies included on the LCS seaframe, including those for communications, software, launch and recovery, and command and control of off-board systems. As a result, the first Flight 0 ships may not be able to provide even the limited amount of mission capability envisaged for them. These factors could also impair the Navy’s ability to experiment with the Flight 0 ships and adequately gather and incorporate lessons learned into the designs for the Flight 1 ships. In order to perform its focused missions of finding and neutralizing mines, submarines, and small boats in the littorals, LCS will deploy mission packages consisting of helicopters and unmanned vehicles with a variety of sensors and weapons. 
Each of the interchangeable mission packages is tailored to a specific mission and is optimized for operations in the littorals. By using a mix of manned and unmanned vehicles, program officials hope to cover larger areas and take less time than existing systems require. The use of multiple mission packages is to be enabled by the design of the ship itself, which will use a number of common connections or interfaces that will work regardless of the individual technologies or systems used in the mission packages. In order to speed the development of the first LCS, the Navy planned for the mission packages to comprise technologies that are either already demonstrated in an operational environment and used by the Navy, and therefore fully mature, or very close to the end of the development cycle and near full maturity. However, in some cases the program office chose technologies that have not completed testing and are not considered mature. Some of these technologies will be delivered to LCS as prototypes or engineering development models and may not be fully mature. The program office has used an informed process in choosing which technologies to pursue for Flight 0, tracking the maturity of technologies and the plans for further development. Those technologies selected by the program that lack maturity are being monitored, and decisions about their inclusion are made based on results of further testing. Once initial choices were made, the Navy used an independent panel of Navy and industry technology experts to reassess the maturity of technologies and the efforts needed for risk reduction. The assessment paid particular attention to technologies at low levels of readiness, such as the Non-Line-Of-Sight missile launching system (also referred to as NetFires), and the environment in which the technologies are to be used. 
The first mission package to be developed will focus on mine warfare and will align with the delivery of the first ship in January 2007. The systems within this mission package contain both mature and immature technologies, although some mature technologies, like the remote mine-hunting vehicle, may need modifications to operate from LCS. Table 5 shows the maturity and availability of mission package technologies for mine warfare, based on the Navy’s current assessment. The first mission package is intended to be delivered with the first Flight 0 ship in fiscal year 2007. A number of critical mine warfare systems are not mature or will not be ready due to the unavailability or immaturity of subsystems. This could have a negative effect on LCS as the loss of certain technologies leads to a decrease in capabilities. The MH-60S helicopter is a key system for mine warfare, employing technologies for both the detection and the neutralization of mines in shallow water. While the helicopter has proven its ability to detect mines, two of the technologies for neutralization lack maturity. Testing on neutralization technologies continues but is not expected to be completed until after delivery of the first ship, limiting the ability of LCS to destroy sea-based mines. One system that could fill the gap in this area, the unmanned surface vehicle, also lacks maturity in key systems and ultimately may not be available. The first systems for antisubmarine and surface warfare packages of Spiral Alpha are scheduled to be available at the time the second Flight 0 ship is delivered in fiscal year 2008. Of these technologies, few are currently mature. Two of the systems used for detecting submarines, the unmanned surface vehicle and remote mine-hunting vehicle, lack maturity in key subsystems and will be delivered to LCS while still experimental. If these systems fail to meet requirements, LCS may have to depend on the MH-60R helicopter to find submarines. 
The MH-60R is an important system in both these missions, and while fully mature in the antisubmarine warfare configuration, it has not yet completed testing for surface warfare and is not expected to do so until September 2005. The helicopter has potential capability in both detecting and neutralizing surface targets, such as small boats, due to the types of sensors and weapons it carries. Tables 6 and 7 show the maturity and availability of mission package technologies for antisubmarine and surface warfare, respectively. These packages are scheduled to be delivered with the second Flight 0 ship in fiscal year 2008. In addition to challenges posed by the lack of mature technologies, there may be other challenges in obtaining some mission package systems in time for the first ships. The unmanned surface vehicle, a system used in all three mission packages, is being developed through an advanced concept technology demonstration and does not yet have a planned production schedule. The current development program for the unmanned surface vehicle ends in fiscal year 2005 and seeks only to prove the military utility of the vehicle. In order to procure the systems needed for LCS, a new program will have to be established to conclude development, finalize the design, and start production of the vehicles. Other technologies have planned production schedules but need to complete significant demonstrations and tests before they are able to deploy operationally. The vertical takeoff unmanned aerial vehicle, another system used in all mission packages, underwent a major redesign, and the first deliveries to LCS will not represent a final design. The remote mine-hunting vehicle only recently began development as an antisubmarine warfare platform and remains in development as an advanced concept technology demonstration. 
These factors could jeopardize the dates established for the delivery of the LCS mission packages and may ultimately affect the ability of LCS to execute many of the missions assigned to it. Other technologies, while mature, may not be available to LCS in time for the ship’s deployment due to other Navy priorities. For example, the MH-60 helicopters, in both the MH-60R and MH-60S configurations, are scheduled to complete testing in fiscal year 2007, but may not be fully available until fiscal year 2009, assuming the Navy makes them available for LCS, because of training requirements. This could have an impact on LCS capabilities in all missions. The MH-60S is a key system for mine warfare; without this helicopter, LCS would lose some capability to detect certain mines and would be limited in its ability to neutralize others. While LCS will still be capable of detecting and destroying mines in the littorals without the helicopter, it will do so more slowly, which reduces operational effectiveness. If the MH-60R is unavailable, the ability to neutralize submarines from LCS is severely compromised, as no other mission package system is planned to provide a neutralization capacity. Older, less capable versions of the MH-60 helicopter can be used in this mission, but changes would be needed in the ship’s communications systems. The Navy acknowledges that no helicopters will be available for LCS operations until fiscal year 2009 and is working to align crew training schedules to permit operations with LCS. Challenges also remain for systems on the LCS seaframe, including technologies for communications, software, launch and recovery, and command and control of off-board systems. Further tests of these systems are expected before ship installation. 
In addition to limiting the operational capability of the Flight 0 ships, technology maturity and availability issues could limit the time available for the Navy to adequately experiment with operation of the seaframe and mission packages and gather valuable lessons for incorporation into Flight 1 ships. Detailed design and construction of the first Flight 1 ship is currently scheduled to begin in fiscal year 2008. Spiral Alpha mission packages for antisubmarine warfare and surface warfare are not scheduled for delivery to the Flight 0 ships until fiscal year 2008, just as detailed design and construction for Flight 1 is set to begin. If technology immaturity causes any of the mission package systems to slip to later delivery dates, the opportunity to experiment and gather lessons learned from these systems aboard the Flight 0 ships would be lost, unless the time allowed for such experimentation is extended. If the helicopters are not available for operations until fiscal year 2009, input on the full impact of their operations could be lost as well. The cost to procure the first flight of LCS ships remains uncertain, particularly regarding the mission packages. The basis of the procurement costs for the LCS seaframe appears to be more defined because the Navy has conducted a series of cost analyses to investigate the challenges in detailed design and construction. The Navy seeks to stabilize seaframe costs by establishing a $220 million cost target and working to meet this target by trading between capability and cost while assuring that seaframe performance meets threshold requirements. Nevertheless, seaframe costs could be affected by changes to ship design and materials that might be necessary as a result of changes to naval ship standards. As many of the systems for the mission packages lack maturity, cost data for these technologies are not as firm. Other mission package costs are not covered by LCS program cost analyses. 
For programs like LCS, an independent cost estimate by the Office of the Secretary of Defense normally provides additional confidence in program cost estimates, but such an estimate will not be done on LCS until Flight 1. In addition to issues with procurement costs, nonrecurring development costs for the LCS could expand, as systems both in the mission packages and the seaframe remain in development. The Navy's procurement cost target for Flight 0 is about $1.5 billion (fiscal year 2005 dollars). The cost target for each of the four Flight 0 ships is approximately $370 million. This includes $220 million for the seaframe and approximately $150 million for mission packages (the cost of six packages averaged over four ships). The Navy currently estimates that the mission packages for Flight 0 will cost approximately $548 million, or approximately $137 million per ship when the cost of the six packages is averaged over the four ships. This is about $13 million below the mission package target. Table 8 shows the current cost estimates for the mission packages for Flight 0. The estimated cost for seaframe detailed design and construction is considered competition sensitive and is not discussed in detail in this report. The Navy has conducted a number of cost reviews for procurement of the LCS seaframe and mission packages to support decision making at key points in the program. One of the most detailed of these reviews took the form of a cost assessment used to support the program's initiation. In this assessment, the program office analyzed cost data provided by the contractor to establish a preliminary cost and challenged some of the assumptions behind these costs. The Cost Analysis and Improvement Group of the Office of the Secretary of Defense also performed cost assessments for Flight 0. More recently, a cost estimate for procuring the seaframe and mission packages of Flight 0 was performed by the Navy and became the official program estimate.
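The per-ship mission package arithmetic above can be restated as a short, purely illustrative calculation. The dollar figures come directly from this report; the script itself is only a sketch of the averaging, not an official cost model:

```python
# Illustrative check of the Flight 0 mission package cost arithmetic.
# All dollar figures are in millions of fiscal year 2005 dollars and
# come from the estimates cited in this report.
total_package_cost = 548   # estimated cost of the six mission packages
flight0_ships = 4          # ships over which the cost is averaged
per_ship_target = 150      # per-ship mission package cost target

per_ship_estimate = total_package_cost / flight0_ships     # 137.0
margin_below_target = per_ship_target - per_ship_estimate  # 13.0

print(f"Per-ship estimate: about ${per_ship_estimate:.0f} million")
print(f"Margin below target: about ${margin_below_target:.0f} million")
```

This reproduces the report's figures: an estimate of about $137 million per ship, roughly $13 million below the $150 million target.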
A cost estimate differs from an assessment in that it goes into greater depth in challenging the assumptions behind costs provided by the contractors and may use different methodologies and assumptions to arrive at a final number. As a result, the program estimate may differ from the price provided by contractors and offers a more detailed cost analysis for decision making. The basis of the procurement costs for the LCS seaframe appears to have become more defined over time as successive cost analyses have been developed to anticipate the challenges in detailed design and construction. Analyses included recommendations to add funds to mitigate changes to seaframe design as well as firm fixed price quotes for some materials. In addition, the Navy seeks to manage seaframe costs by establishing a $150 to $220 million cost range, which the Navy considers aggressive, and has been working to meet this range by trading between capability and cost while assuring that seaframe performance meets requirements. Any capabilities in the seaframe that exceed the requirements established by the Navy are considered trade space areas, in which less expensive systems may be substituted at the cost of lower performance. Each trade is analyzed for its impact on cost and operational capability by a team of program officials and is fully vetted through the chain of command. One factor that increases the risk to seaframe cost estimates is the application of the current changes in the naval vessel rules for the design and construction of surface ships. The unconventional hull designs and materials used in both Flight 0 LCS designs reflect types of ships the Navy has not previously built. Changes to the rules are occurring at the same time as development of the LCS. The process of meeting these rules could lead to changes in the designs and materials used. Such changes may increase uncertainty in seaframe procurement and life-cycle costs.
The costs for the first spiral of mission packages are less defined, as many of the technologies are not mature. For example, the unmanned surface vehicle remains in an advanced concept technology demonstration program into fiscal year 2005. This program seeks only to prove the military utility of the vehicle. Any cost data that emerge as a result of tests and construction of test vehicles do not accurately represent the final cost of the system and are therefore preliminary. The vehicle may also use different subsystems or have different capabilities when used on LCS. This would further change actual procurement costs. Additional confidence in a program's costs is usually gained through an independent cost estimate done outside the Navy. According to a DOD acquisition instruction, an independent cost estimate should be completed as part of the process that normally authorizes the lead ship, referred to as the Milestone B decision. For programs like LCS, an independent group, like the Cost Analysis and Improvement Group, is required to perform such an estimate. While this group performed assessments of Flight 0 costs, it has not yet performed a cost estimate for LCS. On the LCS program, the Flight 0 ships are considered to be predecessors to the Milestone B decision. The Milestone B decision will authorize the first Flight 1 ship. The Navy considers this to be the point at which an independent estimate is required. An independent cost estimate is thus planned for the authorization of Flight 1 in January 2007. While DOD would not have been prevented from conducting an independent estimate for Flight 0, given the short time in which the Navy solicited and selected designs for Flight 0, it is unclear whether there was enough time to do so. Other mission package costs are not covered by LCS program cost analyses but could have an effect on the broader Navy budget.
For example, mission package costs do not include procurement costs for the MH-60R and MH-60S helicopters utilized in LCS operations. The Navy estimates that the procurement cost for each MH-60R is about $36 million and the cost for each MH-60S is about $23 million. The number of helicopters acquired by the Navy is determined by the helicopter concept of operations, which has not yet been modified to reflect the deployment of LCS. Given the reliance of LCS mission packages on these platforms, the costs for these systems, or the number needed for operations, could increase. The developmental nature of the mission package technologies may affect more than the procurement, or recurring, costs of LCS. Development and integration of technologies on many of the mission package systems is not complete. Testing for these systems will continue, in some cases, up to the delivery date of the mission packages. Should these tests not go as planned, or if more time and money are needed for integration and demonstration, development costs could rise. Since the development of mission package systems is only partially funded by LCS, the costs for continued development could spread to other programs. Alternatively, the decision may be made to reduce the quantities of certain technologies aboard LCS, as was the case with the Advanced Deployable System. Some seaframe technologies remain developmental as well, such as the launch and recovery systems. Unlike the mission packages, the LCS program office would assume any increase in development funding that occurs on seaframe systems. The Navy has embarked on a plan to construct four Flight 0 ships, complete development of and procure multiple mission packages, experiment with the new ships, and commit to the construction of follow-on ships in a span of only four years. The Flight 1 and follow-on designs form the basis of a class of ships that may eventually total more than 50.
At this point, we see three risks that could affect the success of the program. First, because the Navy focused the surface warfare threat and requirements analysis exclusively on small boat swarms, the risks posed by larger surface threats when the LCS operates independently from nearby supporting U.S. forces have not yet been assessed. Second is the availability of the MH-60 helicopter, in light of its criticality to all LCS missions. Experimentation with the MH-60 will provide key information on mission performance, operational issues such as manning, and technology maturity. Thus, it is essential that the helicopters, equipped with the systems needed for LCS missions, be available for testing on the Flight 0 ships. In addition, if sufficient quantities of MH-60s are not available for the Flight 1 ships, the Navy's ability to deploy these ships operationally as intended would be reduced. Making the MH-60s available requires meeting a number of challenges, including developing requirements, force structure planning, budgeting, delivering the aircraft, and training air crews. Third, the Navy intends to begin considering multiple designs for Flight 1 in fiscal year 2006 and to begin detailed design and construction of a single design in fiscal year 2008. By 2007, only one Flight 0 ship will have been delivered and only one mission package will be available, provided there are no delays for either the ship or the mission package. Although maturing technologies and evaluating potential designs for Flight 1 while Flight 0 ships are being delivered could be beneficial, committing to a single design for follow-on ships before gaining the benefit of tests and experiments with the two Flight 0 designs increases the risk to the Flight 1 design.
The current schedule allows about 12 months for the Navy to conduct operational experiments to evaluate the first Flight 0 seaframe design; the mine warfare mission package; and the doctrinal, logistical, technology maturity, and other operational challenges the Navy has identified before committing to production of follow-on ships. The Navy's schedule does not allow for operational experimentation with the other three ships or the antisubmarine or surface warfare mission packages before Flight 1 is begun. Setbacks in any of these areas further increase the risk that the Navy will not be able to sufficiently evaluate and experiment with Flight 0 ships and incorporate lessons learned into the design and construction of the Flight 1 ships. To help the Navy assess and mitigate the operational, force structure, and technology risks associated with LCS, we are making the following three recommendations: To determine whether surface threats larger than small boats pose risks to the LCS when operating independently, and to mitigate any risks the Navy subsequently identifies, we recommend that the Secretary of Defense direct the Secretary of the Navy to conduct an analysis of the effect of a surface threat larger than small boats on LCS operations and the impact on other naval forces in support of those operations. To address challenges associated with integrating the MH-60 helicopter into LCS operations, we recommend that the Secretary of Defense direct that the Navy include in its ongoing evaluation of helicopter integration with LCS (1) an evaluation of the numbers and budget impact of helicopters required to support future LCS ships and (2) an examination of how to address the manning, technology, and logistical challenges of operating the helicopters from LCS.
To allow the Navy to take full advantage of the technical and operational maturation of the Flight 0 ships before committing to the much larger purchases of follow-on ships, we recommend that the Secretary of Defense direct the Navy to revise its acquisition strategy to ensure that it has sufficiently experimented with both Flight 0 ship designs, captured lessons learned from Flight 0 operations with more than one of the mission packages, and mitigated operational and technology risks before the selection of a design for the award of a detailed design and construction contract for Flight 1 is authorized. In written comments on a draft of this report, DOD generally agreed with the intent of our recommendations. DOD discussed steps it is currently taking as well as actions it plans to take to address these recommendations. In response to our recommendation that the Navy analyze the effect of a larger surface threat on LCS operations, DOD indicated that, in addition to efforts it already has underway to analyze elements of the threats facing LCS, the Navy will assess the impact of larger surface threats on LCS as part of the capabilities development process for Flight 1. Using the analyses required in this process should help the Navy clarify the extent to which a larger surface threat poses a risk to LCS operations. In commenting on its plans to address helicopter needs and challenges, DOD indicated that it is currently assessing the helicopter force structure, including both manned and unmanned aerial vehicles. While this may clarify the Navy's helicopter force structure requirements, we continue to believe that, because of the importance of helicopters to LCS operations and the numbers of LCS the Navy plans to acquire, the Navy should also analyze the budgetary impact of potential helicopter force structure changes.
In response to our recommendation that the Navy revise its acquisition strategy to ensure time to experiment with Flight 0 designs, DOD stated that, before award of Flight 1 contracts, it will review the acquisition strategy to ensure the strategy adequately provides for experimentation, lessons learned, and risk mitigation. DOD stated that it is balancing the acquisition risks with the risk of delaying closure of the warfighting gaps that LCS will fill. It also stated that mission package systems will potentially be spiraled on a different cycle time from that of the historically more stable hull and systems that make up the seaframe. We believe the separation of development spirals for the mission packages and seaframe has merit. However, decisions leading to the award of a detailed design contract for the Flight 1 seaframe must go beyond technology risks. Because the Navy plans to begin design of the Flight 1 seaframe with a new development effort and competition, it is important to gain experience with the two Flight 0 seaframe designs that are being acquired so that the benefits of this experimentation can be realized in the design and development of a new seaframe. Experimentation with Flight 0 in terms of basic mission performance, swapping mission packages, actual manning demands, and operations with multiple LCS ships are all factors that could have a significant effect on the Flight 1 ship design. DOD also noted that its plan for acquiring LCS provides for multiple flights. Under this strategy, DOD would have more opportunities beyond the fiscal year 2008 Flight 1 decision to upgrade mission packages and seaframes as the 50 or so remaining ships are bought. We have made changes in the report to reflect this strategy. However, we do not believe it lessens the value of incorporating experience from Flight 0 operations into the design for Flight 1. DOD's written comments are included in their entirety in appendix II.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Navy. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Paul Francis at (202) 512-2811 or Karen Zuckerstein, Assistant Director, at (202) 512-6785. Key staff members who contributed to this report are listed in appendix III. To assess the basis of the LCS requirements and the concept of operations, we obtained and analyzed Navy wargames and operational plans, requirements documents, and other sources used by the Navy to identify capability gaps in the littoral waters. We conducted our own analysis of all critical concept, requirements, and acquisition documents required as part of the Joint Capabilities Integration and Development System to determine the extent to which the Navy (1) developed specific requirements to address capability gaps and examined materiel and nonmateriel solutions to meet those requirements and (2) developed a concept of operations that addressed each of the identified requirements as well as critical doctrinal, logistical, and operational considerations. We compared the sources of the requirements for the LCS, such as analyses of military operations based on specific scenarios and threat assessments, to the final validated requirements document (Capabilities Development Document) and highlighted each capability gap. We identified the capability gaps in the Navy's functional analysis for each of the warfare missions—mine warfare, antisubmarine warfare, and surface warfare. This included looking at the Navy's standards that were used to measure how well the current and programmed joint forces could mitigate the warfare threats in the littorals during a major combat operation.
We then reviewed the materiel and nonmateriel solutions identified by the Navy that could be used as alternative solutions for mitigating the gaps. We also conducted a comparative analysis of the Initial Capabilities Document with the validated requirements in the Capabilities Development Document to highlight additional gaps. We also compared the requirements, as developed in the Capabilities Development Document and the Preliminary Design Interim Requirements Document, to the LCS operating concepts and capabilities, as developed in the Navy's two versions of the concept of operations. To assess the Navy's progress in defining the concept of operations, we used a gap analysis, similar to the one used for the requirements, to trace the extent to which the concept of operations was developed. We compared the LCS concept of operations to the ship's requirements (specifically the Capabilities Development Document) and the identified capability gaps to determine whether the LCS concept of operations fulfilled the requirements. We also discussed with Navy officials the extent to which they included doctrinal and operational challenges, the Navy's assessment of the risks stemming from these challenges, and its mitigation efforts. To assess the progress of technology development in LCS mission packages, we reviewed the basis of the Navy's estimation of technology readiness and plans to bring these technologies to full maturity. As part of this assessment, we analyzed the Technology Readiness Assessment performed by the Navy and reviewed development and testing plans developed by the program offices. As a measure of technology maturity, we utilized Technology Readiness Levels, the same metric used by the Navy in the Technology Readiness Assessment. The standard we used for assessing technology maturity is the demonstration of form, fit, and function in an operational environment.
This standard is based on defined technology readiness levels developed by the National Aeronautics and Space Administration and adopted by DOD. Our analysis was supplemented by interviews with officials from the LCS program offices and other Navy programs supporting the mission packages. Our audit focused on technologies for Flight 0, as technologies for Flight 1 have not been selected. To assess the basis of LCS costs, we reviewed the cost analyses prepared by the contractors and the LCS program office. We analyzed the basis of costs for design and construction of the seaframe as well as the development and procurement costs of mission packages for Flight 0. Our analysis was supplemented by interviews with the program offices and contractors involved in LCS. Costs for the operation of Flight 0 and the procurement of Flight 1 have not been estimated. Details of the costs and technologies for the seaframe are sensitive due to the ongoing competition. We therefore do not discuss these at length. To address our objectives, we visited and interviewed officials from Navy headquarters' surface warfare requirements office; the LCS program offices; the mine warfare program office; the MH-60 program office; the Unmanned Aerial Vehicles program office; the Naval Surface Warfare Center, Dahlgren Division; the Naval Undersea Warfare Center; the Naval War College; and the Navy Warfare Development Command. We also interviewed officials from the Office of the Secretary of Defense's Program Analysis and Evaluation division, General Dynamics, and Lockheed Martin. We conducted our review from July 2004 through December 2004 in accordance with generally accepted government auditing standards. In addition to those named above, Richard G. Payne, Jerome A. Brown, J. Kristopher Keener, Joseph W. Kirschbaum, James C. Lawson, Jodie M. Sandel, Angela D. Thomas, Roderick W. Rodgers, and Bethann E. Ritter made key contributions to this report.
To conduct operations in littorals--shallow coastal waters--the Navy plans to build a new class of surface warship: the Littoral Combat Ship (LCS). LCS is being designed to accomplish its missions through systems that operate at a distance from the ship, such as helicopters and unmanned vehicles, and that will be contained in interchangeable mission packages. The Navy is using an accelerated approach to buy the LCS, building the ships in "flights." Flight 0, consisting of four ships, will provide limited capability and test the LCS concept. The schedule allows 12 months between the delivery of the first Flight 0 ship and the start of detailed design and construction for Flight 1 ships. The estimated procurement cost of the Flight 0 ships is $1.5 billion. The Congress directed GAO to review the LCS program. This report assesses the analytical basis of LCS requirements; the Navy's progress in defining the concept of operations; the technical maturity of the mission packages; and the basis of recurring costs for LCS. The formal analysis of requirements for U.S. littoral combat operations--conducted after the Navy established the LCS program--examined a number of options, such as the extent to which existing fleet assets or joint capabilities could be used. While the Navy concluded that the LCS remained the best option, it focused on LCS requirements for combating small boats. The Navy did not conduct an analysis of the impact of larger surface threats that LCS may face. Such threats may increase the risk to LCS operations when no other nearby U.S. forces are available to help. The Navy has developed both a broad concept and more detailed plans on how the LCS will be employed. It has also identified a number of challenges that could put the LCS concept at risk, such as manning, logistics, and communications.
For example, reduced manning--a key goal of the LCS program--may not be achievable because maintaining and operating the ship's mission packages, such as the MH-60 helicopter, may require more sailors than the current design allows. Further, the Navy has not yet incorporated the numbers of helicopters that will be needed to fulfill LCS's concept of operation into its force structure and procurement plans. If the Navy's efforts to meet these challenges are not successful, the Navy may not have sufficient time to experiment with the Flight 0 ships and integrate lessons learned into planning and designing for follow-on ships. While the Navy designed the first LCS to rely on proven technologies and systems, a number of technologies to be used in LCS's mission packages have yet to be sufficiently matured--that is, they have not been demonstrated in an operational environment--increasing the risk of cost and schedule increases if the technologies do not work as intended. Technologies must also be demonstrated for systems on the LCS seaframe. Other factors may affect the availability of mature technologies and subsystems, such as making the modifications necessary for adaptation to the LCS and transitioning projects from the laboratory to production. Collectively, these technology issues pose an additional challenge to the Navy's ability to sufficiently experiment with Flight 0 ships in time to inform the design efforts for follow-on ships. Procurement costs for the Flight 0 ships remain uncertain. The basis for the seaframe cost target--$220 million--appears to be more defined than for the mission packages, as the Navy has performed various cost analyses that consider the challenges in detailed design and construction. The Navy seeks to meet the cost target by trading between capability and cost. Cost data for the Flight 0 mission packages are not as firm in part because of the uncertainties associated with immature technologies.
Cutters, patrol boats, airplanes, and helicopters are all critical to meeting the Coast Guard's deepwater missions that are beyond the range of shore-based small boats. These missions include actions such as enforcing fisheries laws, intercepting drug smugglers and illegal immigrants, and conducting search and rescue missions far out at sea. Many of the Coast Guard's current cutters were built in the 1960s, and many of the aircraft in the 1970s and 1980s. Although these ships and aircraft have been upgraded in a number of ways since being acquired, the Coast Guard has documented a number of performance and support problems, such as the following: poor sensors and night operations capability on both aircraft and cutters, limited ability of cutters and aircraft to operate effectively together, inadequate communications, and high operating and maintenance costs. In a November 1995 mission analysis report, the Coast Guard cited its rapidly aging deepwater fleet as justification for beginning a project to acquire new ships and aircraft. In 1998, we reported that the service life of the Coast Guard's deepwater ships and aircraft might be much longer than the Coast Guard originally estimated in its 1995 analysis. We recommended that the Coast Guard develop additional information on the remaining service life of its ships and aircraft. In 1998, the Coast Guard determined that the service life of the various aircraft classes could be extended by about 11 to 28 years over original estimates, assuming that increased maintenance and upgrades occur. In addition, by January 2001, the Coast Guard had issued an updated analysis that extended the service life of two of the four ship classes by an additional 5 years, assuming that increased maintenance and upgrades occur. The Coast Guard provided this information to its contractors so that they could use it in developing their proposals.
In December 1999, an interagency task force on the roles and missions of the Coast Guard reported that recapitalization of the Coast Guard's deepwater capability is a near-term national priority and endorsed the Deepwater Project's process and timeline. Although our earlier work took issue with the Coast Guard's initial analysis of how soon its deepwater assets would need to be replaced, we do not now take issue with the Coast Guard's position that it needs to modernize these assets, especially given the additional studies completed since our 1998 report. As directed by the Congress, this report examines the acquisition approach. The acquisition approach for the Deepwater Project is innovative. Rather than using the traditional approach of replacing an individual class of ships or aircraft, the Coast Guard has adopted a "system-of-systems" approach intended to integrate ships, aircraft, sensors, and communication links together as a system to accomplish mission objectives more effectively. The Coast Guard expects that this approach will both improve the effectiveness of deepwater operations and reduce operating costs. The project has two basic phases—a design phase (called "concept exploration" and known as phase 1) and a final proposal preparation and procurement phase (called "demonstration and validation/full-scale development" and known as phase 2). Phase 1 began in March 1998. As part of this phase, the Coast Guard contracted with three competing teams of contractors to conceive and begin designing a proposed deepwater system. Each proposal is to be based on meeting a set of performance specifications developed by the Coast Guard. Each team was instructed to develop its proposal on a funding stream of $300 million for the first year and $500 million annually until the project is completed. These amounts are in constant 1998 dollars; actual funding would be higher to account for inflation.
Phase 1 ends with each team's development of a proposed deepwater concept, the functional design for which will be 80 percent complete. In phase 2, which begins in June 2001, the Coast Guard plans to issue a request for proposals (RFP) to the three industry teams to develop final proposals. The current schedule calls for these proposals to be completed and submitted to the Coast Guard during the last quarter of fiscal year 2001. The Coast Guard will evaluate which proposal provides the best value for the government, as gauged mainly by a combination of improvements in operational effectiveness and minimization of total ownership costs. Other evaluation factors include the technical feasibility of the proposed design and the management capability of the systems integrator. When the deepwater contract is awarded in early 2002, the contract will be between the Coast Guard and the prime contractor, known as the "systems integrator," of the winning contracting team. This systems integrator will be responsible for ensuring that each ship, aircraft, or other piece of equipment is delivered on time and in accordance with agreed-upon prices. The systems integrator will also be called on to deliver the complete deepwater system in compliance with the Coast Guard's system performance specifications. The Coast Guard adopted this approach because it does not believe it has the technical expertise or the resources to be a systems integrator. The Coast Guard also believes that a team of contractors led by a systems integrator would provide the best method of acquiring a set of ships, aircraft, and other equipment and would optimize improvements in operational effectiveness and total ownership costs. This contracting approach could thus result in a long-term contractual arrangement with a single contractor and its team of subcontractors. The Coast Guard plans to have an initial 5-year contract with the systems integrator.
The systems integrator would receive a base award for management and system integration services. Assuming the project proceeds as planned, task and delivery orders for deepwater equipment would be issued by the Coast Guard in accordance with the systems integrator's implementation schedule. If the performance of the systems integrator is satisfactory for each award-term contract, the Coast Guard plans to award follow-on contracts with the same systems integrator (as many as five successive 5-year award-term contracts). The Coast Guard plans to negotiate prices with the systems integrator on the follow-on contracts. The Congress is at a critical juncture with the Deepwater Project because the success of the contracting approach rests heavily on the Coast Guard being able to count on sustained funding of about $500 million (in 1998 dollars) for 20 years or more. The contracting approach the Coast Guard has selected is not easily adaptable to lower levels of funding without stretching the schedule and increasing costs. However, there are signs that funding levels may be lower than the planned amount. Although the administration's budget request for the Deepwater Project for fiscal year 2002 will be about 10 percent less than the project's planned first-year funding, the average shortfall for fiscal years 2003 to 2006 is about 20 percent. Moreover, much of the funding for fiscal year 2002 is from the Western Hemisphere Drug Elimination Act (P.L. 105-277), a source that will not be available after this year unless the act is extended. Capital funding for the Coast Guard is in competition with many other potential uses of federal funds within the Coast Guard itself, DOT as a whole, and other federal agencies. To accommodate the Deepwater Project, the Coast Guard is proposing to limit spending on its other ongoing capital projects to levels far below where they have been in decades.
Given these various budgetary pressures, it appears advisable to have contractors develop their plans around a lesser amount. The Coast Guard’s approach, however, is inextricably tied to the more optimistic option. By using a lower, more realistic funding level aligned with OMB budget projections, the Coast Guard could lessen the risk of future cost increases, schedule stretch-outs, and low system performance levels. The contract approach that the Coast Guard has decided to use for the Deepwater Project depends on a large, sustained, and stable funding stream over the next 2 to 3 decades. The approach is based on acquiring ships and aircraft on the contractor’s proposed schedule so that they will form a “system of systems.” Substantial funding shortfalls can not only affect the ships and aircraft scheduled for acquisition in the short term but can also set off ripples affecting the acquisition of deepwater equipment for years to come. Adjustments that may be needed include revising the implementation plan for delivering equipment, renegotiating prices for deepwater equipment, and negotiating new cost and performance baselines with the systems integrator. Such adjustments would not only be costly but could also slow the schedule to the point that (1) total ownership costs would rise and (2) advantages projected in the contractor’s proposal, such as improvements in operational effectiveness, would not materialize. At the extreme, funding shortfalls would affect the Coast Guard’s ability to proceed with the contract as well as the agency’s ability to perform its deepwater missions. The decision on funding the Deepwater Project rests ultimately with the Congress, but because this decision has yet to be made, we used the administration’s budget proposal for the Coast Guard (as contained in the budget documents prepared by OMB) as a starting point for analyzing the funding issue.
OMB’s budget targets for fiscal years 2002 through 2006 do not propose specific amounts for the Deepwater Project; rather, they provide a single amount for all Coast Guard capital projects. As table 1 shows, this overall total ranges from $659 million (in year-of-expenditure dollars) in fiscal year 2002 to $719 million in fiscal year 2006. Because the Coast Guard has many other capital projects under way besides the Deepwater Project, it must decide how this money will be allocated among them. After receiving the budget targets from OMB in early March 2001, the Coast Guard estimated that the amount available for the Deepwater Project would range from $338 million (in year-of-expenditure dollars) in fiscal year 2002 to $547 million in fiscal year 2006. If the Coast Guard proceeds with its current plans in issuing the RFP, contractors will be instructed to develop plans around a much higher funding stream than is available under the OMB budget targets. For example, the funding stream that the Coast Guard currently plans to use for the project ($350 million the first year and $525 million in subsequent years) is in 1998 dollars. Adjusted for inflation, this figure becomes $373 million in fiscal year 2002 (compared with OMB’s target of $338 million) and $569 million in fiscal year 2003 (compared with the target of $396 million). By the end of fiscal year 2006, the cumulative gap will total $496 million. While this shortfall may not seem significant in the context of the overall federal budget, it is significant in the context of DOT’s total budget, especially given the competition among DOT agencies for available funding—a point that we discuss in more detail below. Figure 1 shows that the annual gap between planned funding and the amount available under OMB budget targets ranges from $35 million to $173 million.
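The gap figures above follow from simple subtraction. As an illustrative check (not part of the report’s analysis), the sketch below recomputes the annual gaps for the two fiscal years for which both the inflation-adjusted planned amount and the OMB target are quoted; all figures are in millions of year-of-expenditure dollars.

```python
# Illustrative arithmetic check of the funding gaps cited in the report.
# Only FY2002 and FY2003 are shown, because those are the two years for
# which the text quotes both figures explicitly (millions of dollars).
planned = {"FY2002": 373, "FY2003": 569}      # Coast Guard plan, inflation-adjusted
omb_target = {"FY2002": 338, "FY2003": 396}   # OMB budget targets

# Annual gap: planned funding minus the amount available under OMB targets.
gap = {fy: planned[fy] - omb_target[fy] for fy in planned}
print(gap)  # {'FY2002': 35, 'FY2003': 173}
```

These two years happen to bound the $35 million to $173 million annual range shown in figure 1; the $496 million cumulative gap through fiscal year 2006 also includes years whose individual amounts are not quoted in the text.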
Although the Coast Guard may be able to begin the project as planned at the level of funding provided for fiscal year 2002, the success of this first year may provide a false sense of security about how easy it will be to sustain projected funding levels. In fiscal year 2002, spending is relatively low compared with later years. Coast Guard officials said they plan to fully fund the contractor’s share of the planned amount and trim their own administrative expenses related to the project. In subsequent years, when planned payments to contractors rise much more steeply than amounts available, the gap may be far less manageable. In addition, the project’s first-year funding comes mainly from a source that will soon be exhausted. About $243 million of the amount proposed for the Deepwater Project in fiscal year 2002 would come from funds authorized in the 1998 Western Hemisphere Drug Elimination Act (P.L. 105-277), which will expire this year unless it is extended. OMB officials told us that the administration plans to request additional appropriations under this act in fiscal years 2003 to 2006. Another concern is the potential effect of the Deepwater Project on other Coast Guard capital projects planned or under way. These other capital needs include, for example, modernizing communication equipment used to support search and rescue activities, and upgrading various shoreside facilities, such as boat stations and housing. The overall amount the Coast Guard now plans to spend on these projects is substantially less than the agency indicated in plans just a year ago. For example, in 2000, the Coast Guard’s planning documents proposed spending $475 million on nondeepwater projects in 2005. However, under the current plan, that spending level would drop by more than half, to about $196 million.
In part, these proposals reflect the fact that some of the other capital projects, such as the buoy tender project, will be winding down, and the costs of other projects will be absorbed as part of the Deepwater Project. Other rather dramatic reductions in capital projects, however, cannot be explained as easily. For example, estimated spending for improved shore facilities in fiscal year 2005 dropped from $147 million in last year’s plan to $59 million in the current plan. If estimates in the current plan hold true, fiscal year 2006 spending for nondeepwater projects will be at its lowest level in decades, calling into question whether the agency’s estimates are sufficient to maintain its current nondeepwater infrastructure. The presence of these other capital needs cannot be forgotten in assessing how ready the Coast Guard is to assume the risks of the Deepwater Project. The fiscal environment in which the Coast Guard must obtain funds for the Deepwater Project and other capital needs is further complicated by competition for funds with other DOT priorities. Obtaining additional funding for the Coast Guard within the DOT budget is likely to be difficult because of competition with other entities within the DOT appropriation, such as the FAA and Amtrak, for available discretionary funding. For example, recent action by the Congress limited FAA’s ability to use a separate funding source (the Airport and Airway Trust Fund) to fund FAA’s operations. As a result, funding for FAA’s operations now competes for the same limited DOT dollars on which the Deepwater Project would rely. FAA also expects its operating costs to increase to $7.4 billion by 2003, a 42-percent increase from 1998 levels. Similarly, Amtrak estimates that its capital needs alone will amount to about $1.5 billion annually through fiscal year 2020, part of which would come from the DOT budget. Outside of DOT, the overall budget process is still driven by caps in discretionary spending.
If these caps (which currently cover through fiscal year 2002) are extended as the administration has proposed, funding for the Deepwater Project would have to come from cuts in some other agency or program. The percentage increase in the Coast Guard’s budget request is among the largest of all federal agencies. However, the Coast Guard is basing its plans for the Deepwater Project on another major boost in funding beyond 2002. Thus, for all these reasons, sustaining the Deepwater Project at the funding level the Coast Guard is currently planning to use in its RFP appears to be a difficult task. Our concern about this risk is not new. In several previous reports on the Coast Guard’s planning for the Deepwater Project, we expressed concern about the Coast Guard overestimating the amount of funding that would be available in the future for the project. The Coast Guard agrees that funding for the Deepwater Project is high risk and that its approach provides limited funding flexibility, but it believes it should keep its current approach of developing the project around the planned funding stream. Coast Guard managers believe that a deepwater system funded at planned levels provides the optimum system to meet deepwater requirements. The agency also believes that OMB budget targets could rise in the future and that the Congress could appropriate more funds to meet the agency’s capital needs. However, the potential risks are substantial and another strategy appears warranted. That strategy would be to develop a lower funding scenario around which the contractors can develop their proposals. If the project needs to be adjusted to a lower, more realistic funding stream, the time to do so is before the contract proposals are finished later this year. Directing contractors to develop proposals around a lower funding scenario aligned with OMB targets would have several advantages.
First, the Coast Guard would have greater opportunity to evaluate which proposal will produce the best value to the government within likely budget constraints. Second, the agency would be in a better position to hold the contractor accountable for delivering a system that meets original schedule and cost estimates if it selects the plan developed at the lower funding level. Using realistic funding expectations will reduce the risk of schedule stretch-outs and cost increases with the contractor in a sole-source environment after the contract is awarded—a situation in which the government’s leverage is reduced because it does not have the benefit of competition for obtaining a fair and reasonable price. Any projection about likely funding levels for a project that lasts as long as the Deepwater Project will involve an element of uncertainty and risk. The Coast Guard’s current funding scenario exacerbates that risk. Because the Coast Guard has not yet issued its request for contractors to submit their best and final proposals, there is still time to mitigate the risk by identifying a lower funding stream that contractors should use in developing their proposals. The deepwater contracting approach that the Coast Guard adopted has never been tried on a contract this large, extending over 20 or more years. At the time it was adopted, there was little evidence that the Coast Guard had analyzed whether the approach carried any inherent difficulties for ensuring best value to the government and, if so, what to do about them. We and others who are involved in reviewing this approach, such as OMB and the Office of Federal Procurement Policy, have expressed concerns about the potential lack of competition during the project’s later years and the reliance on a single contractor for procuring so much of the deepwater equipment.
The Coast Guard is still conducting this analysis on its approach as it moves into phase 2 of the project and has delayed some of its key milestones to consider these concerns. When the Coast Guard selected the contract approach in May 2000, it had not yet documented the risks involved or the degree to which this approach provided better value than other approaches. Contracting officials within the Coast Guard said their guidance from Coast Guard management had been to develop an approach that would (1) allow a single systems integrator to create a “system of systems” approach and (2) achieve potential improvements in operational effectiveness and minimize total ownership costs. Contracting officials told us that with these parameters in mind, they conducted a limited evaluation of several contracting alternatives by meeting informally with government and private sector officials about the Coast Guard’s proposed approach and meeting internally to discuss possible strengths and weaknesses of three approaches. Documentation detailing the basis for the decision—the depth of the analysis performed, the factors considered, the expertise sought (people contacted), and the compelling reasons why the current approach was chosen—was not prepared prior to the approach’s approval by Coast Guard acquisition officials. Without thorough documentation in this regard, the rigor of the Coast Guard’s analysis of the approach is unknown. When we initially reviewed the Deepwater Project’s proposed contracting approach in March 2000, we expressed concerns about whether it could keep costs from rising and ensure good performance once the contract is awarded. We discussed the Coast Guard’s approach with contracting experts from both the public and private sectors who, in addition to their concern about the Coast Guard’s ability to control costs, also raised concerns about certain management-related issues, which we cover later in this report.
Here, we focus on the cost-related issues of concern, namely the potential absence of competition for subcontracts in the project’s later years and the heavy reliance on a single systems integrator to procure the entire system. OMB guidance recognizes the value of competition as a lever to keep contract costs down. The benefits of competition are present in the contract’s early years, as are other approaches for controlling costs. For the initial 5-year award-term contract, prices for equipment and software to be procured are based on competition; and when the contract with the systems integrator is awarded, these prices will be fixed, according to Coast Guard officials. The Coast Guard also hopes to control costs by encouraging the use of commercially available (nondevelopmental) equipment. Prices for such equipment can be determined on the basis of previous orders from other buyers and by the use of fixed-price contracts. Beyond the first 5-year award-term contract, however, the benefits of competition are less certain. In a practical sense, the opportunity for competition in the project’s out years is diminished because the systems integrator will likely contract with those suppliers that were part of the team putting together the offer rather than opening the contract to a wider set of offerors. Coast Guard officials currently believe that a profit motive could drive the systems integrator to open competition to a wider set of offerors. Although this is possible, it would be easier to integrate equipment or subsystems acquired from a team member since equipment will be procured based on the plan developed by the team. A Coast Guard analysis of the same issue reached the same conclusion. We believe that this potential lack of competition reduces the normal marketplace control on price and subjects the Coast Guard to situations in which the supplier could potentially drive up project costs.
The Coast Guard is attempting to develop strategies for encouraging competition among suppliers, and thereby controlling costs, in subsequent 5-year award-term contracts. One approach involves providing incentives for the systems integrator to submit “competitive proposals”—that is, proposals that are reasonably priced—beyond the first few years of the contract. Contracting experts brought in by the Coast Guard discouraged this approach, saying such incentives usually have limited effectiveness. As a result, the Coast Guard now indicates it will evaluate the systems integrator’s performance in minimizing total ownership costs as part of its decision on whether to renew the systems integrator’s contract. The Coast Guard hopes that doing so will encourage the systems integrator to use competition. At this point, it is not clear what effect this evaluation would have. A second approach the Coast Guard plans to take is to negotiate a ceiling on the amount that will be paid for deepwater equipment in the 5-year period covered by a follow-on, award-term contract. This is a continuation of the approach being taken for the first 5-year contract. However, the ceiling could be waived if the project’s schedule or requirements are changed. Given the funding-related concerns discussed earlier, the potential for such changes cannot be easily dismissed. If such changes occur, the Coast Guard will rely on the systems integrator to negotiate prices with its vendors in a sole-source environment. Although doing so is a valid alternative for pricing a contract, a sole-source environment leaves little leverage in negotiations and therefore carries a higher risk of goods being overpriced. This approach also carries the burden of obtaining and reviewing cost and pricing data from suppliers and the systems integrator. Another cost-related concern involves dependence on the systems integrator for a deepwater system that will take 20 or more years to acquire.
This dependence is both one of the main strengths of the approach and one of its main weaknesses. On the positive side, if all aspects of the approach work well, the systems integrator will form a partnership with the Coast Guard and provide the technical expertise to assemble an integrated system and the continuity needed to bring a long-term project to a successful conclusion. However, the approach could establish the integrator as a monopoly supplier, substantially constraining the Coast Guard’s options or leverage. The Coast Guard could be in a weak position to negotiate aggressively on price because of its reluctance to take on the risks of increased costs and other problems associated with switching systems integrators. For example, if the systems integrator’s performance is unsatisfactory, a new systems integrator will have to step in to implement someone else’s partially completed design; or the Coast Guard will have to adopt a more traditional approach of buying individual classes of ships or aircraft, according to Coast Guard officials. The learning curve and other complications involved in such a midcourse adjustment could be dramatic and would probably be very costly. As our work progressed, we expressed our concerns to the Coast Guard immediately rather than waiting until the end of our review. As we raised these concerns, the Coast Guard took additional steps to study them. However, some of these efforts are still under way, and decisions have not been made on all specific measures to be incorporated into the acquisition plan and the RFP for the Deepwater Project. In September 2000, we urged the Coast Guard to take several actions to deal with the risks of the contracting strategy it had selected. We suggested that the Coast Guard identify and evaluate all viable contracting approaches, discuss the approaches with contracting experts, and document the results.
In particular, we stated that the Coast Guard should be open to options that would maximize the benefits of competition in later years while still maintaining the interoperability of the system. We also urged the Coast Guard to convene an independent panel of contracting experts from the government and private sector to review the proposed deepwater contracting approach or whatever approach the Coast Guard selected. We felt that given the contract’s uniqueness and the risks it poses, a rigorous review by a widely represented panel of experts was essential both to validate the Coast Guard’s approach and to recommend potential mitigating measures to strengthen it. In December 2000, the Coast Guard proposed a limited peer review—one involving experts only from DOD and GAO and consisting of a 3-hour process (a 1-hour presentation on the contracting approach that the Coast Guard plans to use, followed by a 2-hour question-and-answer session). Because of our concerns about the limited nature of this approach, the Coast Guard—with our advice and assistance—expanded the panel of experts and adopted a more extensive, structured format. OMB officials share our concerns about the contracting approach, and they support the need for a peer review and a careful consideration of issues raised before the RFP and acquisition plan are finalized. We also urged the Coast Guard to provide the experts with key documentation, namely the acquisition plan and excerpts from the draft RFP, prior to the meeting. Doing so would better ensure that the members of the panel have an objective basis for evaluating the Coast Guard’s contracting approach. However, the Coast Guard decided not to provide such documents to the panel members in advance, but instead provided them with selected excerpts from the acquisition plan. The entire acquisition plan and RFP were available to the panel upon request.
Subsequent to our discussions with Coast Guard management, the Coast Guard contracted with two outside consultants to review the proposed contracting approach for the Deepwater Project. One consultant was tasked to develop and recommend a contracting strategy for the deepwater system, given the Coast Guard’s requirement for an integrated “system of systems” solution. The consultant determined that the Coast Guard should continue with the approach it selected. However, citing cost increases and limited cost negotiation leverage as weaknesses, the consultant identified risk mitigation strategies, such as including in the RFP requirements for increasing competition over the mid to long term. A second consultant evaluated the draft RFP. He noted that this was one of the most complex contracts he had ever seen and suggested that it be simplified. For example, he suggested that the Coast Guard consider using incentives as part of the provisions of the award-term contract rather than as a separate item. He also observed that the success of the contracting approach is dependent on the Coast Guard receiving the planned funding stream. To address the concerns raised by the consultants and to provide some time to respond to additional concerns that might be raised by the peer review, the Coast Guard has altered its planned date for issuing the deepwater RFP. The Coast Guard now plans to release the RFP in June 2001, or about 2 months later than its initial schedule. The Coast Guard is still responding to comments from its consultants and industry. Making necessary revisions to the RFP before giving it to contractors is important, because the RFP represents the contractual basis upon which the Coast Guard and the contractor will develop their relations. Also, changing the RFP after it has been issued could result in contractors having to amend their offers. At this point, we do not know what changes the Coast Guard might adopt. 
Until adequate steps are in place to address concerns expressed by its consultants and by members of the peer review, we believe the risk related to cost control remains high. The Coast Guard’s success in this area also rests on how well it develops other sound strategies and options for managing potential problems. These strategies are discussed in the next section. Another area of potential risk involves the overall management and day-to-day administration of the contract. In this regard, the Coast Guard’s performance during the planning phase has been generally excellent. During this phase, the Coast Guard took several innovative steps to establish and communicate what it wanted contractors to do, and it had adequate processes and trained staff in place to carry out the management tasks that needed to be done. As the project moves into the procurement phase, these challenges become more difficult, in large part, because the scope of work is so much greater and the contracting approach is unique and untried. It is too early to know if the Coast Guard can repeat the same strong performance on this much larger scale, because plans for managing and administering the deepwater contract are still being developed. The major challenges the Coast Guard faces involve developing and implementing plans for (1) establishing effective human capital practices, (2) having key management and oversight processes and procedures, (3) forming close relationships with subcontractors, (4) funding useful segments of the project, (5) tracking data to measure contractor performance, and (6) having an exit strategy and a contingency plan in the event of poor performance by the systems integrator. In the planning phase of the project, the Coast Guard applied a number of “best practice” techniques recommended by OMB and others.
Among them are the following:

The Coast Guard gave contracting teams mission-based performance specifications, such as the ability to identify small objects in the ocean, rather than asset-based specifications, such as how large a cutter should be, and then it gave them leeway in deciding how to meet these specifications. Specifying asset-based criteria is the more traditional approach.

The Coast Guard established a management structure of Coast Guard and contractor teams for rapidly communicating technical information. Among other things, these teams assess each contractor’s evolving proposal to determine if it will meet contractual requirements and identify issues that could potentially have unacceptable effects. Communication mechanisms include an Internet Web site.

The Coast Guard highlighted the use of “open-system architecture” and emphasized the use of commercially supported products in the equipment to be acquired. This means that communication and computer equipment can be more easily replaced and upgraded without proprietary software or other unique requirements.

The Coast Guard also had effective procedures and a management structure in place for this phase of the project. Using a model developed by Carnegie Mellon University, we assessed the procedures and structure in eight key areas—planning, solicitation, requirements development and management, project management, contracting and oversight, evaluation, transition and support, and risk management. Within these eight areas, we examined 112 key practices and found no significant weaknesses. In fact, the Coast Guard’s procedures and management structure for these eight areas were among the best of all the federal agencies we have evaluated using this model. This provides a good foundation for developing and implementing sound procedures for the next phase of the project; however, in many ways, the challenges will be more difficult.
As the project moves from the planning phase to the procurement phase, the Coast Guard must ensure that it can perform project management and contract administration activities at a high level, given the complexity and scope of the contract and its uniqueness. Under the Coast Guard’s planned approach, the systems integrator will be responsible for program management required to implement the deepwater system, and the Coast Guard will continuously monitor the integrator’s performance. The Coast Guard plans to implement, or require the systems integrator to implement, many management processes and procedures based on best practices, but these practices are not yet in place. Because much work remains to be accomplished in this area, the full effectiveness of the Coast Guard’s approach cannot be assessed in the short term. The following are the key areas that will need to be addressed. A critical element to the ultimate success of the project is having enough trained and knowledgeable Coast Guard staff to conduct management and oversight responsibilities. Project officials view this as a high-risk area and one of the most important aspects of the project. The Coast Guard hopes to have its full complement of staff needed for fiscal year 2002 by the time the contract is awarded. Currently, the Coast Guard has 69 personnel devoted to the Deepwater Project. According to project officials, the current project staff is highly qualified—most have advanced degrees in management, engineering-related, and other specialty fields. Moreover, the Coast Guard has made a conscious effort to maintain the continuity of its project staff by not rotating its military personnel on the project to new positions every 4 years as it normally does. In addition, the Coast Guard has assigned a Project Executive Officer to head the project. Project officials have identified the need for 62 additional positions to manage the project beginning in fiscal year 2002. 
In addition, the officials plan to hire civilians with acquisition and contracting experience and to use support contractors for many activities. The Coast Guard is also in the process of developing a training plan for its project staff; it hopes to complete the plan later this year and ensure that all staff meet the training requirements for their respective positions by the time the contract is awarded. Under its deepwater acquisition approach, the Coast Guard will rely heavily on the systems integrator to establish a management organization and systems necessary to manage the major subcontracts for deepwater equipment. The systems integrator will be required to apply an integrated product and process development approach, including teams consisting of Coast Guard, contractor, and major subcontractor personnel who are responsible for specific areas of the program. Also, the systems integrator will be responsible for developing key systems and processes, such as risk management, quality assurance, test and evaluation, and earned-value management systems. In addition, the Coast Guard is developing a program management plan to oversee the systems integrator. The major components of this plan are project planning; organization; and detailed planning documents, including individual plans for contract management, information management, and financial management. Although the Coast Guard plans to complete the program management plan before the contract is awarded, project officials told us that some individual plans, such as configuration and integrated logistics support plans, are dependent upon the system selected and cannot be finalized until after the contract is awarded. Because the use of major subcontractors to provide high-value equipment will be such an integral part of the Deepwater Project, good relations and communications between the Coast Guard, the systems integrator, and the major subcontractors will be very important.
Our past review of best practices on this issue suggests that leading organizations establish effective communications and feedback systems with their subcontractors to continually assess and improve both their own and supplier performance. These practices not only helped key subcontractors to fully understand the firms’ goals, priorities, and performance assessments but also helped the firms to understand subcontractors’ ideas and concerns. Our experience in evaluating DOD acquisition programs showed that it was important to establish such relationships not only with prime contractors but with subcontractors as well. For example, supplier relationships on one program we reviewed reflected DOD’s traditional role of distancing itself from subcontractors. This role was traced, in part, to the fact that DOD had not articulated a particular subcontractor policy to guide program managers. We recommended—and DOD agreed—that DOD establish a policy and incorporate it into acquisition plans for major procurements. The Coast Guard has developed no general policy on subcontractor relationships. Major subcontractors will be part of the integrated product and process development teams, and the Coast Guard plans to perform quality assurance activities at subcontractors’ facilities. However, according to project officials, the program management and quality assurance plans have not been completed, and it is not clear at this time what the quality and nature of the Coast Guard’s relationship with subcontractors will be. OMB Circular A-11, Part 3, emphasizes that each useful segment (e.g., an entire ship or an entire aircraft) of a capital project should be fully funded in advance of incurring obligations. The Coast Guard has told its contractors to develop their deepwater schedules by using full funding of useful segments rather than incremental funding.
Coast Guard contracting officials have said that they plan to obtain full funding for a ship or aircraft before proceeding with its procurement. However, if deepwater plans need to be adjusted due to a shortfall in funding or changes in program requirements, according to the officials, one option could be to develop requests that fund only part of a ship or aircraft. We found in a review of earlier Coast Guard budget justifications that the Coast Guard had proceeded with some capital projects before full funding was obtained. According to OMB, proceeding with such incremental funding could result in schedule delays or higher costs for capital projects. As the Coast Guard proceeds with the Deepwater Project, it should ensure that its budget requests are consistent with OMB guidelines on full funding of useful segments to avoid attendant delays and increased costs. The Coast Guard plans to award follow-on, award-term contracts on the basis of factors such as improving operational effectiveness and minimizing total ownership costs. To measure the performance of the systems integrator in achieving these goals (as a basis for awarding the follow-on contracts), the Coast Guard will use a simulation model to measure improvements in operational effectiveness and will compare the contractor’s actual cost reductions with its proposed costs. According to Coast Guard officials, they will develop a new baseline for these factors on the basis of the winning contractor’s plans and the most current information on deepwater equipment after the contract is awarded in early 2002. Coast Guard officials told us that they plan to use a subjective rating system to assess the contractor’s performance rather than use database benchmarks for improvements in operational effectiveness and total ownership costs.
According to Coast Guard officials, setting such benchmarks may be difficult because performance data may reflect factors that did not result from actions of the contractor. For example, improved intelligence on drug smugglers could result in improvements in operational effectiveness. Also, changes in fuel costs could cause operational costs to increase. Because a host of factors could cause changes in these data, it will be important for the Coast Guard to carefully track these measures and accurately identify and segregate reasons for the changes that occur. Doing so would better show the results of significant federal investments in ships and aircraft. Given the Coast Guard’s heavy reliance on a single systems integrator for so many facets of the Deepwater Project, the agency is at serious risk if—for whatever reason—the systems integrator does not perform as expected or decides to walk away from the project on its own. For example, if the systems integrator’s performance falls short of expectations, the Coast Guard will face numerous options, ranging from closer oversight to termination of the contract. Faced with these options, having a carefully thought-out contingency plan, which identifies and analyzes the implications of potential actions, would solidify the Coast Guard’s ability to respond effectively. Several high-level federal contracting officials echoed this position, saying that given the circumstances for this particular project, exit strategies and other means to deal with potential poor performance by the systems integrator were highly desirable. In the extreme case—where the contractual relation with the systems integrator is terminated—an exit strategy identifying possible alternatives, consequences, and transition issues would be important.
In this regard, contracting officials with the project told us that the contract will provide several “off-ramps” and that the Coast Guard has basically two options if it were to terminate the systems integrator: (1) obtain a new systems integrator and a new set of subcontractors as well or (2) revert to the traditional “stovepipe” procurement approach of procuring a single class of vessels and aircraft at a time. These officials said that from a project management standpoint, having a strategy to deal with options like these is important; and the agency is currently documenting, with assistance from a contractor, the pros and cons of each exit strategy. However, the officials noted that specific, detailed plans to implement the options would not be developed until it was known that the Coast Guard planned to terminate the contract. The risks associated with incorporating new unproven technology into the first part of the Deepwater Project are minimal, in part, because of the Coast Guard’s emphasis that industry teams use technology that has already been proven in similar applications. Our main concern is the absence of criteria to measure the risk of the new technology that needs to be developed, both now and in the project’s later years. Too little assessment of the risks associated with developing new technology has caused problems on many acquisition projects, both in government and the private sector. OMB’s Capital Programming Guide (A-11) states, “Probably the greatest risk factor to successful contract performance is the amount of technology development that is planned for the procurement.” Minimizing a technology’s unknowns and demonstrating that it can function as expected significantly reduce such risk. We have found that leading commercial companies use disciplined processes to demonstrate—before fully committing to product engineering and development—that technological capability matches project requirements.
Waiting to resolve these problems can greatly increase project costs—at least 10-fold if the problems are not resolved until product development, and as much as 100-fold if they are not resolved until after production begins. The Coast Guard has taken steps to minimize these risks. One major step was to emphasize in contracting documents to industry teams that, to the maximum extent possible, proposed assets, systems, equipment, and components are to be nondevelopmental or commercially available (off-the-shelf) items. Our review showed that the teams’ preliminary proposals included many commercial off-the-shelf and nondevelopmental items currently operating in the commercial or military environment. However, some proposed equipment included developing technology that has not yet been proven. Generally, these developing technologies are at the prototype level and are undergoing performance testing and evaluation prior to contract award to commercial and military customers. The Coast Guard’s steps are helping to keep the risk of unproven near-term technology at a low level. We measured the maturity level for the project’s most critical near-term technologies (those introduced in the first 7 years of the project), using an approach developed by the National Aeronautics and Space Administration (NASA). We applied this process, referred to as technology readiness levels (TRL), to 18 technologies identified as critical by the 3 contractor teams and the Coast Guard. We determined—and the Coast Guard concurred—that by the time the contract is awarded, 16 of the 18 are expected to be at a level of acceptable risk. The remaining two technologies will be slightly higher in risk; but in one case, an early prototype is being tested, and in the other, a proven backup system has been identified that, if needed, could replace the technology with no effect on the project’s cost, schedule, or performance.
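The TRL-based screening described above amounts to comparing each technology’s projected maturity level against an acceptable-risk threshold (the Air Force Research Laboratory criteria discussed later in this report treat TRL 7 as the threshold for entering production). The sketch below is illustrative only: the technology names and scores are hypothetical placeholders, not the actual 18 deepwater items, and the `screen` function is our own construction, not a Coast Guard or NASA tool.

```python
# Illustrative sketch of a TRL-based risk screen (hypothetical data).
# NASA-style TRLs run from 1 (basic principles observed) to 9
# (proven through successful operations). Per the Air Force Research
# Laboratory criteria cited in this report, a technology is treated
# here as acceptable risk for production at TRL 7 or above.

ACCEPTABLE_TRL = 7

def screen(technologies: dict, threshold: int = ACCEPTABLE_TRL):
    """Partition technologies into acceptable- and elevated-risk lists."""
    acceptable = [name for name, trl in technologies.items() if trl >= threshold]
    elevated = [name for name, trl in technologies.items() if trl < threshold]
    return acceptable, elevated

# Hypothetical projected TRLs at contract award (invented examples).
projected = {
    "surface search radar": 8,
    "integrated comms suite": 7,
    "hull condition sensors": 6,        # e.g., early prototype under test
    "automated small-boat launch": 5,   # e.g., proven backup identified
}

acceptable, elevated = screen(projected)
print(f"{len(acceptable)} of {len(projected)} at acceptable risk")
print("elevated risk:", elevated)
```

A screen like this only flags where maturity falls short; as the report notes, each shortfall still needs a mitigation path, such as a prototype test program or a proven backup system.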
Entering phase 2 of the project with critical technologies at a high level of maturity or with proven backup systems significantly lowers risk and the likelihood of delays, which in turn helps to control program costs. Although technological risks appear minimal in the near term, the Coast Guard lacks criteria for assessing the maturity of technology in the longer term. The Coast Guard has a risk-management plan in place, as well as a process to identify, continuously monitor, and assess technology risks; and the resources the Coast Guard expects to commit to the task during phase 2 appear to be adequate. What the process lacks, however, is uniform and systematic criteria for judging the level of technology maturity and risk, such as the TRL ratings in the approach we adopted from NASA. In contrast, since January 2001, DOD has required the use of TRL criteria as a tool for measuring the technology readiness of its procurement projects. Such criteria are important for monitoring both continued development of the technologies we examined and the development of other technologies that will not be used until later in the project. As of July 2000, when we completed our TRL assessment, half of the 18 deepwater key technologies we reviewed were still below the maturity level considered an acceptable risk for entering production. Before the contract is awarded, the Coast Guard must assess the readiness of these technologies. In addition, the industry team proposals include numerous technologies that are planned for deepwater system introduction from 2009 to 2020—well after contract award. Many of these future technologies will not be proven at contract award and will need to be assessed for technology risk before acceptance. The Coast Guard plans to have a test and evaluation master plan in place by June 2001, but it is not planning to include a requirement for using TRL criteria to measure technology readiness in that plan. 
The Coast Guard’s acquisition strategy and contracting approach for the Deepwater Project are innovative. The agency plans to use the full flexibility provided by congressional reforms of the federal acquisition process to avoid the all too frequent failures of major federal acquisitions in the past. Despite the numerous commendable innovations during the concept development phase, we remain concerned that considerable risks remain with its chosen approach for the procurement phase of the acquisition. The Coast Guard’s contracting approach for the production phase of the deepwater acquisition is unique—relying on a single contractor to manage, build, and integrate the modernization of its entire deepwater fleet over a period likely to exceed 2 decades. The key promise of the approach is achievement of a fully integrated system that both maximizes improvements in operational effectiveness and minimizes total ownership costs (including not only the acquisition, but operation, maintenance, and support costs of the entire system over a 40-year period). While we recognize the merit of exploring innovative and even unique approaches, we believe the selected approach puts at risk precisely the purported benefits of the approach—that is, maximizing operational effectiveness and minimizing operational costs. Development of this unique and untried approach on such a large scale and for an acquisition so critical to the Coast Guard’s ability to perform every aspect of its deepwater mission puts a heavy burden on the Coast Guard. Not only would it be reasonable to expect a rigorous effort to identify and mitigate all the major potential risks associated with a totally new approach, but the Coast Guard would also need to ensure that other approaches were fully evaluated. Unfortunately, we found that the Coast Guard has yet to accomplish either.
At our urging, the Coast Guard has only recently sought to set up a systematic effort to identify and mitigate risks associated with its chosen approach, and the evaluation of alternative approaches remains limited and poorly documented. We remain concerned that the Coast Guard will soon be making critical decisions regarding the Deepwater Project, namely issuing the RFP in less than 2 months and awarding a contract to procure deepwater equipment in less than a year. Yet, significant risks still exist, and the Coast Guard has not completed actions to fully address them. The unique contracting approach of relying on a single systems integrator to manage, acquire, and integrate all deepwater assets and capabilities poses two major risks, both of which still remain. First, the agency’s choice of a contracting approach is now inextricably tied to a projected deepwater funding level of over $500 million annually for the next 2 to 3 decades. Attaining sustained funding for the project at this level looms as the major potential problem. By choosing to proceed with a funding scenario that appears to be unrealistically high in the face of budget projections that are substantially less, the Coast Guard is increasing the risk that the project will incur future cost increases and schedule stretch-outs. Second, the Coast Guard’s reliance on a single contracting team raises serious questions regarding the Coast Guard’s ability to control costs and ensure performance once the contract is awarded. Its strategy for adequately controlling costs in the project’s later years is still being worked out and requires careful attention before the RFP is issued. Similarly, the Coast Guard is still developing plans for managing the contract, and much remains to be done. These are risks that need to be well understood and resolved before the RFP is issued.
Moving ahead before addressing the major risks and evaluating options for addressing them, potentially including an evaluation of alternative approaches, would be unwise. The Coast Guard’s acquisition approach for the Deepwater Project—and its reliance on a large and sustained funding level over a long period—makes the Congress’ next decision on the project crucial as well. This decision goes far beyond deciding what Coast Guard equipment needs to be replaced or modernized. The Congress is in effect being asked to provide the first installment based on the Coast Guard’s spending plan for the project, which is essentially dependent on a continuous funding stream in excess of $500 million annually for decades. Allowing the project to proceed as planned and then later reducing that funding level significantly would result in higher system costs and reduced system performance. We think this is the central risk posed by the current approach, and that the Congress needs to make its decision about providing funding for the project this year with clear knowledge that the Coast Guard’s chosen contracting strategy depends heavily on a sustained high level of funding for at least the next 20 years. We recommend that before the Coast Guard issues the RFP for the Deepwater Project, the Secretary of Transportation should ensure that a realistic level of funding, based on OMB budget targets, the Coast Guard’s capital planning process, and congressional guidance, is incorporated into the RFP and used by contractors as the basis for designing their proposed systems; and direct the Commandant of the Coast Guard to carefully consider and incorporate recommendations, if any, made by the peer review panel into the deepwater acquisition plan and RFP or, if the peer review panel finds serious and unmitigated risks in the Coast Guard’s approach, evaluate alternative contracting strategies that could address the risks.
Before the Coast Guard signs a contract with the systems integrator for the Deepwater Project, we recommend that the Secretary of Transportation should direct the Commandant of the Coast Guard to address the following issues: complete development of the Program Management Plan, including plans and procedures to (1) facilitate relations with subcontractors, (2) ensure that the project is adequately staffed and that the staff is properly trained to perform their respective project management responsibilities, and (3) cover actions to be taken in the event that the Coast Guard decides not to continue its contract with the systems integrator; complete plans for ensuring that annual budget requests for the Deepwater Project are for useful segments and that a mechanism is in place for reporting to OMB and the Congress, as part of its annual budget submission, the progress that is made in achieving baseline goals of minimizing costs and improving operations due to investments in funding the Deepwater Project; and select a process, such as the technology readiness levels approach, for assessing the technology readiness of equipment and major systems to be delivered. The success of the contracting approach the Coast Guard selected for the Deepwater Project relies heavily on the Coast Guard being able to sustain the funding level around which the contractor’s proposal is based. Substantial and prolonged funding below that level will lead not only to cost increases and schedule slippages, but also to situations in which the Coast Guard’s ability to achieve its missions may be jeopardized. To avoid these situations, the Congress should have the opportunity to weigh in on the affordability of the project before the contract is awarded. 
Therefore, the Congress may wish to direct the Secretary of Transportation to (1) ensure that any funding scenario included in the RFP is based on OMB budget targets as well as discussions with appropriate congressional committees, (2) submit a report to Congress justifying the funding scenario and explaining any variations from the funding projections of OMB, and (3) wait 30 calendar days from submission of the report before issuing the RFP. We provided a draft of this report to the Department of Transportation and the Office of Management and Budget for their review and comment. In commenting on our draft report, DOT disagreed with our recommendation to incorporate more realistic levels of funding for the project into the RFP based on OMB’s budget targets. In support of its position, DOT noted that OMB out-year funding targets have been converging with estimated project requirements during the last year, and it believes that OMB targets will change in the future to better match project requirements of $500 million annually. DOT’s position in this regard is counter to good capital planning and OMB guidance that says that agencies should plan projects within available funding levels. As noted in the report, the Coast Guard faces the real possibility of a cumulative funding shortfall of almost half a billion dollars, or over 20 percent of the total funding needs for the project’s first 5 years. Ultimately, by the Coast Guard’s own admission, funding levels significantly below project requirements would most likely lead to cost increases and schedule slippages and jeopardize the agency’s ability to achieve its missions. DOT agreed with two recommendations and did not comment on two others. The agency agreed to evaluate and incorporate into the RFP as appropriate recommendations from the peer review panel on its contracting approach for the Deepwater Project. 
Also, the agency agreed to complete development of the Program Management Plan prior to awarding the contract for phase 2. DOT had no comment on two other recommendations, which focused on (1) ensuring that its annual budget requests are for useful segments and that a mechanism is in place for reporting to OMB and the Congress on the progress in achieving baseline project goals and (2) selecting a process for assessing the technology readiness of equipment and major systems to be delivered. DOT’s written comments and our response are in appendix V. We met with officials from OMB, including the Chief, Transportation Branch. OMB concurred with our recommendations but believed that additional actions may be warranted. OMB has concerns about the deepwater acquisition strategy and believes that a broader evaluation of alternative strategies is needed. The agency indicated that the Coast Guard has chosen an approach that relies on a required funding level each year, and OMB has the same concerns that we do about the potential impact on the project if funding does not materialize as expected. OMB is also concerned that this approach sets up a situation where the administration and the Congress would have to fund the project in later years at the planned level, regardless of other competing priorities. Essentially, OMB believes that the deepwater funding strategy transfers the risk of program failure to external sources, such as the Congress. According to OMB, future funding levels cannot be guaranteed, and it would be inappropriate for the Coast Guard to use funding levels in the RFP that are not consistent with OMB’s targets. Under the current acquisition approach, if sustained funding is substantially less than planned, the Coast Guard would have to rebaseline the project in a sole-source environment, a situation that could increase project costs even further. 
Finally, OMB raised these concerns at the peer review panel meeting; however, OMB is not optimistic that the Coast Guard will sufficiently recognize and adequately address its concerns prior to issuing the RFP. We generally share OMB’s concerns and have made many of the same points throughout our report. We plan to provide copies of this report to the Honorable Norman Y. Mineta, Secretary of Transportation; Admiral James M. Loy, Commandant of the Coast Guard; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. We will also send copies to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-2834 or Randall Williamson at (206) 287-4860. Other key contributors to this report are listed in appendix VI. This report examined the major risks associated with the Deepwater Project and the progress the Coast Guard has made in addressing them. Our work focused on four risks: (1) planning the project around annual funding levels far above what the administration has told the Coast Guard it can expect to receive, (2) keeping costs under control in the contract’s later years, (3) ensuring that procedures and personnel are in place for managing and overseeing the contractor once the contract is awarded, and (4) minimizing potential problems with developing unproven technology. To assess the funding risk, we reviewed OMB Circular No. A-11, Part 3 (Planning, Budgeting, and Acquisition of Capital Assets); OMB’s Capital Programming Guide; the Coast Guard’s 5-year capital plan; and the agency’s past budget requests for capital projects. We also reviewed various deepwater planning documents, including the risk management plan, the draft acquisition plan, and requests for proposal. We reviewed prior Coast Guard appropriations and pertinent laws affecting the Coast Guard’s budget. We also reviewed DOT Inspector General reports on the Deepwater Project.
We interviewed DOT and Coast Guard officials involved in forming the Coast Guard’s budget, including the Coast Guard’s Director of Resources. We also interviewed OMB budget officials and officials from the Office of Federal Procurement Policy. To assess the risks of controlling costs, we reviewed the Federal Acquisition Regulation; OMB Circular A-11, Part 3 (Planning, Budgeting, and Acquisition of Capital Assets); and OMB’s Best Practices for Multiple Award Task and Delivery Order Contracting. We reviewed the Coast Guard’s draft acquisition plan and RFP for the deepwater phase 2 contract, reports that the Coast Guard received from consultants it hired to evaluate its acquisition strategy, and numerous Coast Guard documents regarding how the agency planned to acquire deepwater assets. We interviewed numerous contracting officials, including Coast Guard contracting officials, officials from OMB’s Office of Federal Procurement Policy, the former Deputy Under Secretary of Defense for Acquisition Reform, the Deputy Director for Defense Procurement, the Chief of the Internal Revenue Service’s Contracting Branch, and an official of a private consulting firm. We also drew from our extensive agencywide contracting experience in reviewing DOD and other agency procurements. To determine the risk involved in managing the contract, we assessed the Coast Guard’s project management during the planning phase of the Deepwater Project and identified challenges the Coast Guard will face during the procurement phase. To identify best practices in contract management and administration, we reviewed OMB Circular No. A-11, Part 3, and drew from our extensive agencywide contracting experience in reviewing DOD and other agency procurements. We reviewed the Coast Guard’s Project Management Plan, Risk Management Plan, and other management plans to identify the Deepwater Project’s organizational structure and key management procedures used during the planning phase.
We assessed the effectiveness of these procedures and structure using Carnegie Mellon University’s Software Engineering Institute’s Software Acquisition Capability Maturity Model® and its Software Capability Evaluation method. Although the model is specifically designed to determine software acquisition process maturity, it can be applied to the acquisition of any type of asset (ships, aircraft, etc.). The model ranks organizational maturity according to five levels. Maturity levels 2 through 5 require the verifiable existence and use of certain acquisition processes, known as key process areas. Satisfying the requirements of maturity level 2 demonstrates that an organization has the policies needed to manage a project and the procedures needed to implement those policies. We evaluated the acquisition processes for two Deepwater Project matrix product teams against all seven level-2 areas (planning, solicitation, requirements development and management, project management, contracting and oversight, evaluation, and transition and support) and one level-3 area (risk management). Within these eight key process areas, we examined 112 key practices to determine their strengths or weaknesses. We reviewed the Coast Guard’s draft acquisition plan and RFP for the Deepwater Project phase 2, comments that the Coast Guard received from the consultants it hired to evaluate its acquisition strategy, and other documents to identify how the agency plans to manage and administer the procurement phase of the Deepwater Project. We discussed these management plans with Coast Guard contract and Deepwater Project officials, the DOD Deputy Director for Defense Procurement, the Chief of the Internal Revenue Service’s Contracting Branch, and a representative from a private consulting firm.
To assess the risk of using new technologies, we asked each of the three competing deepwater contracting teams to first develop a list of the most critical technology and keystone systems being proposed as “near-term” deepwater contract deliverables to be introduced during the first 7 years (2002 through 2008) after contract award. Eighteen technologies and systems were identified, including assets and components representing deepwater aviation, surface, and command, control, communications, computers, intelligence, surveillance, and reconnaissance concept solutions. We then asked the contracting teams to assess “technology readiness” for each of the items they identified on their lists using NASA’s technology readiness level (TRL) criteria. TRLs provide a standard definition of nine levels of technology maturity that can be used to measure technology readiness in terms of the type of demonstration that must be achieved; the scale (form, fit, and function) of the asset; and the operational environment in which the demonstration is performed. We asked the contracting teams to score technology readiness at three points in the deepwater acquisition process—July 2000, April 2001, and January 2002. We focused our analysis on the technology readiness level at the date of contract award, January 2002. We independently met with program managers from each of the three industry teams to discuss the status of each technology/keystone system, identify the rationale for the initial TRL score assessment, and determine whether adjustments in the TRL score were necessary. On the basis of these discussions, we made adjustments to the initial TRL scores that the competing contractors agreed were consistent with the TRL criteria. We then crosswalked the TRL scores to project risk criteria established by the Air Force Research Laboratory that predict project risk on the basis of technology readiness at program decision points.
Specifically, the Laboratory established that a technology/key system should be at TRL 7 at the time the decision is made for a program to enter the Engineering and Manufacturing Development Phase—a phase we believe is comparable to the January 2002 deepwater contract award for “near-term” technology and keystone systems.

Appendix II: Current Deepwater Cutters and Aircraft

This is the largest multipurpose cutter in the fleet. It has a planned crew size of 167, a speed of 29 knots, and a cruising range of 14,000 nautical miles. The Coast Guard operates each cutter for about 185 days a year, and it can support helicopter operations. This cutter has a planned crew size of 100, a speed of 19.5 knots, and a cruising range of 10,250 nautical miles. The Coast Guard operates each cutter for about 185 days a year, and it can support helicopter operations. This cutter has a planned crew size of 75, a speed of 18 knots, and a cruising range of 6,100 nautical miles. The Coast Guard operates each cutter for about 185 days a year, and it can support operations of short-range recovery helicopters. This patrol boat has a crew size of 16, a speed of 29 knots, and a cruising range of 3,928 nautical miles. The Coast Guard operates each for about 1,800 hours a year. The 213-foot medium-endurance cutter, commissioned in 1944, has a planned crew size of 64. The 230-foot medium-endurance cutter, commissioned in 1942, has a planned crew size of 106. The 282-foot medium-endurance cutter, commissioned in 1971, has a planned crew size of 99. This is the largest aircraft in the Coast Guard’s fleet. It has a planned crew size of seven, a speed of 290 knots, and an operating range of about 2,600 nautical miles. The Coast Guard operates each of these aircraft for about 800 hours every year. This is the fastest aircraft in the Coast Guard’s fleet. It has a planned crew size of five, a speed of 410 knots, and an operating range of 2,045 nautical miles.
The Coast Guard operates each for about 800 hours a year. This helicopter is capable of flying 300 nautical miles off shore, remaining on scene for 45 minutes, hoisting six people on board, and returning to its point of origin. The Coast Guard operates each for about 700 hours a year. It has a planned crew size of four, a maximum speed of 160 knots, and a maximum range of 700 nautical miles. This helicopter is capable of flying 150 nautical miles off shore. It has a crew allowance of three, a maximum speed of 165 knots, a maximum range of 400 nautical miles, and a maximum endurance of 3.5 hours. The Coast Guard operates each for about 645 hours a year. In our 1998 study on the Deepwater Project, we found that the Coast Guard had substantially understated the service life of its aircraft and, to a lesser extent, its ships. For example, in its project justification prepared in 1995, the Coast Guard estimated that its aircraft would need to be phased out starting in 1998. However, in 1998, a Coast Guard study showed that with proper maintenance and upgrades, its aircraft would be capable of operating until at least 2012 and beyond. Also, a September 1999 study revised earlier estimates and concluded that the Coast Guard’s deepwater cutters have a service life until 2007 and beyond, assuming that adequate funds remain available for maintenance support and service life upgrades. Shown below are the differences in the service life of its deepwater ships and aircraft between the initial estimates (in the 1995 justification) and later studies.

During Phase 1 of the Deepwater Project

Litton/Avondale Industries (Systems Integrator)
Boeing-McDonnell Douglas Corporation
John J. McMullen & Associates, Inc.
DAI, Inc.
Raytheon Systems Company
Lockheed Martin Naval Electronics and Surveillance Systems (Systems Integrator)
Lockheed Martin Aeronautical Systems
Lockheed Martin Electronics Platform Integration - Oswego, NY
Lockheed Martin Global Telecommunications
Lockheed Martin Management and Data Systems
Sanders, A Lockheed Martin Company
Litton/Ingalls Shipbuilding
Litton/PRC
M. Rosenblatt & Son
Bell Helicopter Textron, Inc.
Halter-Bollinger Joint Venture, L.L.C.
Acquisition Logistics Engineering
L-3 Communications East
PROSOFT
Whitney, Bradley and Brown, Inc.
Science Applications International Corporation (Systems Integrator)
Marinette Marine Corporation
Sikorsky Aircraft Corporation
Soza & Company, Ltd.
Bath Iron Works
AMSEC
Fuentez Systems Concepts, Inc.
Gibbs & Cox, Inc.

The following are GAO’s comments on DOT’s letter dated April 19, 2001.

1. Our report notes that the Coast Guard took many innovative steps and recognizes that the agency’s procedures and management structure for the planning phase of the Deepwater project were excellent. While its management during the planning phase provides a solid foundation for the project, the acquisition phase presents considerably tougher challenges. By almost everyone’s assessment, the acquisition strategy is a high-risk, untried approach to procuring deepwater assets. Whether the Coast Guard has adequately addressed these risks will not be known for years to come. Furthermore, the Coast Guard’s handling of remarks and suggestions made by members of the peer review panel is largely unknown at this point.

2. The contracting approach lacks flexibility in several key areas. First, it requires sustained funding at planned levels of more than $500 million for 2 or more decades. Second, it offers no true means to ensure competition for major components as a lever to minimize costs.
Third, if planned funding levels are not realized, it opens the door to added costs because the Coast Guard would have to renegotiate costs and delivery dates—all in a sole-source environment. Finally, changing the systems integrator after the contract is awarded—while doable—would likely be costly both in terms of dollars and delays in the project.

3. The added dollars expected from the Western Hemisphere Drug Elimination Act have allowed the budget targets to increase substantially from prior year (fiscal year 2001) targets. However, OMB told us that had it not been for the act, the large increase in the Coast Guard’s targets for capital projects would have been difficult to achieve given the budgetary environment. While targets may increase somewhat in future years, any large increases would require new funding sources or shifts in funding from other entities, such as FAA and Amtrak, which also have critical capital needs. Already, the funding requirements for the project are almost half a billion dollars more than OMB budget targets through 2006. Given the uncertainty of future funding, it would be unwise and fiscally imprudent for the Coast Guard to blindly proceed with an RFP that contains a planned funding stream of $500 million, hoping that funding at planned levels will materialize later. OMB echoed our position on this issue.

4. While the Coast Guard has the flexibility to alter project plans based on reduced funding levels in future years, the Coast Guard would likely pay dearly for this. The Coast Guard recognizes this but steadfastly opposes including a lower, more realistic funding level in its RFP. It has essentially rejected our concerns and those of OMB in this area and has adopted a position that runs counter to sound “best practices” for capital planning that are based on widely accepted OMB guidance.

5.
The Coast Guard’s characterization of the peer review panel’s deliberations and findings is overly optimistic and overstates the positive results from the panel. Our review of the transcript of the panel’s deliberations showed that there was not the unanimous consensus among panel members on the efficacy of the acquisition approach that the Coast Guard portrayed. For example, the panel member from the Office of Federal Procurement Policy voiced numerous concerns about whether a thorough and honest risk analysis of the acquisition approach had been done and whether adequate mitigation and management plans are in place. Another member echoed this position, while another remarked that much work is needed before the RFP should be issued to contractors. We believe that such concerns by panel members do indeed refer to potentially serious and unmitigated risks that should not be dismissed lightly. In addition, given that panel members were not given the RFP or the acquisition plan prior to the panel meeting, we question the thoroughness of the panel’s results and the depth to which it explored key questions. It is evident from the transcript of the panel discussions and our observations of the proceedings that panel members may not have understood many issues in the depth necessary to make informed observations and suggestions. For example, one panel member remarked in his summary at the end of the panel discussion that there was an information void on some issues when the panel began discussions, and that having more detailed information on the acquisition strategy ahead of time would have been useful. OMB officials who observed the peer review session told us that they felt the same way. Moreover, the panel members were not asked to determine whether this approach represents the “best approach among all possible alternatives,” nor were panel members given the time or the information necessary to make such a determination.

6.
To provide funding for the Deepwater Project, the Coast Guard will likely have to keep funding for other capital projects at levels substantially lower than those experienced over the last decade or more. It is unrealistic to believe that other non-deepwater capital needs will be minimal for the entire duration of the Deepwater Project. The DOT Office of Inspector General, for example, has recently identified millions of dollars of potential capital projects associated with the Coast Guard’s search and rescue program. Also, in its current fiscal year 2002 capital plan, the Coast Guard may have significantly understated amounts needed for information technology and other projects. For example, the current plan projects information technology funding needs of only $3 million in 2005; its capital plan of just a year ago cited information technology project needs of $31.4 million in 2005. Similarly, estimates of funding needs for shore facilities were $128.8 million in the 2001 plan and only $58.7 million in the fiscal year 2002 plan. Either the Coast Guard grossly overstated its non-deepwater needs in the fiscal year 2001 plan or it cut deeply into these projects for the fiscal year 2002 plan to accommodate funding for the Deepwater Project. Regardless, this leaves serious questions about whether the Coast Guard is understating funding needs for non-deepwater projects to give the appearance that the Deepwater Project funding needs can be met in the next 5 years.

7. While the Coast Guard has provisions in the RFP that allow it to exit the contract if price or performance is unsatisfactory, the practical reality is that changing the systems integrator will be costly, and there is a natural reluctance for an agency to do so. Members of the peer review panel remarked similarly on this issue.
In addition, complete, reliable data on total ownership costs and operational effectiveness may be absent, especially in the project’s early years, making those measures less effective as a means to evaluate contractor performance. Members of the peer review panel made this point as well. Also, the inclusion of contract incentives does not guarantee that competition will exist among subcontractors. The panel did not reach unanimous consensus that such incentives would necessarily be effective in this regard, as the Coast Guard contends.

8. The Coast Guard did not comment on two recommendations that need to be addressed. Developing an effective assessment tool to evaluate the technology maturity of major equipment and components is critical to keep a tight rein on costs. Also, ensuring that future budget requests for deepwater components are for useful segments is essential. OMB strongly concurred with our view on these issues. Finally, keeping the Congress apprised of progress being made in achieving the baseline goals of minimizing costs and improving operations is vital as a basis for holding the Coast Guard accountable to the Congress and the administration for the significant investment in the project.

In addition to those named above, Marie Ahearn, Neil Asaba, Naba Barkakati, Alan Belkin, Christine Bonham, Sue Burns, John Christian, Tom Collis, Ralph Dawn, Paul Francis, David Hooper, Richard Hung, Matt Lea, Sterling Leibenguth, Lynn Musser, Madhav Panwar, Colleen Phillips, David Robinson, Katherine Schinasi, Stanley Stenersen, Mike Sullivan, and William Woods made key contributions to this report.
The Coast Guard is in the final stages of planning the largest procurement project in its history—the modernization or replacement of more than 90 cutters and 200 aircraft used for missions more than 50 miles from shore. This project, called the Deepwater Capability Replacement Project, is expected to cost more than $10 billion and take 20 years or longer to complete. Congress and the Coast Guard are at a major crossroads with the project. Planning is essentially complete, and Congress will soon be asked to commit to a multibillion-dollar project that will define the way the Coast Guard performs many of its missions for decades to come. The deepwater acquisition strategy is unique and untried for a project of this magnitude. It carries many risks that could potentially cause significant schedule delays and cost increases. The project faces risks in the following four areas: (1) planning the project around annual funding levels far above what the administration has told the Coast Guard it can expect to receive, (2) keeping costs under control in the contract's later years, (3) ensuring that procedures and personnel are in place for managing and overseeing the contractor once the contract is awarded, and (4) minimizing potential problems with developing unproven technology. All of these risks can be mitigated to varying degrees, but not without management attention.
DOD operates six geographic combatant commands, each with an assigned area of responsibility (see fig. 1). Each geographic combatant commander carries out a variety of missions and activities, including humanitarian assistance and combat operations, and assigns functions to subordinate commanders. Each command is supported by a service-component command from each of the services. All of these component commands play significant roles in preparing detailed posture plans and providing the resources that the combatant commands need to execute operations in support of their missions and goals.

DOD’s facilities are located in a variety of sites that vary widely in size and complexity. Some sites are large complexes containing many facilities to support military operations, housing, and other support facilities, while others can be as small as a single radar site. To develop common terminology for posture planning, DOD has identified three types of installations that reflect the large-to-small scale of DOD’s enduring overseas posture—main-operating bases, forward-operating sites, and cooperative security locations. Main-operating bases are overseas installations with relatively large numbers of permanently stationed operating forces and robust infrastructure, including family support facilities. Forward-operating sites are scalable installations intended for rotational use by operating forces in lieu of permanently stationed forces that DOD would have to support. Because they are scalable, they may have a large capacity that can be adapted to provide support for combat operations, and therefore DOD populations at these locations can vary greatly, depending on how they are used at any given time. Cooperative security locations are overseas installations with little or no permanent U.S. military presence, which are maintained with periodic U.S. military, contractor, or host-nation support.
DOD populations at these locations can vary greatly, as they do at forward-operating sites, depending on how they are being used at any given time.

A hierarchy of national and defense guidance informs the development of DOD’s global posture. The National Security Strategy, issued by the President at the beginning of each new Administration and annually thereafter, describes and discusses the worldwide interests, goals, and objectives of the United States that are vital to its national security, among other topics. The Secretary of Defense provides corresponding strategic direction in the National Defense Strategy. Furthermore, the Chairman of the Joint Chiefs of Staff provides guidance to the military through the National Military Strategy. The department has developed new guidance for global defense posture in numerous documents, principally the 2008 Guidance for Employment of the Force and the 2008 Joint Strategic Capabilities Plan. The Guidance for Employment of the Force consolidates and integrates planning guidance related to operations and other military activities, while the Joint Strategic Capabilities Plan implements the strategic policy direction provided in the Guidance for Employment of the Force and tasks combatant commanders to develop theater campaign, contingency, and posture plans that are consistent with the Guidance for Employment of the Force. The Theater Campaign Plan translates strategic objectives to facilitate the development of operational and contingency plans, while the Theater Posture Plan provides an overview of posture requirements to support those plans and identifies major ongoing and new posture initiatives, including current and planned military construction requirements. Figure 2 illustrates the relationships among these national and DOD strategic guidance documents.
DOD is currently transforming its military posture in South Korea through a series of four interrelated posture initiatives, but has not estimated the total costs involved or provided an analysis of alternatives for one initiative—tour normalization—that was initiated by the Commander, USFK, and that could potentially affect tens of thousands of DOD personnel and dependents and increase costs by billions of dollars. Although DOD has not fully estimated the total cost of its posture initiatives, we obtained USFK and Army estimates for each initiative. These estimates, which focus primarily on construction costs, indicate that the magnitude of costs will be significant—almost $18 billion in costs through fiscal year 2020 have been identified, to be borne either by the Government of South Korea or by DOD (see table 1). The largest of these four initiatives and the primary long-term cost driver is tour normalization—extending the tour length of military service members and moving thousands of their dependents from the United States to South Korea. According to USFK officials, the decision to move forward with tour normalization was made to achieve certain USFK strategic objectives, such as to provide military commanders greater flexibility in how U.S. military forces assigned to South Korea are used and to improve the quality of life for military service members and their families. However, prior to making the decision to move forward with the tour normalization initiative, DOD did not complete a business case analysis that would evaluate the quantifiable and nonquantifiable benefits, advantages, or disadvantages of competing alternatives in order to identify the most cost-effective means to satisfy its strategic objectives. As a result, DOD is embarking on an initiative that involves moving thousands of U.S.
civilians to South Korea and constructing schools, medical facilities, and other infrastructure to support them without fully understanding the costs involved or considering potential alternatives that might more efficiently achieve U.S. strategic objectives.

Four major, interrelated initiatives that will affect posture are under way in South Korea. Two of these initiatives—the Yongsan Relocation Plan and the Land Partnership Plan—will consolidate U.S. military and civilian personnel from Seoul and sites north of Seoul to a site south of Seoul. The third will establish and maintain United States military troop strength at 28,500 soldiers, and the fourth—tour normalization—will provide for 36-month accompanied tours (personnel who bring their families with them) for military personnel stationed in South Korea. USFK officials have estimated that the total DOD population in South Korea could increase from approximately 54,000 to 84,000 under these initiatives (see fig. 3). DOD has not estimated the full cost to implement these initiatives, but as of January 2011, DOD had identified approximately $18 billion in costs from the start of the initiatives through fiscal year 2020, either to the Government of South Korea or to DOD, as described below. According to USFK and State Department officials, the United States and South Korea are currently consulting on the extent to which Special Measures Agreement funding will be applied to these initiatives.

Yongsan Relocation Plan ($8.3 billion through fiscal year 2016). According to USFK officials, this is an initiative agreed to between the governments of the United States and South Korea in October 2004. The agreement involves the relocation of U.S. Army Garrison Yongsan, which contains the headquarters for U.S. 8th Army, USFK, Combined Forces Command, and the United Nations Command. This initiative will move most DOD personnel and their families—currently more than 17,000 people—from U.S.
Army Garrison Yongsan, an installation located in the heart of Seoul, to U.S. Army Garrison Humphreys (Camp Humphreys), so that the land at Yongsan can be returned to South Korea. It is anticipated that South Korea will fund much of the construction costs for this initiative; USFK officials estimate that it will cost South Korea about $6.3 billion and the United States about $2 billion in construction costs through fiscal year 2016.

Land Partnership Plan ($4 billion through fiscal year 2016). This realignment, agreed to between the governments of the United States and South Korea in March 2002, will move U.S. troops who are currently stationed north of Seoul farther south to Camp Humphreys, and the land they vacate is intended to be returned to South Korea. This move will involve about 7,000 to 8,000 servicemembers, primarily from the 2nd Infantry Division. The total estimated construction costs for the Land Partnership Plan are nearly $4 billion, about $3.4 billion of that to be funded by the United States.

28,500 U.S. troops ($0.245 billion through fiscal year 2016). According to the State Department, in 2008, the Presidents of the United States and South Korea agreed that U.S. troop strength would reach and be maintained at 28,500. USFK officials estimate that this initiative will cost the United States about $245 million during the 5-year period of fiscal years 2012 through 2016 ($140 million in military construction and $105 million in operation and support costs).

(DOD defines dwell time as the period of time between the release from involuntary active duty and the reporting date for a subsequent tour of active duty pursuant to 10 U.S.C. § 12302. Such time includes any voluntary active duty performed between two periods of involuntary active duty pursuant to 10 U.S.C. § 12302.)

consideration for back-to-back non-accompanied deployments.
Tour normalization would reduce uncertainty for service members and their families, and affirm the United States’ commitment to the U.S.-Korean alliance and the region. It enables a more adaptive and flexible U.S. and combined-force posture on the Korean peninsula to strengthen the alliance’s deterrent and defense capabilities and long-term capacity for regional and global defense and security cooperation, according to the Commander’s Narrative Assessment. DOD has not finalized an implementation schedule for tour normalization as it continues to evaluate alternative implementation schedules and associated costs. As of September 2010, USFK officials estimated that the total DOD population in South Korea was approximately 52,800, including 11,600 dependents. One approach developed by USFK officials for implementing tour normalization called for completing the construction of facilities and movement of dependents to South Korea by 2020, except for the facilities and dependents associated with service members at Kunsan Air Base (the Air Force has yet to decide whether tour normalization will be implemented at Kunsan Air Base). Under that schedule, initial steps to implement tour normalization, such as increasing the number of accompanied tours in South Korea, were expected to be completed in fiscal year 2011, when USFK officials estimated the total DOD population in South Korea would be about 54,000. Follow-on implementation steps would increase the DOD population to about 60,000 by 2016, and 76,000 by 2020, according to USFK estimates (see fig. 4). If DOD implements tour normalization at Kunsan Air Base, USFK estimated that this would occur after 2020, and the total DOD population on the South Korean peninsula could increase to about 84,000. Because DOD is still analyzing alternative tour normalization implementation schedules, the estimated costs have yet to be fully defined and have been changing.
USFK officials have estimated that based on the 2020 implementation schedule, the cost to implement tour normalization for all services (including military construction, family housing, personnel, and operation and maintenance costs) would be about $5.1 billion from fiscal year 2012 through fiscal year 2020, although these estimates are very preliminary and likely to change. Additional costs estimated by USFK and the Army include the following:

USFK estimated that $1.5 billion would be needed to implement tour normalization at Kunsan Air Base after fiscal year 2020. However, according to Air Force officials, this estimate covers only construction costs; the total implementation costs could be much higher.

The Army calculated an extended cost estimate for tour normalization from 2021 through 2050. That estimate shows that tour normalization could increase Army operations and support costs by $15.7 billion or more from 2021 through 2050 in areas such as increased personnel and medical expenses.

On October 18, 2010, the Secretary of Defense announced in a memo to the Secretaries of the Military Departments and similar officials from other DOD organizations that he had directed USFK and the military services to “proceed with full Tour Normalization for Korea, as affordable, but not according to any specific timeline.” He also directed the Army to execute the Humphreys Housing Opportunity Program for the construction of 1,400 units and to pursue Military Construction funding for additional family housing. However, the Secretary directed that no later than March 31, 2011, USFK—along with PACOM, the military services, and other relevant DOD organizations—was to provide the Secretary with a feasible and affordable plan to continue the momentum toward full tour normalization on the Korean peninsula. He directed the Cost Analysis and Program Evaluation organization to evaluate the plan and cost estimates to establish a “no less than” funding level to be identified on an annual basis.
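As a rough consistency check, the initiative-level cost figures reported above can be tallied against the "almost $18 billion" in costs identified through fiscal year 2020. The sketch below simply restates the USFK and Army figures quoted in this chapter; the labels are shorthand for the initiatives, and the tally is illustrative only, not an official GAO or DOD calculation.

```python
# Tally the per-initiative cost figures quoted in the text, in billions of
# dollars, as identified through fiscal year 2020. Figures restate the
# USFK/Army estimates reported above; this is an illustrative check only.
initiatives = {
    "Yongsan Relocation Plan (through FY2016)": 8.3,
    "Land Partnership Plan (through FY2016)": 4.0,
    "28,500 U.S. troops (FY2012-FY2016)": 0.245,
    "Tour normalization (FY2012-FY2020)": 5.1,
}

total = sum(initiatives.values())
print(f"Identified costs through FY2020: ${total:.3f} billion")

# The tally comes to $17.645 billion, consistent with the "almost
# $18 billion" figure cited in the text. (Excludes the $1.5 billion Kunsan
# estimate and the Army's $15.7 billion extended estimate, both post-2020.)
assert round(total, 3) == 17.645
```

Note that the post-2020 items (Kunsan Air Base and the Army's 2021-2050 extended estimate) fall outside the fiscal year 2020 window and so are deliberately excluded from the tally.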
The Secretary stated he would continue to closely monitor changes in timelines, requirements, and cost as he considered how to most effectively implement the overall tour normalization plan.

Although detailed cost estimates are being prepared at the direction of the Secretary of Defense as alternative implementation schedules are considered, DOD has not developed a business case analysis that would include an analysis of alternatives to support the decision to move forward with tour normalization, and did not have one planned at the time of our report. According to the GAO Cost Estimating and Assessment Guide, a business case analysis is a comparative analysis that presents facts and supporting details among competing alternatives. A business case analysis considers not only all the life-cycle costs of competing alternatives, but also quantifiable and nonquantifiable benefits. This analysis should be unbiased by considering all possible alternatives and should not be developed solely for supporting a predetermined solution. Moreover, a business case analysis should be rigorous enough that independent auditors can review it and clearly understand why a particular alternative was chosen. A business case analysis seeks to find the best value solution by linking each alternative to how it satisfies a strategic objective. Each alternative should identify the relative life-cycle costs and benefits; methods and rationale for quantifying the life-cycle costs and benefits; the effect and value of cost, schedule, and performance tradeoffs; sensitivity to changes in assumptions; and risk factors. On the basis of this information, the business case analysis then recommends the best alternative. Our Cost Assessment Guide also states that in addition to supporting an investment decision, the business case analysis should be considered a living document and should be updated often to reflect changes in scope, schedule, or budget.
In this way, a business case analysis is a valuable tool for validating decisions to sustain or enhance the program.

DOD has focused on and produced tour normalization cost estimates and continues to refine them, but has not addressed the other aspects of a business case analysis—which, according to the GAO Cost Estimating and Assessment Guide, would include analyzing alternatives to tour normalization and determining the associated costs, benefits, advantages, and disadvantages of any viable alternative. For example, USFK officials stated that tour normalization was driven by the USFK Commander’s strategic objectives to (1) obtain greater flexibility in deploying U.S. forces assigned to South Korea and (2) improve military families’ quality of life by reducing the amount of time they were separated by deployments. However, DOD has not clearly demonstrated the extent to which tour normalization will actually achieve these objectives or the total costs involved relative to other alternatives. Specifically, a January 2006 joint statement of the United States and South Korea affirms that South Korea, as an ally, fully understands the rationale for the transformation of the U.S. global military strategy and respects the necessity for strategic flexibility of the U.S. forces in South Korea. U.S. Embassy officials in Seoul confirmed that there are currently no legal impediments to prevent the United States from deploying its forces, and under existing agreements, DOD has flexibility in deploying its forces to other countries or regions as necessary. However, USFK officials told us that in their view, the Government of South Korea and the general public remained reluctant to support such deployments after the United States deployed an Army brigade to Iraq in 2004 that did not return to South Korea. According to the State Department, in April 2008 the Presidents of the United States and South Korea agreed to maintain the United States force level on the peninsula at 28,500.
regions. In those cases, servicemembers would be separated from their immediate family members in South Korea when they are deployed, and family members residing in South Korea would be separated from their extended family network in the United States.

The financial risks of implementing tour normalization without a business case analysis to support the decision are high, given the magnitude of the resources that will be required and the impact on military construction plans. For example, most of the military dependents who would move to South Korea under this initiative would move to Camp Humphreys. At the time of our visit to that location in March 2010, the construction plan for Camp Humphreys included adding 2,328 acres to the existing 1,210-acre camp, increasing the total size to 3,538 acres. The plan also included constructing more than a thousand new structures, including five new schools and an assortment of housing and other support facilities, at an estimated cost of approximately $13.1 billion. This construction plan and the estimated cost combine construction of new facilities and infrastructure to accommodate military service members and dependents associated with the Yongsan Relocation Plan, the Land Partnership Plan, and initial construction associated with tour normalization. At the time of our visit, significant land reclamation was already under way to support the overall transformation efforts, and new construction had started on facilities such as family housing, recreational facilities, and a family-style water park (see fig. 5).

However, the plan for Camp Humphreys at the time of our visit did not include building the necessary infrastructure to accommodate the population expected to be added if tour normalization is fully implemented.
DOD officials stated that if tour normalization were to be fully implemented, Camp Humphreys would require seven additional schools—as well as an increase in other infrastructure such as housing, commissaries, and postal facilities. We were also told that the land area currently dedicated to new construction would not accommodate these additional buildings, and therefore existing building plans would have to be modified and additional land might have to be acquired. The Army Corps of Engineers official responsible for executing the building plan at Camp Humphreys stated that accommodating the total tour normalization population would call for a modified or new plan for the camp and that, with construction already under way, it would be critical to modify the plans as soon as possible, because costly modifications to building plans could result from changing facility requirements after major construction has begun. However, in our discussions with Office of the Secretary of Defense officials from the Policy and Comptroller’s office, we were told that because the construction plan for Camp Humphreys combines facility and infrastructure requirements for the Yongsan Relocation, Land Partnership, and tour normalization initiatives, they were unable to determine the extent to which tour normalization has affected construction plans at Camp Humphreys. Tour normalization will also have a major impact on posture costs and pilot training capabilities at Osan Air Base, located a few miles away from Camp Humphreys. For example, during our visit to Osan Air Base, officials told us that one of the challenges they face in implementing tour normalization is the limited amount of space available to construct the required housing, parking, child development center, commissary, six schools, and other quality-of-life facilities. 
At the time of our visit to Osan Air Base, base officials provided an overview of their plans to implement tour normalization, which required the demolition of approximately 20 or more existing facilities and included 51 construction projects. (All but one of these projects were planned to start in fiscal year 2012 or later.) Also, according to Air Force officials, the Air Force’s training capabilities in South Korea for its F-16 pilots are inadequate; lengthening tours to 2 or 3 years would exacerbate this training deficiency. Specifically, Air Force officials stated that their pilots do not get enough training time on South Korean training ranges because the pilots must share the ranges with South Korean pilots. In addition, South Korean ranges do not offer all of the training these pilots need. Currently, this reduced training capacity is deemed acceptable by the Air Force because pilots are reassigned after a 1-year tour and can update their training at their next duty station. However, according to Air Force officials, if 3-year tours are established for their pilots in South Korea, they may have to send the pilots on training missions to Alaska—the closest site with the required capabilities—to maintain the necessary qualification levels. The fuel and other operating expenses for these training missions would add further cost.

Without a business case analysis that identifies alternative courses of action and their associated life-cycle costs, potential benefits, advantages, and disadvantages, DOD is embarking on an initiative that involves moving thousands of U.S. civilians to South Korea and constructing schools, medical facilities, and other infrastructure to support them without fully understanding the costs involved or considering potential alternatives that might more efficiently achieve its strategic objectives.
Furthermore, blending the construction requirements for the Yongsan Relocation Plan, Land Partnership Plan, and tour normalization has obscured the extent to which construction at Camp Humphreys has been or could be affected by tour normalization decisions. As previously discussed, the Secretary of Defense requested a feasible and affordable plan to continue the momentum toward full tour normalization on the Korean peninsula; this plan could help determine the future of the initiative. However, according to USFK and OSD officials, a business case analysis has not been included as part of this decision process. DOD has embarked on a major realignment of U.S. military posture in mainland Japan, Okinawa, and Guam, but has not developed comprehensive cost estimates for these initiatives; as a result, DOD is unable to ensure that all costs are fully accounted for or determine if resources are adequate to support the program. In February 2005, the United States Secretary of State and Secretary of Defense hosted Japan’s Minister for Foreign Affairs and its Minister of State for Defense and Director-General of the Defense Agency in a meeting of the United States-Japan Security Consultative Committee. During that meeting, the officials reached an understanding on common strategic objectives and underscored the need to continue examinations of the roles, missions, and capabilities of Japan’s Self-Defense Forces and the U.S. Armed Forces in pursuing those objectives. They also decided to intensify their consultations on realignment of U.S. force structure in Japan. On October 29, 2005, the Security Consultative Committee released a document titled U.S.-Japan Alliance: Transformation and Realignment for the Future that, among other points, approved recommendations for realignment of U.S. 
military forces in Japan and related Japan Self Defense Forces, in light of their shared commitment to maintain deterrence and capabilities while reducing burdens on local communities, including those in Okinawa. Both sides recognized the importance of enhancing Japanese and U.S. public support for the security alliance. In May 2006, a United States-Japan Roadmap for Realignment Implementation was released that provided details on the approved recommendations for realignment and stated that the construction and other costs for facility development in the implementation of these initiatives will be borne by the Government of Japan unless otherwise specified. The Roadmap also stated that the U.S. Government will bear the operational costs that arise from implementation of these initiatives, and the two Governments will finance their realignment-associated costs consistent with their commitments to maintain deterrence and capabilities while reducing burdens on local communities. The U.S. and Japanese governments signed an agreement in February 2009 that implemented certain aspects of the Roadmap related to the relocation of the III Marine Expeditionary Force from Okinawa to Guam. As of December 2009, DOD had approximately 45,000 servicemembers stationed in Japan, with approximately 24,600 stationed in Okinawa. In addition, DOD had almost 39,800 dependents who accompanied these servicemembers: 20,250 in mainland Japan and 19,521 in Okinawa. The planned end state of the announced realignment initiatives will affect DOD posture in several areas of Japan, including servicemembers, dependents, and/or military forces located in Misawa, Yokota, Camp Zama, Yokosuka, Atsugi, Iwakuni, Kadena, and Futenma (see fig. 6). 
For example, DOD’s realignment initiatives, as presented in the Roadmap, would include relocating a joint U.S./Japan Air Defense Command headquarters to Yokota Air Base, relocating a carrier air wing from Atsugi to Iwakuni, consolidating several Marine Corps bases in Okinawa, and relocating Marine Corps units to Guam. These and other initiatives are discussed in greater detail below. Figure 6 also illustrates the approximate location of the epicenter of the earthquake that struck off the east coast of Japan on March 11, 2011. The effect of this and the ensuing tsunami and nuclear reactor incidents on DOD posture realignment initiatives is not yet known. Although DOD and the Government of Japan have embarked on these initiatives, DOD has not estimated the total costs associated with them. However, USFJ officials were able to provide us with details from an October 2006 Government of Japan budget estimate study for realignment costs covering Japan’s fiscal years 2007 through 2014. According to USFJ officials, the Government of Japan has not provided any updates to these costs, so they are the best estimates of Government of Japan costs available at this time. We also obtained limited cost information associated with initiatives in Guam and the Northern Mariana Islands that was developed by the Marine Forces, Pacific Command. Taken together, the available cost information we gathered indicates that posture initiative costs will be significant: we identified approximately $29.1 billion, primarily construction costs, for these initiatives (see table 2). According to USFJ and OSD officials, DOD is now in the process of developing cost estimates for these initiatives. These costs may include, among other items, the cost to outfit, furnish, and maintain buildings constructed by Japan and to move personnel and equipment into consolidated locations. Carrier air wing move from Atsugi to Iwakuni ($1.4 billion—Japan budget estimate only). 
As outlined in the U.S.-Japan Roadmap for Realignment Implementation (the Roadmap), Carrier Air Wing 5, a Navy air wing paired with the aircraft carrier USS George Washington (currently stationed at Fleet Activities Yokosuka, Japan), would move its headquarters and fixed-wing flight operations from Naval Air Facility Atsugi to Marine Corps Air Station Iwakuni. In 2006, Japan estimated that it would spend approximately $1.4 billion to construct new facilities under this initiative, but DOD has not estimated its own costs. Under this initiative, the fixed-wing aircraft attached to Carrier Air Wing 5 would move to Iwakuni, but according to Navy officials, the rotary-wing squadrons would stay at Atsugi. In addition, Marine Corps rotary-wing aircraft currently located at Iwakuni would eventually relocate to Guam as part of the Marine Corps relocation from Okinawa to Guam described below. Camp Zama/Sagami Depot ($0.3 billion—Japan budget estimate only). The intent of this initiative is to improve command and control capabilities between the U.S. Army and the Japanese Ground Self Defense Force by transforming the Army’s headquarters at Camp Zama, establishing the headquarters of the Japanese Ground Self Defense Force Central Readiness Force there, and giving Japanese helicopters access to the Army’s Kastner Army Airfield at Camp Zama. In addition, a battle command training center and other support facilities are to be constructed at Sagami General Depot. The United States would also return portions of both Camp Zama and Sagami General Depot to Japan for local redevelopment. According to USFJ officials, in 2006, Japan estimated it would spend approximately $300 million to construct new facilities under this initiative, but DOD has not estimated its own costs. Aviation Training Relocation ($0.3 billion—Japan budget estimate only). In order to reduce the impact of noise on communities surrounding U.S. 
air facilities at Kadena Air Base, Naval Air Facility Misawa, and Marine Corps Air Station Iwakuni and to enhance bilateral training with the Japanese, aviation training would be relocated to six Japanese Air Self Defense Force facilities. Both the United States and Japan would work toward expanding the use of Japanese Air Self Defense Force facilities for bilateral training and exercises in the future. In 2006, Japan estimated it would spend approximately $300 million to construct new facilities for this initiative, but DOD has not estimated its own costs. Yokota Air Base and Air Space (No cost estimate provided). The Japan Air Self Defense Force Air Defense Command and relevant units would relocate to Yokota Air Base, and a bilateral master plan would be developed to accommodate facility and infrastructure requirements. A bilateral joint operations coordination center, established at Yokota Air Base, would include a collocated air and missile defense coordination function. Measures would be pursued to facilitate the movement of civilian aircraft through the Yokota airspace while satisfying military operational requirements. Okinawa consolidation ($4.2 billion—Japan budget estimate only). Following the relocation of Marines to the Futenma Replacement Facility, the return of Marine Corps Air Station Futenma to the Japanese, and the transfer of III Marine Expeditionary Force personnel to Guam, four additional U.S. facilities and part of a fifth facility in southern Okinawa would be vacated (see fig. 7). The Marines in these locations plan to move to four primary locations in the northern, less crowded part of Okinawa. In 2006, Japan estimated it would spend approximately $4.2 billion to construct projects under this initiative, but DOD has not estimated its own costs. Futenma Replacement Facility ($3.6 billion—Japan budget estimate only). 
A new runway and surrounding infrastructure for the Marine Corps are to be built at Camp Schwab to replace Marine Corps Air Station Futenma; this new facility is known as the Futenma Replacement Facility. DOD plans to relocate a Marine Aviation Group, Logistics Squadron, and several helicopter squadrons to the Futenma Replacement Facility once it is complete. Although plans for the new air base have not been finalized, one option includes the construction of two runways aligned in a V shape that would extend into the Oura and Henoko Bays, while another option would require a single runway. Both options would require significant reclamation of the sea to complete. Figure 8 below shows some of the current facilities at Camp Schwab and the estimated level of landfill that would be required to construct the runway(s). The Marine Corps relocation to the Futenma Replacement Facility at Camp Schwab is planned to occur when the facility is fully operationally capable. In 2006, Japan estimated it would spend approximately $3.6 billion for this initiative, but DOD has not estimated what its costs will be. Marine Corps Relocation from Okinawa to Guam ($17.4 billion—Japan budget estimate and DOD estimated costs). As part of the military posture realignment on Okinawa, about 8,600 Marines and their 9,000 dependents are to transfer from several locations in Okinawa to Guam. It is expected that the 8,600 Marines who relocate to Guam will include the III Marine Expeditionary Force Command Element, the 3rd Marine Division Headquarters and 3rd Marine Logistics Group Headquarters, the 1st Marine Air Wing Headquarters, and the 12th Marine Regiment Headquarters. The governments of Japan and the United States have agreed to share the costs of transferring the Marines from Okinawa to Guam, with the Government of Japan anticipated to provide about $6.1 billion and the United States anticipated to provide an additional $4.2 billion (in U.S. 
fiscal year 2008 dollars) for construction of new facilities and infrastructure development on Guam. In addition, the Marine Corps estimates that an additional $7.1 billion may be required to complete the move to Guam: $4.7 billion for additional construction costs and $2.4 billion for costs associated with utilities, labor, and procurement of military equipment. However, these Marine Corps estimates have not been validated by DOD. This transfer of Marine Corps personnel and families is part of a larger DOD effort to increase the military posture on Guam, including Air Force initiatives to add intelligence, surveillance, and reconnaissance capabilities; Navy initiatives related to new pier construction and a new hospital; and an Army initiative related to installation of an air and missile defense system. Figure 9 illustrates the locations where these initiatives will be implemented on the island. If implemented as planned, these initiatives will increase the U.S. military presence on Guam from about 15,000 in 2009 to more than 39,000 by 2020, which will increase the current population of the island by about 14 percent over those years. We have issued a series of reports discussing various aspects of the military buildup on Guam and the costs and challenges DOD will face in accomplishing those initiatives, including obtaining adequate funding and meeting operational needs, such as mobility support and training capabilities. For example, we have reported that DOD cost estimates for the military buildup in Guam do not include the estimated costs of all other defense organizations that will be needed to support the additional military personnel and dependents who will relocate to Guam. Expanding training capabilities in the Northern Mariana Islands ($1.9 billion). According to Marine Corps officials, independent of the progress made on the initiatives in Japan and Guam, the Marine Corps will proceed with constructing new training areas in the Pacific. 
Some training areas are expected to be constructed on Guam for the Marines. However, the environmental impact statement (EIS) for the Marine Corps’ move to Guam found that Guam cannot accommodate all training for the realigned Marine Corps forces. DOD has identified the nearby island of Tinian (100 miles away) and other islands in the Northern Mariana Islands as locations that could provide additional land for training. Marine Corps officials estimate that building the training range in the Northern Mariana Islands could cost approximately $1.9 billion or more. Of that amount, Marine Corps Pacific officials identified $1 billion in funding requirements from fiscal years 2012 through 2015 to cover costs such as military construction, planning and development, environmental compliance, and combat arms training ranges. The remaining cost for full development of the training capabilities and capacity in the Northern Mariana Islands was at least $900 million over an unspecified period of time, according to the Marine Corps officials. According to DOD officials, comprehensive cost estimates for posture initiatives in Japan, including all costs that will be incurred by the United States, have not been completed because there are many uncertainties surrounding initiative implementation schedules. According to Marine Corps officials, and as confirmed by USFJ officials, when the Government of Japan constructs a facility for the United States, it does not outline specific timetables; therefore, it is difficult to determine when a Government of Japan-led construction project will begin or end, which can affect DOD’s ability to estimate future costs. This is important because the United States-Japan Roadmap for Realignment Implementation, dated May 1, 2006, indicates that the Government of Japan will generally bear the construction and other costs for facility development under these initiatives, and the United States will bear the operational costs. 
In January 2011, USFJ officials indicated that the service component commands were in the process of developing some initiative cost estimates, but their efforts were not complete, and no additional information was provided on the status of these efforts or expected results. In the United States Department of Defense Fiscal Year 2011 Budget Request Overview, prepared by the Office of the Under Secretary of Defense (Comptroller), DOD outlined the need to change how the department buys its weapons and other important systems and investments. According to DOD, one way to reform how the department invests is to strengthen front-end scrutiny of costs and not rely on overly optimistic or underestimated costs from the beginning of the investment. In addition, according to the GAO Cost Estimating and Assessment Guide, one method for capturing all cost elements that pertain to a program, from the initial concept through its operations, support, and eventual end, is a life-cycle cost estimate. A life-cycle cost estimate encompasses all past, present, and future costs for every aspect of the program, regardless of funding source. A life-cycle cost estimate usually becomes the program’s budget baseline because the estimate ensures that all costs are fully accounted for, determines when a program is supposed to move from one phase to another, and establishes whether resources are adequate to support the program. Seeking more visibility into DOD posture initiative costs and funding requirements, the Senate Appropriations Committee recently directed DOD to provide comprehensive and routine updates on the status of posture-restructuring initiatives in South Korea, Japan, Guam, and the Northern Mariana Islands (see app. II). The updates should be provided annually, beginning with the submission of the fiscal year 2012 budget request, until the restructuring initiatives are complete or funding requirements to support them are satisfied. 
The updates should address such things as schedule status, facilities requirements, and total costs, including operations and maintenance. If fully responsive to the committee’s reporting direction, DOD status updates should provide needed transparency and visibility into the near- and long-term costs and funding requirements associated with the transformation initiatives. As discussed in our recent report on military posture in Europe, DOD guidance does not require combatant commanders to include comprehensive information on posture costs in their theater posture plans, and as a result, DOD lacks critical information that could be used by decision makers and congressional committees as they deliberate new posture requirements and the associated allocation of resources. The 2008 Joint Strategic Capabilities Plan requires that each combatant command provide, in its theater posture plan, information on the inventory of installations in the combatant commander’s area of responsibility, to include estimates of the funding required for proposed military construction projects. However, this guidance does not specifically require—and therefore PACOM does not report—the total cost to operate and maintain DOD’s posture in Asia, whether those costs are associated with a posture initiative or not. Our analysis shows that operation and maintenance costs are significant. Of the approximately $24.6 billion obligated by the services to support DOD’s posture in Asia from fiscal years 2006 through 2010, approximately $18.7 billion (76 percent) was for operation and maintenance costs. The military services project that operation and maintenance funding requirements will continue at about $2.9 billion annually for fiscal years 2011-2015. However, as previously discussed, DOD has major posture transformation initiatives underway in South Korea, Japan, and Guam that could significantly affect estimates of these future costs. 
For example, according to USFJ and Marine Corps officials, although the Government of Japan has agreed to construct new facilities as part of the realignment of U.S. military forces in Japan, DOD is responsible for the costs to furnish, equip, and maintain those facilities to make them usable, and for operation and support costs, but DOD has not yet estimated those costs. According to USFJ officials, in Okinawa alone, Japan would build approximately 321 new buildings and 573 housing units, all of which will need to be furnished and equipped by DOD. Our prior work has demonstrated that comprehensive cost information—including accurate cost estimates—is key to enabling decision makers to make funding decisions, develop annual budget requests, and evaluate resource requirements at key decision points. As we previously reported, the 2008 Joint Strategic Capabilities Plan requires that theater posture plans prepared by each combatant command provide information on each installation in a combatant commander’s area of responsibility, to include identifying the service responsible for each installation, the number of military personnel at the installation, and estimates of the funding required for military construction projects. In accordance with these reporting requirements, PACOM’s 2010 theater posture plan provides personnel numbers, service responsibilities, specified posture initiatives, and associated military construction costs for installations within PACOM’s area of responsibility. However, the Joint Strategic Capabilities Plan does not specifically require the combatant commands to report estimates for other types of costs, such as costs associated with the operation and maintenance of DOD installations, in their theater posture plans. DOD’s operation and maintenance funding provides for a large number of expenses. 
For example, with respect to DOD installations, operations and maintenance funding provides for base operation support and sustainment, restoration, and modernization of DOD’s buildings and infrastructure, funding that—among other purposes—is to keep facilities and grounds in good working order. Because the Joint Strategic Capabilities Plan does not require operations and maintenance costs to be reported, they were not included in PACOM’s 2010 theater posture plan. To obtain a more comprehensive estimate of the cost of defense posture in the Pacific, we gathered, from each military service, obligations data related to military construction, family housing, and operation and maintenance appropriations for installations in the PACOM area of responsibility. We found that military construction and family housing obligations accounted for almost one-quarter of the services’ total obligations against those appropriations from fiscal years 2006 through 2010. In total, from 2006 through 2010, the military services obligated about $24.6 billion to build, operate, and maintain installations in Asia, of which about $5.9 billion (24 percent) was for military construction and family housing, and $18.7 billion (76 percent) was for operation and maintenance of these installations (for a more detailed breakdown of costs at installations in Asia, see app. III). On average, the services reported they obligated almost $5 billion annually for installations in PACOM’s area of responsibility, with $3.7 billion obligated for operations and maintenance (see fig. 10). Data provided by the military services project that they will require approximately $5.2 billion per year through 2015, of which $2.3 billion (45 percent) will be for military construction and family housing and $2.9 billion per year (55 percent) will be for installation operations and maintenance costs. 
However, the operations and maintenance costs may be significantly understated since the military services historically obligated approximately $3.7 billion annually from 2006 through 2010 for installation operation and maintenance costs, as discussed above, and the major transformation initiatives under way in South Korea, Japan, and Guam may significantly increase costs over the long term, potentially through 2015 and beyond, as illustrated by the following examples. Potential for Cost Growth in South Korea: To provide housing for the thousands of dependents whom DOD wants to move to South Korea under tour normalization, DOD has established the Humphreys Housing Opportunity Program, whereby, according to USFK officials, private developers would build housing for DOD families and then recover their investments through the rents that military families pay using DOD overseas housing allowance funds. (Current estimates indicate this allowance would be about $4,200 per month for servicemembers at Camp Humphreys.) Although using the Humphreys Housing Opportunity Program has the potential to lower or even eliminate construction-funding requirements, it would increase the Army housing-allowance costs. One Army estimate indicates that fully implementing tour normalization could increase education and medical costs by almost $10 billion from 2012 through 2050. According to USFK and State Department officials, the United States and Korea are currently consulting on the extent to which Special Measures Agreement contributions (funds provided and expenditures borne by the Government of South Korea to help defray the costs of the U.S. military presence in South Korea) will be used to pay for some military construction costs. 
Based on historical information and the current Special Measures Agreement through 2013, South Korea has provided or agreed to provide the United States on average ₩786 billion per year from fiscal years 2007 through 2013, which is equivalent to approximately $698 million in U.S. dollars. While using these contributions to pay for construction costs can lower DOD’s construction funding requirements, it also eliminates the opportunity DOD has to apply those funds to reduce operation and maintenance costs and related appropriations, thus increasing the required funding in these appropriations. Potential for Cost Growth in Japan: The Government of Japan has historically been a major financial contributor, in the form of host-nation support funding, to help defray DOD posture costs. However, after peaking in 1999 (¥276 billion), funding from Japan has steadily declined. In 2010, the Government of Japan provided ¥187 billion in host-nation support—the lowest total since 1992. One element of host-nation support, the Japanese Facilities Improvement Program—which, as of April 2010, has provided over $22 billion worth of construction for U.S. military facilities—has declined nearly 80 percent since 1993, as illustrated in figure 11. According to an official in the Office of the Secretary of Defense, in January 2011, the governments of Japan and the United States agreed to maintain the 2010 levels of host-nation support for the next 5 years. Any increases in DOD’s operation and support costs would therefore be borne by DOD. As previously discussed, DOD has not estimated the total costs to the United States associated with the posture initiatives in Japan, which could be significant. According to USFJ and Marine Corps officials, although the Government of Japan has agreed to construct new facilities as part of the realignment of U.S. 
military forces in Japan, DOD is responsible for the costs to furnish and equip those facilities to make them usable, and DOD has not yet estimated those costs. Due to the number of buildings involved, these costs could be significant—USFJ officials have estimated that Japan would build approximately 321 new buildings and 573 housing units in Okinawa, all of which will need to be furnished and equipped by the U.S. government. While it is difficult to determine at this time what, if any, impact the March 11, 2011, earthquake, tsunami, and associated nuclear reactor incident will have on current agreements and initiative construction plans, DOD officials have said that there is potential for increases in the cost of materials and labor in Asia. They said that the impact could be similar to that experienced in the United States after Hurricane Katrina. As we reported at that time, service officials at various installations expressed concern about the potential for increases in construction costs because of ongoing reconstruction due to damage caused by Hurricane Katrina, coupled with the large volume of anticipated Base Realignment and Closure construction. Potential for Cost Growth in Guam: In the introduction to the 2009 Agreement, the United States and Japan reaffirmed their intention to spend just over $10 billion together to provide facilities and infrastructure on Guam to accommodate the Marine Corps relocation by 2014. However, as previously discussed, Marine Corps officials estimate it will cost an additional $4.7 billion for military construction and $2.4 billion for operation and maintenance, procurement, and collateral equipment to complete the relocation. These Marine Corps cost estimates have not been reviewed or validated within DOD and are therefore subject to change. If implemented as planned, military posture initiatives will increase the U.S. 
military presence on Guam from about 15,000 in 2009 to more than 39,000 by 2020, a presence that will increase the current island population by about 14 percent over those years. Operation and maintenance costs will increase as the DOD population grows. According to the GAO Cost Estimating and Assessment Guide, affordability is the degree to which a program’s funding requirements fit within the agency’s overall portfolio plan. Making a determination about whether a program is affordable depends a great deal on the quality of its cost estimate. Our prior work has demonstrated that comprehensive cost information is a key component in enabling decision makers to set funding priorities, develop annual budget requests, and evaluate resource requirements at key decision points. We have developed a cost estimation process that, when followed, should result in reliable and valid cost estimates that management can use to make informed decisions about whether a program is affordable within the portfolio plan. Furthermore, guidance from the Office of Management and Budget has highlighted the importance of developing accurate cost estimates for all agencies, including DOD. In addition, our Cost Estimating and Assessment Guide highlights the importance of considering the collective resources needed by all programs designed to support an agency’s goals. Considering the collective program requirements gives decision makers a high-level analysis of their portfolio and the resources they will need in the future. Whether these funds will be available will determine what programs remain in the agency’s portfolio. Because programs must compete against one another for limited funds, it is considered a best practice to perform this affordability assessment at the agency level, not program by program. 
In the case of PACOM posture costs, affordability analysis therefore requires an accurate estimate of the total cost to sustain existing posture—such as the cost to sustain existing DOD infrastructure and facilities in Hawaii and other locations currently in place in the Pacific—to serve as a foundation for deliberating the cost and affordability of new posture initiatives. While approaches may vary, an affordability assessment should address requirements at least through the programming period and, preferably, several years beyond. To improve DOD’s reporting on global posture costs, we recommended in February 2011 that the Secretary of Defense direct the Chairman, Joint Chiefs of Staff, to revise the Joint Strategic Capabilities Plan to require that theater posture plans include the cost of operating and maintaining existing installations and estimate the costs associated with initiatives that would alter future posture. DOD agreed with this recommendation and recognized that the costs associated with operating and maintaining overseas facilities are an important consideration in the posture decision-making process, but DOD’s proposed corrective actions did not fully address the intent of our recommendation. Specifically, the department did not state that it would further modify the Joint Strategic Capabilities Plan to require that the theater posture plans include the cost of operating and maintaining existing installations outside of costs associated with posture initiatives. DOD stated that there are limits to combatant commands’ abilities to include operation and maintenance information in theater posture plans, as those costs are inherently a service function. DOD stated that, when operation and maintenance costs are known, combatant commanders should include them in their theater posture plans. When these costs are unknown—but required for oversight and decision making—the department would require the services to provide appropriate cost detail. 
DOD’s proposed corrective actions would therefore not require the combatant commanders to routinely collect and consider operation and maintenance costs at existing installations (costs that recently have been about $3.7 billion annually in the Pacific) unrelated to posture initiatives as theater posture plans are developed. Furthermore, the department’s proposed action to include operation and maintenance costs in the theater posture plans only when they are known and to require the services to provide additional data only when it is needed for decision making could result in DOD decision makers receiving fragmented posture cost information on an ad hoc basis. Without a comprehensive estimate of the total cost of posture—including existing facilities and infrastructure that will not be affected by any new posture initiatives—and routine reporting of those costs, DOD decision makers and congressional committees will not have the full fiscal context they need to develop and consider DOD’s funding requests for future posture initiatives. Absent further modification to the Joint Strategic Capabilities Plan to require the theater posture plans to include the cost of operating and maintaining existing installations, DOD decision makers are left with the option to require the services to provide these data. DOD posture in Asia provides important operational capabilities and demonstrates a strong commitment to our allies—critical aspects of our national defense. However, in an era of significant budgetary pressures and competition for resources, comprehensive cost information and alternative courses of action must be routinely considered as posture requirements are developed. 
To ensure the most cost-effective approach is pursued, major initiatives, such as tour normalization in South Korea, require not only comprehensive cost estimates but a thorough examination of the potential benefits, advantages, disadvantages, and affordability of viable alternatives before a course of action is selected. However, despite lacking an approved business case that supports the decision to move forward with tour normalization, and despite outstanding questions about the cost and schedule to implement the initiative, DOD is constructing facilities and infrastructure at Camp Humphreys in a manner that combines requirements for multiple initiatives, an approach that makes it difficult to identify what funds or construction activities are at risk if a more cost-effective alternative to tour normalization is identified. Furthermore, across the Pacific region, DOD has embarked on complex initiatives to transform U.S. military posture, and these initiatives involve major construction programs and the movement of tens of thousands of DOD civilian and military personnel and their dependents—at an undetermined total cost to the United States and host nations. Although we have identified potential costs that range as high as $46.7 billion through 2020, and $63.9 billion through 2050, these estimates are volatile and not comprehensive. Furthermore, congressional committees have been presented with individual posture decisions and funding requests that are associated with specific construction programs or initiatives, but those requests lack comprehensive cost estimates and the financial context that such estimates would provide—including long-term costs to complete and annual operation and maintenance costs. Without that context, DOD is presenting Congress with near-term funding requests that will result in significant long-term financial requirements whose extent is unknown. 
To provide DOD and Congress with comprehensive posture cost information that can be used to fully evaluate investment requirements and the affordability of posture initiatives, we recommend that the Secretary of Defense take the following seven actions:

Identify and direct appropriate organizations within the Department of Defense to complete a business case analysis for the strategic objectives that have to this point driven the decision to implement tour normalization in South Korea. This business case analysis should clearly articulate the strategic objectives, identify and evaluate alternative courses of action to achieve those objectives, and recommend the best alternative. For each alternative course of action considered, the business case analysis should address, at a minimum:

- methods and rationale for quantifying the life-cycle costs and benefits;
- relative life-cycle costs and benefits;
- potential advantages and disadvantages associated with the alternative;
- effect and value of cost and schedule trade-offs;
- sensitivity to changes in assumptions; and
- risk factors.

Set specific time frames for the completion of the business case analysis, the Secretary of Defense’s review, and the approval of the selected alternative.

Through the Chairman of the Joint Chiefs of Staff, direct the Commander, United States Forces Korea, to provide a detailed accounting of the funds currently being applied and requested to construct new facilities at Camp Humphreys, identify construction projects that will be affected—directly or indirectly—by a decision to fully implement tour normalization, and provide that information to the Office of the Secretary of Defense with sufficient time to limit investments associated with tour normalization as recommended below. 
Identify and limit investments and other financial risks associated with construction programs at Camp Humphreys—funded either by direct appropriations or through alternative financing methods such as the Humphreys Housing Opportunity Program—that are affected by decisions related to tour normalization until a business case analysis for the strategic objectives that have to this point driven the decision to implement tour normalization in South Korea is reviewed and the most cost-effective approach is approved by the Secretary of Defense.

Direct the Secretaries of the military departments to take the following three actions with respect to annual cost estimates:

- Develop annual cost estimates for DOD posture in the U.S. Pacific Command area of responsibility that provide a comprehensive assessment of posture costs, including costs associated with operating and maintaining existing posture as well as costs associated with posture initiatives, in accordance with guidance developed by the Under Secretary of Defense (Comptroller).
- Provide these cost estimates to the Combatant Commander in a time frame to support development of the annual theater posture plan.
- Provide these cost estimates to the Offices of the Under Secretary of Defense (Comptroller) and the Under Secretary of Defense (Policy) to support DOD-wide posture deliberations, affordability analyses, and reporting to Congress.

In written comments on a draft of this report, DOD fully agreed with six of our recommendations, partially agreed with one recommendation, and stated it would work with DOD components to implement the recommendations. However, DOD did not indicate the specific steps or time frames in which corrective actions would be taken. 
Specifics regarding DOD’s corrective actions and time frames for completion are important to facilitate congressional oversight and can provide reasonable assurance that DOD will take all appropriate measures to mitigate financial risks and better define future requirements. DOD agreed with our three recommendations to complete a business case analysis for the strategic objectives that have, to this point, driven the decision to implement tour normalization in South Korea; set specific time frames for the completion of the business case analysis; and account for the funds currently being applied and requested to construct new facilities at Camp Humphreys. In its response, DOD acknowledged that while USFK has completed numerous analyses concerning tour normalization, DOD agrees there is value in conducting a business case analysis that assesses alternatives for achieving its strategic objectives. However, DOD provided no specifics on the steps or time frames it would follow to implement these corrective actions. DOD also agreed with our recommendations to develop annual cost estimates for DOD posture in the U.S. Pacific Command area of responsibility; provide these cost estimates to the Combatant Commander in a time frame to support development of the annual theater posture plan; and provide these cost estimates to the Offices of the Under Secretary of Defense (Comptroller) and the Under Secretary of Defense (Policy) to support DOD-wide posture deliberations, affordability analyses, and reporting to Congress. However, DOD provided no specifics on the steps or time frames it would follow to implement these corrective actions. 
DOD partially agreed with our recommendation to identify and limit investments and other financial risks associated with construction programs at Camp Humphreys—funded either by direct appropriations or through alternative financing methods such as the Humphreys Housing Opportunity Program—that are affected by decisions related to tour normalization until a business case analysis for the strategic objectives is reviewed and the most cost-effective approach is approved by the Secretary of Defense. DOD stated it will identify and consider limiting the investments and other financial risks, while examining the diplomatic and fiscal implications of such decisions. While we agree it is prudent to examine the implications of decisions to limit investments and financial risks, DOD provided no specifics on the steps or time frames it would follow to implement this corrective action. Without specific implementation time frames for a business case analysis that are synchronized with planned investment decisions, DOD may not be in a position to effectively limit actions and investments to expand housing at Camp Humphreys planned for this fiscal year if the business case analysis proves those investments to be inappropriate. We also provided the Department of State with a draft of this report for official comment, but it declined to comment because the report contains no recommendations for the State Department. DOD and State provided technical comments separately, which were incorporated into the report as appropriate. DOD’s written comments are reprinted in appendix IV. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, and appropriate DOD organizations. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. 
Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

To determine the magnitude of cost associated with the major global defense posture initiatives ongoing and planned on the Korean peninsula and the process by which the decision was made to move forward with the largest of these initiatives—tour normalization—we interviewed and collected data from officials in the Office of the Under Secretary of Defense (Policy), the Under Secretary of Defense (Comptroller), the Deputy Under Secretary of Defense (Installations and Environment), and the Joint Staff; the Department of the Army and the Department of the Air Force; PACOM and the Army, Navy, Marine Corps, and Air Force component commands; and United States Forces Korea and its Army and Air Force service components. We conducted interviews and collected data from officials at the U.S. Army Garrison Yongsan, U.S. Army Garrison Humphreys, and Osan Air Base. We also met with U.S. officials at the U.S. Embassy in Seoul, South Korea. We collected planning and cost information at military service headquarters, PACOM, United States Forces Korea, and United States Forces Korea’s Army and Air Force service components. For initiatives in Korea, USFK officials provided high-level cost estimates, which included assumptions related to the use of host-nation support funding and host-nation costs, which in some cases were constantly changing or not yet approved. Army headquarters officials provided us with detailed estimates of tour normalization costs extended to 2050, and stated those estimates were the official position of the Department of the Army on tour normalization costs. 
We compiled this initiative information, including available cost information and assumptions related to host-nation funding, in order to identify the magnitude of DOD’s initiatives and their potential costs. We converted host-nation funding to U.S. dollars using exchange rates published in the 2011 Economic Report of the President. We discussed the cost information we received with officials in USFK and the Office of the Secretary of Defense (Comptroller) and determined that although the information was incomplete, it was sufficiently reliable to provide an order-of-magnitude estimate of the potential cost of each initiative and therefore was adequate for the purposes of our review, subject to the limitations discussed in this report. Once we consolidated initiative description and cost information, we provided our summaries back to the cognizant DOD offices to ensure we had appropriately interpreted the data they provided. To determine whether tour normalization was supported by a business case analysis, we interviewed and collected data from the Office of the Under Secretary of Defense (Policy), the Department of the Army, and United States Forces Korea officials. Additionally, we collected and analyzed documentation, including the current and previous versions of the Quadrennial Defense Review, OSD policy documents related to tour normalization, and strategic documentation referencing the decision to move forward with tour normalization. We then compared DOD’s approach to criteria established in the GAO Cost Estimating and Assessment Guide. 
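The currency-conversion step mentioned above (converting won-denominated host-nation funding to U.S. dollars at a published annual exchange rate) reduces to a simple division. The sketch below illustrates it; the exchange rate and amounts are illustrative assumptions, not figures from the 2011 Economic Report of the President.

```python
# Illustrative sketch of converting host-nation funding from South Korean
# won to U.S. dollars. The rate and won amount below are invented
# placeholders, not published figures.
KRW_PER_USD = 1156.0  # assumed annual average exchange rate (illustrative)

def krw_to_usd(amount_krw: float, rate: float = KRW_PER_USD) -> float:
    """Convert a won-denominated amount to U.S. dollars at the given rate."""
    return amount_krw / rate

# e.g., converting an assumed 1.2 trillion won in host-nation funding
usd = krw_to_usd(1.2e12)
print(f"${usd / 1e9:.2f} billion")
```

In practice the choice of rate (annual average versus spot) and the volatility of the won would themselves be sources of estimate uncertainty, which is consistent with the report's caution that these figures are order-of-magnitude only.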
To determine the magnitude of cost associated with the major global defense posture initiatives ongoing and planned in Japan, Guam, and the Northern Mariana Islands, we interviewed and collected data from officials in the Office of the Under Secretary of Defense (Policy), the Office of the Under Secretary of Defense (Comptroller), the Office of the Deputy Under Secretary of Defense (Installations and Environment), and the Joint Staff; the Department of the Army, the Department of the Navy, and the Department of the Air Force; PACOM and the Army, Navy, Marine Corps, and Air Force component commands; United States Forces Japan and its military service components, including Marine Corps Bases Japan; and the Joint Guam Program Office. We conducted interviews and collected data from officials at Yokota Air Base, Camp Zama, and Fleet Activities Yokosuka, and on Okinawa at Camps Schwab, Butler, and Courtney, and Marine Corps Air Station Futenma. We also met with U.S. officials at the U.S. Embassy in Tokyo, Japan, and the U.S. Consulate in Naha, Okinawa. At all appropriate offices included in our review, including the Office of the Secretary of Defense, PACOM and its service component commands, USFJ and its component commands, and specific military facilities visited, we requested comprehensive DOD cost estimates for each posture initiative and were told that comprehensive cost estimates for each initiative did not exist. As a result, we collected planning information, any cost information that was available, and initiative status information. For initiatives in Japan, DOD officials provided information based on budget estimates prepared by the Government of Japan, but provided only limited estimates of costs to the United States. 
We discussed this cost information with officials at USFJ and the Office of the Secretary of Defense (Comptroller) and determined that although the information was incomplete, it was sufficiently reliable to provide an order-of-magnitude estimate of the potential cost of each initiative, and therefore was adequate for the purposes of our review, subject to the limitations discussed in this report. We compiled the data, including cost information, from all locations in order to assemble a full description of the initiatives and any identified cost. We analyzed and compared the cost information received with criteria established in the GAO Cost Estimating and Assessment Guide. Additionally, to provide us with more comprehensive information on the military buildup on Guam, we interviewed and collected data from the Joint Guam Program Office and used information developed through other related GAO work. To determine the extent to which DOD develops comprehensive estimates of the total cost of defense posture in Asia to inform the decision-making process, we interviewed and collected data from officials in the Office of the Under Secretary of Defense (Policy), the Under Secretary of Defense (Comptroller), the Deputy Under Secretary of Defense (Installations and Environment), and the Joint Staff; the Department of the Army, the Department of the Navy, and the Department of the Air Force; PACOM and its Army, Navy, Marine Corps, and Air Force component commands; United States Forces Japan and its military service components; United States Forces Korea and its Army and Air Force service components; and the Joint Guam Program Office. We also reviewed the 2009 and 2010 DOD Global Defense Posture Reports to Congress, including the sections addressing posture costs, and sections of the 2010 PACOM Theater Posture Plans. 
We also reviewed budget documentation, including the military construction appropriations component of the President’s Budget request for fiscal years 2010 and 2011. Furthermore, we issued data requests asking for actual obligations and projected requirements data on military construction, family housing, and operations and maintenance appropriations related to installations as part of DOD’s defense posture in Asia for fiscal years 2006 through 2015. We obtained data from the Departments of the Army, Navy, and Air Force and their PACOM service component commands, including the Marine Corps. After we received the data and consolidated them by military service, we sent this information back to the services that had provided it to ensure we had appropriately interpreted the data. After receiving validated data from all of the services, we aggregated and analyzed them. To assess the reliability of the cost data received during this data call, we reviewed data system documentation and obtained written responses to questions regarding the internal controls on the systems. To ensure the accuracy of our analysis, we used Statistical Analysis Software (SAS) when analyzing the data and had the programming code used to complete those analyses verified for logic and accuracy by an independent reviewer. Furthermore, we reviewed previous GAO reporting on overseas basing, military construction, the uses of cost information when making decisions about programs, and guidance on cost estimating and the basic characteristics of credible cost estimates. Given the various steps discussed above to assess the quality of the cost data, cost estimates, and other data used, we determined the data were sufficiently reliable for purposes of this report. We conducted this performance audit from November 2009 through April 2011 in accordance with generally accepted government auditing standards. 
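As a rough illustration of the consolidation step described above, the sketch below sums obligations by service and appropriation and cross-checks the grand total, analogous in spirit to the independent verification performed on the actual analysis code. All records are invented placeholders, and the sketch is written in Python rather than the SAS used for the actual analysis.

```python
# Illustrative sketch of consolidating obligations data by military service
# and appropriation, with a cross-check on the grand total.
# Every record below is an invented placeholder, not actual DOD data.
from collections import defaultdict

records = [
    # (service, appropriation, fiscal_year, obligations in $ millions)
    ("Army", "O&M", 2008, 1200.0),
    ("Army", "MILCON", 2008, 300.0),
    ("Navy", "O&M", 2008, 900.0),
    ("Air Force", "O&M", 2009, 800.0),
]

# Aggregate obligations across fiscal years by (service, appropriation).
totals = defaultdict(float)
for service, approp, _year, amount in records:
    totals[(service, approp)] += amount

for key in sorted(totals):
    print(key, totals[key])

# Verification step: the aggregated grand total must equal the sum of
# all input records, or the consolidation logic is wrong.
assert abs(sum(totals.values()) - sum(r[3] for r in records)) < 1e-9
```

The final assertion is the kind of logic-and-accuracy check an independent reviewer of the analysis code would perform.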
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Committee Reporting Direction Contained in Senate Report 111-226 (S. Rep. No. 111-226, at 13-15 (2010))

In order to provide Congress with comprehensive and routine information on the status of these major DOD posture initiatives in a manner that can be used to provide the appropriate context for budget deliberations and oversight, the Committee directs the Department to provide detailed annual updates on the status of posture restructuring initiatives in Korea, Japan, Guam, and the initiative that will address training capabilities and capacity in the Pacific region as an appendix to the annual DOD Global Posture Report. These initiative status updates should be provided annually, beginning with the submission of the fiscal year 2012 budget request, until the restructuring initiatives are complete and/or funding requirements to support them are satisfied. The initiative status updates should address the following areas:

- Initiative Description—an overall description of each initiative, the major components of the initiative, the relationships between each component and the overall successful completion of the initiative, a program baseline that provides an estimated total cost of the initiative, expected completion date, and the basis for pursuing the initiative that is clearly linked to specific DOD strategic goals and objectives defined by the Secretary of Defense, Military Departments, Combatant Commander, or Service Component Commands, as well as the DOD organization responsible for managing and executing the initiative. 
- Schedule Status—a comparison of the current estimated time frame to complete the overall initiative and major components of the initiative with original baseline estimates and the currently approved schedule. An explanation of changes in the estimated completion date or changes in the approved schedule should be provided.
- Facilities Requirements—a comparison of the baseline and current projected number of facilities required to provide appropriate work space, housing, and support services to the population DOD anticipates it will be supporting, including facilities, family housing, commissaries/post exchanges, schools, child care, clinics and hospitals, and any other facility that will be needed to support the military, civilian employee, local national employee, contractor, and retiree population.
- Cost Summary—a comparison of the baseline, approved program, and current estimated costs by appropriation; expressed in base year and then-year dollars, addressing all costs associated with establishing, modifying, and sustaining DOD’s posture under this initiative, including costs such as the housing allowance provided to military service members and families that are then paid to external organizations for housing.
- Funding Summary—a listing of the funding profile, by appropriation, for the initiative, based on the current year President’s Budget detailing prior years, current year, future years defense program, and costs to complete; expressed in then-year dollars. 
All funding requirements associated with the initiative should be addressed, including, but not limited to, military construction, operations and support, and personnel appropriations.

- Initiative Estimate Assumptions—the key assumptions that drive initiative cost and schedule estimates, including:
  - Population, including the number of military, civilian, non-DOD personnel, command-sponsored families and dependents, non-command-sponsored families and dependents, and military retirees affected by the initiative.
  - Housing, including the use of public/private partnerships to provide necessary facilities, percentage of personnel and dependents expected to reside in base housing and off the base or installation, availability of host-nation land for construction of facilities, and the anticipated host-nation funded and/or provided housing construction.
  - Cost Estimating, including modeling used to predict costs, inflation estimates used for then-year dollar projections, and contracting strategy.
  - Financial, including the funding that will be available and provided by military services and other DOD agencies affected by the initiative to cover their respective costs, including the expected overseas base housing allowance that will be provided to military families.
  - Medical, including the extent to which each military base or installation will have stand-alone medical treatment facilities, will share medical treatment facilities or capacity, the services provided (medical, dental, vision), dates new facilities will be available for use, ratio of primary care providers to population, and any other element that drives the number of medical treatment facilities and associated infrastructure or personnel required to support the population. 
  - Education, including the estimated number of children per family, student distribution by grade level, tuition assistance that will be required/provided, assumptions used to develop related Department of Defense Education Activity cost factors, and any other element that drives the number of schools and associated infrastructure or personnel required to support the population.
  - Support Services, including capacities of commissaries, exchanges, USO, Red Cross or other support services or organizations, necessary modifications to their existing facilities, and sources of funding necessary to pay for any needed improvements or new construction.
  - Local Community Support, including the extent to which local business, housing, medical treatment, education, and other support services will be available and necessary to support the expected DOD population.
  - Host-Nation Agreements, including any specific agreements with host nations or legal issues that establish or drive specific time frames for completion of the initiative or major components of the initiatives.

S. Rep. No. 111-226, at 13-15 (2010).

In addition to the contact named above, Robert L. Repasky, Assistant Director; Jeff Hubbard; Joanne Landesman; Ying Long; Greg Marchand; Richard Meeks; Charles Perdue; Lisa Reijula; Terry Richardson; Michael Shaughnessey; and Amie Steele made key contributions to this report.

Defense Infrastructure: The Navy Needs Better Documentation to Support Its Proposed Military Treatment Facilities on Guam. GAO-11-206. Washington, D.C.: April 5, 2011.

Defense Management: Additional Cost Information and Stakeholder Input Needed to Assess Military Posture in Europe. GAO-11-131. Washington, D.C.: February 3, 2011.

Defense Planning: DOD Needs to Review the Costs and Benefits of Basing Alternatives for Army Forces in Europe. GAO-10-745R. Washington, D.C.: September 13, 2010. 
Defense Management: Improved Planning, Training, and Interagency Collaboration Could Strengthen DOD’s Efforts in Africa. GAO-10-794. Washington, D.C.: July 28, 2010.

Defense Management: U.S. Southern Command Demonstrates Interagency Collaboration, but Its Haiti Disaster Response Revealed Challenges Conducting a Large Military Operation. GAO-10-801. Washington, D.C.: July 28, 2010.

National Security: Interagency Collaboration Practices and Challenges at DOD’s Southern and Africa Commands. GAO-10-962T. Washington, D.C.: July 28, 2010.

Defense Infrastructure: Guam Needs Timely Information from DOD to Meet Challenges in Planning and Financing Off-Base Projects and Programs to Support a Larger Military Presence. GAO-10-90R. Washington, D.C.: November 13, 2009.

Defense Infrastructure: DOD Needs to Provide Updated Labor Requirements to Help Guam Adequately Develop Its Labor Force for the Military Buildup. GAO-10-72. Washington, D.C.: October 14, 2009.

Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009.

Force Structure: Actions Needed to Improve DOD’s Ability to Manage, Assess, and Report on Global Defense Posture Initiatives. GAO-09-706R. Washington, D.C.: July 2, 2009.

Defense Infrastructure: Planning Challenges Could Increase Risks for DOD in Providing Utility Services When Needed to Support the Military Buildup on Guam. GAO-09-653. Washington, D.C.: June 30, 2009.

Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: February 20, 2009.

Defense Infrastructure: Opportunity to Improve the Timeliness of Future Overseas Planning Reports and Factors Affecting the Master Planning Effort for the Military Buildup on Guam. GAO-08-1005. Washington, D.C.: September 17, 2008. 
Force Structure: Preliminary Observations on the Progress and Challenges Associated with Establishing the U.S. Africa Command. GAO-08-947T. Washington, D.C.: July 15, 2008.

Defense Infrastructure: Planning Efforts for the Proposed Military Buildup on Guam Are in Their Initial Stages, with Many Challenges Yet to Be Addressed. GAO-08-722T. Washington, D.C.: May 1, 2008.

Defense Infrastructure: Overseas Master Plans Are Improving, but DOD Needs to Provide Congress Additional Information about the Military Buildup on Guam. GAO-07-1015. Washington, D.C.: September 12, 2007.

Military Operations: Actions Needed to Improve DOD’s Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007.

Defense Management: Comprehensive Strategy and Annual Reporting Are Needed to Measure Progress and Costs of DOD’s Global Posture Restructuring. GAO-06-852. Washington, D.C.: September 13, 2006.
The Department of Defense (DOD) is currently conducting the largest transformation of military posture in the Pacific region since the end of World War II. Transforming posture in Korea, Japan, and Guam will affect tens of thousands of military personnel and their families and require the construction of hundreds of new facilities and more than 3,500 housing units. GAO was asked to examine (1) initiatives in Korea, their cost implications, and the basis for "tour normalization"; (2) initiatives in Japan and Guam and their cost implications; and (3) the extent to which DOD estimates the total cost of posture and addresses affordability issues. GAO assessed DOD policies and procedures, interviewed relevant DOD and State Department officials, and analyzed cost data from the military services.

DOD is transforming the facilities and infrastructure that support its posture in Asia without the benefit of comprehensive cost information or an analysis of alternatives that are essential to conducting affordability analysis. In South Korea, DOD is transforming its military posture through a series of four interrelated posture initiatives. GAO obtained DOD cost estimates that total $17.6 billion through 2020 for initiatives in South Korea, but DOD cost estimates are incomplete. One initiative, to extend the tour length of military service members and move thousands of dependents to South Korea—called "tour normalization"—could cost DOD $5 billion by 2020 and $22 billion or more through 2050, but this initiative was not supported by a business case analysis that would have considered alternative courses of action and their associated costs and benefits. As a result, DOD is unable to demonstrate that tour normalization is the most cost-effective approach to meeting its strategic objectives. This omission raises concerns about the investments being made in a $13 billion construction program at Camp Humphreys, where tour normalization is largely being implemented. 
DOD is also transforming its military posture in Japan, Okinawa, and Guam but has not estimated the total costs associated with these initiatives. Based on an October 2006 Government of Japan budget estimate study for realignment costs and limited cost information developed by DOD, GAO identified approximately $29.1 billion—primarily just construction costs—that is anticipated to be shared by the United States and Japan to implement these initiatives. DOD officials stated total cost estimates for its initiatives were not available because of the significant uncertainty surrounding initiative implementation schedules. The Senate Appropriations Committee recently directed DOD to provide annual status updates on posture initiatives in Korea, Japan, Guam, and the Northern Mariana Islands. If DOD is fully responsive to the Committee's reporting direction, these updates should provide needed visibility into initiative cost and funding requirements. DOD's posture planning guidance does not require the U.S. Pacific Command to include comprehensive cost data in its theater posture plan, and as a result, DOD lacks critical information that could be used by decision makers as they deliberate on posture requirements and affordability. GAO analysis shows that of the approximately $24.6 billion obligated by the military services to support installations in Asia from 2006 through 2010, approximately $18.7 billion (76 percent) was for operation and maintenance of these facilities. The services estimate that operation and maintenance costs would be about $2.9 billion per year through 2015. However, this estimate appears to be understated, and DOD's initiatives may significantly increase those costs. For example, DOD has yet to estimate costs associated with furnishing and equipping approximately 321 new buildings and 578 housing units in Okinawa. 
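The operation and maintenance share cited above can be verified directly from the report's rounded totals:

```python
# Arithmetic check of the share cited above: of about $24.6 billion
# obligated by the military services for installations in Asia from
# 2006 through 2010, about $18.7 billion went to operation and
# maintenance. (Both figures are the report's rounded totals.)
total_obligations = 24.6  # $ billions, FY2006-2010
om_obligations = 18.7     # $ billions, operation and maintenance

om_share = om_obligations / total_obligations
print(f"O&M share: {om_share:.0%}")  # rounds to 76 percent, as reported
```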
Without comprehensive and routine reporting of posture costs, DOD decision makers will not have the full fiscal context in which to develop posture plans and requirements, and congressional committees will lack a full understanding of the potential funding requirements associated with DOD budget requests. GAO recommends that DOD develop a business case analysis for its strategic objectives related to tour normalization in Korea, limit investments at Camp Humphreys until the business case is completed, and develop comprehensive cost estimates of posture in the Pacific. DOD generally agreed with GAO's recommendations, but it did not specify what corrective actions it would take or time frames for completion.
Within DOD, the military services and defense agencies are responsible for installation management, with oversight by the office of the Assistant Secretary of Defense for Energy, Installations, and Environment, who reports to the Under Secretary of Defense for Acquisition, Technology and Logistics. The office of the Assistant Secretary of Defense for Energy, Installations, and Environment is responsible for—among other things—issuing facility energy policy and guidance to DOD components and coordinating all congressional reports related to facility energy, including the Energy Reports. In addition, each military service is responsible for developing policies and managing programs related to energy and utility management, and has assigned a command or headquarters to execute these responsibilities. The defense agencies also develop policies and manage energy programs, and each has a designated senior energy official to administer their respective programs. At the installation level, the public works, general facilities, or civil engineering departments oversee and manage the day-to-day energy operations. DOD undergoes an annual process to report on energy data in its Energy Reports, collecting data required by section 2925 of Title 10 of the United States Code for the reports over a 5-month time period. The overall process, with participation by installations, military service headquarters, defense agencies, and OSD, is detailed in figure 1. Across the military services, energy security is considered critical for mission assurance. Energy security is defined by 10 U.S.C. § 2924 as having assured access to reliable supplies of energy and the ability to protect and deliver sufficient energy to meet mission essential requirements. There are multiple ways, although not all are mutually exclusive, to help ensure energy security at installations, including:

Diversification of energy sources. To help ensure energy security, installations may seek to obtain energy from multiple sources to prevent reliance on a single source. This may include natural gas, petroleum, coal, and incorporation of renewable sources of energy—e.g., wind, solar, and biodiesel.

Use of renewable energy. Installations may work to incorporate renewable energy sources as a way to lessen dependence on the grid, lower energy costs, and increase utility resilience in the event of an outage. For example, renewable energy may be used to power a microgrid, in which the installation can disconnect from the utility grid during an outage and run solely on the renewable energy stored.

Energy redundancy. Installations may seek assured access to reliable energy through back-up energy sources that may be used in the event of an outage, such as on-site generators and power plants.

Energy conservation. Installations may use energy conservation initiatives as a way to reduce energy consumption, lower energy costs, and ensure that sufficient funds are in place to meet future energy requirements.

DOD installations may use one or more of these approaches to help ensure energy security. Each installation’s efforts to help ensure energy security may vary depending on its location, staff resources and funding available, and the nature of energy vulnerabilities identified. According to the U.S. Energy Information Administration, there is not a single national power grid in the United States. Instead, there are three synchronized power grids that cover the 48 contiguous states that are loosely interconnected with each other: (1) the Eastern Interconnection (serving states generally east of the Rocky Mountains), (2) the Western Interconnection (spanning the area from the Pacific Ocean to the Rocky Mountain states), and (3) a system that serves nearly all of Texas. The electricity systems in Alaska and Hawaii operate independently of the three continental grids and of each other (see fig. 2).
In particular, there are several distinct electrical systems within Alaska and Hawaii that cover only portions of the states, such as the interconnections serving Anchorage, Fairbanks, and the Kenai Peninsula in Alaska and the individual islands in Hawaii. Energy-remote installations in Alaska and Hawaii differ in some important ways from the installations located in the 48 contiguous states. For example, the cost of energy at energy-remote installations is high in comparison to the cost of energy at installations in the 48 contiguous states. According to the U.S. Energy Information Administration, Hawaii had the highest cost of electricity in the United States in 2013 and 2014, with the average price for commercial customers more than triple the U.S. average. Moreover, in 2013, Hawaii imported 91 percent of the energy it consumed—mostly as oil-based fuels—making it vulnerable to price fluctuations in the energy market and disruptions to the transportation of fuels. In 2013 and 2014, Alaska had the second-highest cost of electricity in the United States, with the average price for commercial customers 64 and 68 percent higher, respectively, than the U.S. average. In addition, the U.S. Energy Information Administration stated that in many areas of Alaska, commercially supplied electricity is not available and consumers must generate their own electricity, sometimes using diesel generators, which have a high cost of operations. Given Alaska’s extreme weather environments, its energy demand per person is the third highest in the nation. Of the 12 reporting requirements for DOD’s Energy Report, our analysis showed that the department fully addressed 6, partially addressed 4, and did not address 2. The requirements fully addressed included describing actions taken to implement the energy performance master plan and energy savings realized from such actions, among other requirements.
The requirements partially addressed included describing progress made to achieve three of five energy goals; a table detailing funding, by account, for all energy projects funded through appropriations; a table listing all energy projects financed through third-party financing mechanisms; and details of utility outages at military installations. The requirements not addressed were information on renewable energy certificates associated with energy projects financed through third-party financing mechanisms and a description of the types and amount of financial incentives received. According to OSD officials, these requirements were not fully addressed for a number of reasons, such as inclusion of the information in another report and concerns about public release. However, the Energy Report itself did not note that the information could be found elsewhere or that public release concerns applied, which would have clarified why the required elements were omitted. Table 1 below summarizes our assessment of the extent to which DOD’s report included each of the required reporting elements. Appendix II includes our detailed evaluation of each of the required reporting elements, including the reasons OSD officials provided for any requirements that were not fully addressed. We found that the required reporting elements were not all met because OSD’s process for producing the Energy Report did not ensure this occurred. Specifically, in 2011, OSD developed its current process for collecting energy data and producing the Energy Report, including a standard format that it populates each year with updated narrative and energy data. This process, however, did not account for certain steps. For example, the process step of deciding what data to collect from the installations did not identify all data to be captured to fulfill the requirements. OSD’s guidance and template for collecting energy data did not include instructions to collect these data.
As a result, OSD did not have comprehensive data to report on requirements such as financial incentives and renewable energy certificates received from utility energy service contracts and energy savings performance contracts. Additionally, OSD’s process step for consolidating specific requirements into the written report had not been reexamined in several years, resulting in some requirements remaining unaddressed. Specifically, the decisions OSD made in 2011 for consolidating requirements into the Energy Report have not been updated or examined. For example, our review of the fiscal year 2014 Energy Report, issued in May 2015, found that many of the required reporting elements that were not fully addressed in the fiscal year 2013 Energy Report were also not fully addressed in the fiscal year 2014 Energy Report. Standards for Internal Control in the Federal Government call for agencies to update internal control activities when necessary to provide reasonable assurance for effectiveness of operations and compliance with applicable laws and regulations. Without further updates or examination of OSD’s process for producing the Energy Report, DOD is at risk of future annual reports also falling short of providing congressional decision makers with a complete and accurate understanding of the extent to which DOD has fulfilled select energy performance goals. In our review of DOD’s Energy Report, we found that the underlying data correctly reflected input from the military services and defense agencies. However, DOD’s report was not fully reliable because the data and other inputs the military services and defense agencies provided were captured using different methods and thus hindered comparability. In our review of DOD’s fiscal year 2013 Energy Report, we found that the vast majority of the data and other input submitted by the military services and defense agencies were correctly reflected in the published Energy Report. 
Any inaccuracies we found were insignificant. Specifically, in comparing the data submitted by the military services and defense agencies to the published Energy Report, we found 2 inaccuracies out of nearly 2,000 data inputs provided. For example, DOD received information about energy consumption and cost by square footage from 705 installations and facilities. However, DOD did not include in the published report information on four facilities from the National Reconnaissance Office and one facility from the Air Force—an exclusion of less than 1 percent of the total number of installations that could have been reported. DOD responded that it chose not to include installation data for sensitivity reasons. Additionally, we found DOD incorrectly published in the Energy Report 1 out of 1,288 appropriated projects as contributing to energy efficiency goals rather than renewable energy goals. However, in July 2015, we reported on material inaccuracies in duration and cost data on utility disruptions reported in DOD’s fiscal year 2012 and 2013 Energy Reports. Regarding the duration of disruptions, we reported that three of the four military services reported some disruptions that were less than the DOD criteria of commercial utility service disruptions lasting 8 hours or longer. According to a DOD official, these disruptions constituted about 12 percent of the 266 disruptions DOD reported in the fiscal year 2012 and 2013 Energy Reports. Regarding the cost of disruptions, we reported that $4.63 million of the $7 million in utility disruption costs reported by DOD in its fiscal year 2012 Energy Report were indirect costs, such as lost productivity, although DOD had directed that such costs not be reported. We recommended, among other things, that DOD improve the effectiveness of data validation steps in its process for collecting and reporting utilities disruption data in order to improve the comprehensiveness and accuracy of certain data reported in the Energy Reports. 
DOD concurred with our recommendation but did not provide information on the timeline or specific actions it plans to take to implement the recommendation. To date, no action has been taken to address this recommendation, but DOD stated that it expects to implement the recommendation by April 2016. OSD, each of the four military services, and several defense agencies mentioned difficulties with conducting a quality data review. Specifically, officials said the time frames were too short and resources too limited to conduct a thorough review. For example, Marine Corps officials said they scan data submitted by the installations for obvious errors, but OSD’s review process is more rigorous. Similarly, the Navy told us it relies heavily on OSD’s data reliability efforts. An OSD official and certain military services’ officials also explained that—in their limited time to validate all of the data included in the Energy Reports—they prioritize validation of certain data types, such as utilities disruption data. To conduct their review, OSD officials said that they compared the fiscal years 2012 and 2013 Energy Report data to see if there were any major differences. The officials also compared data for consistency among similar data entries, such as renewable energy consumption, that were sent by each military service and defense agency in two different workbook submissions. From this review, the officials identified specific areas of concern and sent a three-to-four page questionnaire to each of the military services and defense agencies. The officials estimated they received about a 90 percent response rate and were able to make many edits to the data. They added that their review time was too limited to correct everything that might have been inaccurate, but from their perspective any inaccuracies would most likely be statistically insignificant.
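The year-over-year comparison that OSD officials described can be illustrated with a brief sketch. The function name, field names, and 25 percent threshold below are hypothetical, not part of OSD's actual review tooling; the sketch simply flags entries whose reported values changed sharply between two fiscal years, the kind of "major differences" the officials said they looked for.

```python
# Hypothetical sketch of a year-over-year consistency check: flag entries
# whose values changed by more than a set fraction between two reporting
# years. All names, figures, and the threshold are illustrative.

def flag_large_changes(prior, current, threshold=0.25):
    """Return entries whose value changed by more than `threshold`
    (as a fraction of the prior-year value) between two reports."""
    flagged = {}
    for key, prior_value in prior.items():
        current_value = current.get(key)
        if current_value is None or prior_value == 0:
            continue  # no basis for a fractional comparison
        change = abs(current_value - prior_value) / prior_value
        if change > threshold:
            flagged[key] = (prior_value, current_value, round(change, 2))
    return flagged

# Illustrative (invented) consumption figures for two fiscal years.
fy2012 = {"Base A electricity (MWh)": 1200, "Base B electricity (MWh)": 800}
fy2013 = {"Base A electricity (MWh)": 1250, "Base B electricity (MWh)": 300}

print(flag_large_changes(fy2012, fy2013))
```

In this sketch, Base B's sharp drop would be flagged and could prompt a follow-up question to the reporting service, while Base A's small change would pass unremarked.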
In July 2015, we found that, based on our review of the fiscal year 2014 utilities resilience data submitted by the military services to OSD—and OSD’s data validation efforts—the accuracy of some of DOD’s data may be improving. This improvement, along with actions to implement our recommendation to further improve the effectiveness of data validation steps, may provide the Congress better oversight of the efforts being undertaken by DOD. We found that the military services and defense agencies captured and reported data using different methods in three areas of the Energy Report: energy consumption of tenants and hosts, energy projects, and end-of-fiscal-year data. This situation—which ultimately affects all data presented in the Energy Report—occurred because guidance was either unclear or lacking. In previous work examining, among other things, DOD’s efforts to effectively implement existing guidance, we found that clear and complete guidance is important for its effective implementation. Without collecting and reporting data using consistent methods, decision makers in DOD will be hindered in their ability to plan effectively for steps to reach energy goals, and Congress will have limited oversight of the department’s energy consumption and difficulty in comparing energy projects among those reporting. Energy consumption of tenants and hosts. At several installations, DOD components may serve as either tenants, in which they rent space from another federal agency or a private organization, or hosts, in which they lease space to other agencies or organizations. The Energy Report guidance states that a host will report energy consumption, unless there is a mutual agreement between the host and the DOD tenant to report otherwise. However, we found that limited instructions in the guidance led to different reporting methodologies among and within the military services and defense agencies regarding tenant and host energy reporting. 
The guidance did not state that the military services or defense agencies should identify if they were tenants or hosts at each installation, how much energy they were reporting for tenants, or if they were splitting reporting among different energy types, such as having the host report all electrical consumption but the tenant report water and petroleum consumption. For example, for facilities in which the Defense Intelligence Agency served as tenants, the facilities either reported all energy consumption or did not report any energy consumption, assuming instead that the host would report. In contrast, all tenant facilities from the Defense Commissary Agency reported energy consumption that was separately metered or billed and assumed that the host reported energy consumption that was not separately billed. As a result, it is difficult to get a clear understanding of all the data presented in the Energy Report and challenging to compare it among the installations that reported. Figures 3 and 4 identify some of the different reporting methods used by the four military services and 10 defense agencies for tenant and host energy reporting. Energy projects. The Energy Report lists energy conservation, renewable energy, and water conservation projects. However, throughout the report we found that the four military services and 10 defense agencies reported these projects inconsistently (see fig. 5) because the guidance for the Energy Report does not identify at what levels they should be reported. Entities reported energy projects by installation, facility/building, project type, funding mechanism, or other means. For example, the Navy stated that it might consolidate 10 smaller solar energy projects into 1 larger solar project for reporting purposes, whereas the Marine Corps stated that it does not track by project type but rather by installation, building, and energy type. 
These different methods of reporting energy projects make it difficult to clearly understand the size and scope of the projects as well as compare the projects among those reported. End-of-fiscal-year data. We found that the military services and defense agencies used a variety of methods for reporting their end-of-fiscal-year energy data—and, in some cases, installations within each military service reported their end-of-fiscal-year energy data using different methods. For example, because OSD requires data inputs by mid-November, some military services required initial submissions from the installations by mid-October, which is before some energy utility bills have been received. As a result, some installations estimate end-of-fiscal-year usage, and the estimates may be based on different factors, including previous month data, historical data, or data from a month with similar weather patterns. Additionally, because utility bills may straddle months (such as from mid-September through mid-October), some military services and installations chose to report according to the utility bills rather than the fiscal year. In contrast, some installations have meters installed and report actual usage for the fiscal year. Figure 6 identifies the different methods used by the four military services and 10 defense agencies to report end-of-fiscal-year data. In our review of actual energy consumption data from a nongeneralizable sample of installations, we found some examples of how different methods of collecting data led to different reporting results. For example, the Navy’s Joint Base Anacostia-Bolling in Washington, D.C., used estimates to determine its annual energy costs. In contrast, the Defense Finance and Accounting Service, National Reconnaissance Office, and Defense Contract Management Agency each reported actual fiscal year usage, not estimates. However, the Energy Report did not annotate when estimates were used.
Furthermore, installations used different approaches to estimate end-of-fiscal-year data. For example, Navy installations used previous year data to make their estimates while some Air Force installations estimated based on a specific month with similar weather patterns. As a result, the data presented throughout the Energy Report cannot be reliably compared among the military services and defense agencies. The guidance for the energy report did not identify how the military services and defense agencies should report energy data when it cannot reflect actual amounts for the full fiscal year. Additionally, the guidance did not identify how corrections can be made, if at all. For example, Navy officials told us they reported estimated consumption for all installations in the initial submission to OSD, and that although updated data was available by the December data quality review process with OSD, they were not allowed to make corrections because the estimated data had already been reviewed. By not providing guidance on how to report energy data when an installation cannot reflect actual data for the full fiscal year for the Energy Report, it is difficult to accurately compare data among the military services and defense agencies. OSD officials told us that they do not include additional instructions in the guidance for the Energy Report to the military services and defense agencies regarding energy consumption of tenants and hosts, energy projects, and end-of-fiscal-year data collection and reporting. In some cases, OSD officials stated that it would be difficult to provide guidance. For example, they stated that each installation may receive utility bills at different intervals, such as monthly or quarterly, making it challenging to provide specific guidance on how to accurately report energy consumption for the end of the fiscal year. 
However, currently there are no instructions that require installations to identify their end-of-fiscal-year reporting methods so that OSD, the military services, and the defense agencies can identify if different reporting intervals exist. As a result, DOD is not in a position to identify in the Energy Report where different data reporting methods were used and what data may not be comparable among the military services and defense agencies. Standards for Internal Control in the Federal Government states that information should be clearly communicated, so that users can determine whether the agency is achieving its compliance requirements. Without clear guidance for collecting and reporting data consistently, and clearly identifying where data may not be comparable and the reasons why, it will be difficult for decision makers in DOD to have reliable data to plan effectively for steps to reach energy goals, and Congress will have limited oversight of the department’s energy consumption and difficulty in comparing energy projects among those reporting. The military services are helping to ensure energy security at all installations in Alaska and Hawaii by installing multiple power sources, which can be utilized in the event of an outage, at their remote facilities. Installations that were identified as mission critical by officials had additional energy security measures in place, such as on-site power plants and uninterruptible power supplies (i.e., backup that instantly starts once the grid loses power). For example, of the 20 sites that comprise the Air Force’s Alaska Radar System, officials stated that 10 of the sites are located “off-grid” and are equipped with stand-alone power plants including redundant generation capacity. According to officials, these sites are equipped with at least one generator that can supply sufficient power generation and multiple generators to provide redundant back-up power. 
The officials stated that the 10 sites receiving their power from local grids are also equipped with redundant backup generators to ensure reliable power in the event of an outage. All of the Alaska Radar System locations also feature uninterruptible power supplies to ensure mission critical loads remain working. Additionally, given its mission importance, officials told us the Navy’s Pacific Missile Range Facility in Hawaii has a backup diesel generator plant that can start automatically in case of a grid failure. Furthermore, officials stated that the Army recently reached an agreement with Hawaiian Electric Company to build a 50 megawatt power plant in the interior of Oahu on Army land. According to Army officials, this new power plant could potentially provide power if a weather emergency shuts down the island’s coastal power plants. Moreover, Air Force officials in Hawaii told us that Kaena Point, a satellite tracking station, has an Air Force-owned diesel power plant onsite that provides back-up generation. This power plant is designed to start automatically when the grid goes down, and it can provide power to the base for about 40 days without refueling. In addition, officials at Joint Base Elmendorf-Richardson, Alaska, stated Fort Richardson has significant redundancy through its onsite landfill gas electrical generation plant which, in combination with back-up generators, can provide complete energy independence from the municipal electrical grid for 2 weeks in the event of an emergency (see fig. 7). We also found that the energy officials at all nine locations we visited or contacted stated they are generally prepared to respond to energy disruptions that might occur, although we found that the level of documentation for energy security planning at energy-remote locations varies across installations.
An OSD Energy Policy Memorandum requires that defense managers and commanders (1) conduct energy vulnerability analyses and review for currency annually, (2) establish energy emergency preparedness and operations plans, and (3) develop and execute remedial action plans to remove unacceptable energy security risks. We found differences among installations in regard to documentation of their plans. For example, Marine Corps Base Hawaii has a full energy emergency preparedness and operations plan and remedial actions plans. Officials at U.S. Army Garrison Hawaii, by contrast, stated that the Garrison does not have any documented energy emergency preparedness and operations plans. Army officials stated the response to an energy emergency would depend on the situation, and they have the expertise to respond if needed. Officials at Eielson Air Force Base, Alaska, stated that the installation does not have a formal energy emergency preparedness and operations plan, but they receive quarterly vulnerability analyses from the inspector general’s office and have a contingency response plan in the case of a power outage. However, in cases where an installation did not have formal or specific energy security documentation, we found that the requirements of the OSD Energy Policy Memorandum were incorporated into installation-wide plans, such as continuity of operations plans. During our site visits in Alaska and Hawaii, we identified three areas of risk to energy security regarding funding, installation electricity systems, and cost. Specifically, we found that military services’ funding processes may limit energy security projects’ ability to compete for funding, the introduction of renewable energy may affect installation electricity systems, and the high cost of energy may be difficult for installations to sustain over the long term. 
First, we found that military services’ funding processes may limit the ability of the installations to obtain funding for energy security projects. DOD Directive 4180.01, DOD Energy Policy, states that it is DOD policy to, among other things, improve energy security and that the Deputy Under Secretary of Defense for Installations and Environment should ensure cost-effective investments are made in facility infrastructure to, among other things, enhance the power resiliency of installations. In addition, DOD Instruction 4170.11, Installation Energy Management, states that DOD components shall take necessary steps to ensure the security of energy and water resources. However, across the military services, officials told us that energy security projects do not compete well for funding because there is no clarity regarding the role that energy security plays in military service processes when evaluating a project for funding. In May 2014, we reported that the military services use “scoring” processes to consider projects for funding. During these “scoring” processes, DOD officials assign numerical values—or “points”—to certain project characteristics; potential projects’ relative scores are used to rank the projects; and senior decision makers at the military services’ headquarters review the rank-order list, selecting projects based on service priorities. However, energy security is generally not included in this list of project characteristics. In addition, since energy security projects are not identified in the decision-making processes, there is no way of knowing how many of the projects do not obtain funding. Officials at six of the nine locations we visited or contacted cited difficulty obtaining funding for energy security or would like to see dedicated funding for energy security projects.
For example, officials overseeing the Air Force’s Alaska Radar System stated that they have sought military construction funding since 2002 to build a networked system of multiple fuel tanks, referred to as a tank farm, at three off-grid locations that each has only one large fuel tank. According to officials, if any of the current tanks were to fail, then the sites would lose all of their fuel for the year (see fig. 8). Officials stated that the projects would replace the large fuel tank with a multi-tank system. However, the officials said they are having difficulties obtaining funding because energy security projects do not compete well against other military construction projects, such as those for new facilities or mission-critical activities. According to officials, the tanks are now close to the point of failure. Also, an official at Marine Corps Base Hawaii stated that difficulty getting funding for aging equipment is the biggest vulnerability to the energy system. According to the official, plans to replace the aging equipment keep getting postponed in order to provide for other funding priorities. This official also noted that if older energy equipment is broken, it can be difficult to find replacements. In addition, Army officials at Fort Wainwright in Alaska stated that it is difficult to obtain military construction funding for current mission needs, including energy security projects, versus new mission needs. Navy officials at Joint Base Pearl Harbor Hickam also stated that the energy security projects they submit for funding do not compete well. For example, they said that energy security projects—which have significant infrastructure costs—do not compete well for funding against energy conservation efforts based on return on investment. Additionally, all four military services’ energy headquarters offices told us that there is no specific military service or OSD guidance or clarity on energy security funding. 
As a result, military service officials told us that they had difficulty incorporating energy security into funding decisions. For example, Air Force officials stated that the Air Force Civil Engineer Command wanted to allow for a tradeoff between cost effectiveness and energy security when considering a new renewable energy project that could incorporate energy security features, such as a microgrid. However, the officials said they do not yet have the right criteria to define that tradeoff and to conduct that level of decision making. Moreover, a Marine Corps Headquarters official stated that, although the Marine Corps has a process in place to identify energy security vulnerabilities and mitigating actions, it can be difficult to get funding for energy security projects because there is no DOD requirement for energy security. In other words, there is no specific DOD requirement that identifies the level of energy security an installation should have. The official further stated that energy security projects, such as a microgrid or power plant, cannot compete well against energy efficiency or renewable energy projects that have a return on investment. Army officials similarly noted that energy security projects do not compete as well as other projects for funding based on return on investment, and it would be helpful to have criteria (project characteristics) for energy security project funding consideration. The Navy has made limited efforts to incorporate energy security into funding decisions, but officials told us that the efforts are rudimentary. For example, the Navy’s energy-Return on Investment tool, which it uses to assess energy projects, considers energy security in its calculations. However, a Navy Headquarters official told us that energy security is considered a “soft benefit,” or benefit that is not the central focus of the project, and that it is difficult to fund a large project based only on soft benefits. 
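The "scoring" process described above, in which points assigned to project characteristics yield a rank-ordered list for senior review, can be sketched in simplified form. All project names, characteristics, and point values below are hypothetical and do not reflect any service's actual criteria; the sketch only illustrates the structural point that a project whose sole characteristic is energy security lands at the bottom of the list when energy security carries no points.

```python
# Simplified, hypothetical sketch of a project "scoring" process: points
# are assigned per characteristic, summed, and projects are ranked for
# senior review. Names and weights are invented for illustration.

POINTS = {
    "mission_critical": 40,
    "new_facility": 25,
    "return_on_investment": 20,
    "energy_security": 0,  # illustrating a process with no points for energy security
}

def score(project):
    """Sum the point values of a project's characteristics."""
    return sum(POINTS.get(c, 0) for c in project["characteristics"])

projects = [
    {"name": "New hangar", "characteristics": ["new_facility", "mission_critical"]},
    {"name": "Radar-site fuel tank farm", "characteristics": ["energy_security"]},
    {"name": "Lighting retrofit", "characteristics": ["return_on_investment"]},
]

# Rank-order list for senior decision makers, highest score first.
ranked = sorted(projects, key=score, reverse=True)
for p in ranked:
    print(f"{p['name']}: {score(p)} points")
```

Under this sketch, the hypothetical tank farm scores zero and would be unlikely to be selected, which mirrors the officials' account of why energy security projects compete poorly.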
Officials at installations told us that, without clarification of how energy security is considered in military service funding decisions, they have to try different approaches in their attempts to fund energy security projects. For example, Navy officials in Hawaii stated that they tried for 10 years to get funding for grid consolidation at the Pacific Missile Range Facility, but were not able to until it was shown that grid consolidation will allow the base to potentially build and then hook up to a landfill gas renewable energy plant. In Alaska, Air Force officials stated that difficulties obtaining military construction funding have led Air Force officials to work with attorneys at the Pacific Air Force Command to assess the viability of alternative sources of funding to build tank farms at the three off-grid Alaska Radar System locations that have only one large fuel tank each. However, as we have previously reported, alternatives to military construction funding have limitations, may vary in availability, and can be complex and time-consuming. As a result, this approach may not result in a funded project, or it may ultimately take longer than the traditional military construction process to fund a project. Without clarification of the processes the military services use to compare and prioritize projects for funding to include consideration of energy security, it will be difficult for decision makers to have sufficient information to adequately prioritize energy security projects for funding when appropriate and thus address energy security issues. Second, we found that the introduction of renewable energy sources may affect the stability of remote or small installation electricity systems, but the military services are taking some steps to address this risk. DOD Directive 4180.01 calls for the diversification and expansion of DOD energy supplies and sources, including renewable energy sources. 
Military service officials we spoke with generally stated that it is difficult to integrate intermittent sources of renewable energy (e.g., solar and wind power) into existing infrastructure. For example, in Hawaii, Navy and Army officials stated that because the amounts of intermittent renewable energy can vary significantly, it can cause fluctuations in power quality such as voltage and frequency on small or isolated electricity systems, which can damage equipment connected to them. These officials noted that the amount of electricity generated from solar and wind systems can vary significantly with ambient conditions such as cloud cover and wind speed. In Alaska, Air Force officials explained that many of the radar sites are in locations rated with high potential for wind turbines. However, the officials said the wind at these locations is too turbulent; it knocked down a wind turbine prototype that had been developed. Furthermore, even if wind energy generation were an option, the officials explained that because the microgrids at these sites are so small, adding wind turbines for electrical generation could cause disruptions in the electrical frequency of the grid. Despite the potential challenges with integrating renewable energy sources at energy-remote installations, officials told us that efforts are underway, including studies on the incorporation of intermittent energy sources, to continue to increase the use of renewable energy resources at these locations and mitigate the integration risks. For example, officials at Marine Corps Base Hawaii told us that they reached out to the Naval Facilities Engineering Command to conduct studies within the next year to enable the installation to incorporate its expanding production of renewable energy. The installation is currently in the process of executing a power purchase agreement for two megawatts of solar photovoltaic arrays on rooftops and car ports.
Almost all of the installation housing is owned by a private developer and has solar photovoltaic panels on the rooftops. Marine Corps Base Hawaii is working on an agreement with the developer to purchase excess solar photovoltaic power generated from the housing. In addition, Marine Corps Base Hawaii is conducting a grid-modeling study—expected to be completed in a year—to see the effect of integrating solar energy into the energy system. Third, we found that the high cost of energy at remote locations may be difficult for installations to sustain over the long term and thus could affect overall mission assurance across the department, but DOD has conducted studies or taken actions to reduce costs. DOD Directive 4180.01 states that it is the department's policy to, among other things, mitigate costs in its use and management of energy. Army officials at Fort Greely in Alaska told us that their biggest challenge is the high cost of energy and expressed concern that it may become increasingly difficult for the Army to sustain the high costs in the long term. Paying these high-cost energy bills could potentially force the military services to make tradeoffs in a constrained budgetary environment. Fort Greely officials stated that the Army hired a contractor to conduct a study to identify alternative energy solutions to lower costs and still provide energy security. Officials at Fort Wainwright also mentioned the high cost of utilities, noting that they pay $79 per ton for coal—more than double the U.S. average price for coal. They stated that it was the primary reason for hiring the same contractor as Fort Greely to identify alternative energy options for their installation as well. Both studies were completed in August 2015 and identified numerous potential energy conservation measures and recommendations. As of September 2015, senior Army officials were reviewing the recommendations to determine which to implement.
In Hawaii, Navy officials told us that high oil prices in 2008 greatly increased the energy costs at Joint Base Pearl Harbor Hickam, such that the base temporarily had to shut down some facilities because the energy costs were too high. Since then, officials stated the Navy has instituted renewable energy projects and energy conservation efforts to help lower energy costs. Also, Air Force officials stated that they are concerned with the high cost of energy, which ranges from $75,000 to $100,000 per month, at Kaena Point in Hawaii, and they are working to lower costs through energy conservation efforts to help ensure access to electricity in the future. The ability of DOD to effectively manage energy at its installations is an important element of mission assurance, and comprehensive measurement of facility energy could help the department maintain an aggressive pace toward its larger energy objectives. Through its Energy Report, DOD is required to track certain energy conservation measures, investments, and performance against established goals, as well as identify certain activities to enhance energy security and resilience. However, DOD’s process for preparing the Energy Report did not ensure it addressed all the statutory requirements. In addition, while DOD has taken steps to help ensure data quality in its Energy Report, the military services and defense agencies capture and report using different methods; thus, data are not comparable. 
Without reexamining the process for producing the Energy Report to help ensure it fully complies with statutory requirements, providing more consistent guidance to the installations, and identifying in the Energy Report instances in which data may not be comparable among the military services and defense agencies and the reasons why, it will be difficult for decision makers in DOD to plan effectively for steps to reach energy goals, and Congress will have limited oversight of the department’s energy consumption and difficulty in comparing energy projects among those reporting. Moreover, the ability of the military services to effectively secure energy at their energy-remote installations is essential to avoid serious and potentially crippling operational impacts. The military services have taken reasonable steps, such as conducting studies on the incorporation of intermittent renewable energy sources and identifying alternative energy solutions, to overcome grid stability issues and high energy costs. However, the military services remain at risk for potentially underfunding energy infrastructure investments because there is no clarity regarding the role that energy security plays when evaluating a project for funding. Without clarifying the processes used to compare and prioritize military construction projects for funding, to include consideration of energy security as appropriate, it will be difficult for decision makers to have sufficient information to adequately prioritize energy security projects and thus address energy security issues. 
We recommend the Secretary of Defense take the following four actions: To better provide Congress with information needed to conduct oversight and make decisions on programs and funding, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Energy, Installations and Environment to reexamine the process for producing the Energy Report to help ensure it complies with statutory requirements, and update it as appropriate. This includes reexamining the process to include required energy goals, descriptions of energy projects funded by appropriations and third parties, details of utility outages at military installations, and a description of the types and amount of financial incentives received. In order to improve the consistency of certain data submitted by the military services and defense agencies to the Office of the Secretary of Defense and reported in the Energy Report, we recommend that the Secretary of Defense direct the secretaries of the Army, Navy, and Air Force, the Commandant of the Marine Corps, the heads of the defense agencies, and the Assistant Secretary of Defense for Energy, Installations and Environment to work together to provide more consistent guidance to the installations, including clearly stating the energy reporting requirements for tenant and host facilities, energy projects, and end-of-fiscal-year data, and identify in the Energy Report instances in which data may not be comparable among the military services and defense agencies and the reasons why. To better provide the military services with information needed to make decisions on the prioritization of funding, we recommend that the Secretary of Defense direct the secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps to clarify the processes used to compare and prioritize military construction projects for funding, including how and when to include consideration of energy security. 
We provided a draft of this report for review and comment to DOD. In written comments, DOD concurred with all recommendations. DOD’s comments are summarized below and reprinted in their entirety in appendix III. DOD also provided technical comments, which we incorporated as appropriate. DOD concurred with our first recommendation to reexamine the process for producing the Energy Report to help ensure it complies with statutory requirements. In its response, DOD said the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment is already taking action to ensure the next annual energy report complies with the requirements of the recently amended section 2925 of Title 10 of the United States Code. DOD also concurred with our second and third recommendations—which, in its comments, DOD combined into one response—that DOD provide more consistent guidance to the installations for the Energy Report and identify in the Energy Report instances in which data may not be comparable among the military services and defense agencies. DOD stated that the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment will work with the military services in fiscal year 2016 to provide more consistent guidance to military installations and will identify in the fiscal year 2016 Energy Report where data may not be compatible. DOD further concurred with our final recommendation that the military services clarify the processes used to compare and prioritize military construction projects for funding, including how and when to include consideration of energy security. DOD noted that it is pursuing an update to DOD Instruction 4170.11, Installation Energy Management, and plans to include guidance to prioritize funding decisions consistent with this recommendation. 
If implemented, we believe that DOD's proposed actions will help decision makers in DOD plan effectively for steps to reach energy goals and address energy security issues, as well as provide Congress with better oversight of the department's energy consumption. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and the Air Force; the Assistant Secretary of Defense for Energy, Installations, and Environment; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Brian Lepore at (202) 512-4523 or leporeb@gao.gov or Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of our review were to examine the extent to which (1) the Department of Defense (DOD) addressed the 12 required reporting elements and reliably reported data in its fiscal year 2013 Annual Energy Management Report (Energy Report) and (2) the military services helped ensure energy security at energy-remote military installations in the United States. To determine the extent to which DOD addressed the 12 required reporting elements in its Energy Report, two GAO analysts independently reviewed the fiscal year 2013 Energy Report, comparing it with each element required by the law and determining whether each required reporting element was included. In the case of any conflicting determinations, a third GAO analyst adjudicated the difference.
To gain a full understanding of the elements included in the Energy Report and to discuss the methodology used for collecting information and reporting on the required elements, we met with DOD officials knowledgeable about compiling information for the report, including individuals from the Office of the Secretary of Defense (OSD)—specifically, the Assistant Secretary of Defense for Energy, Installations, and Environment; the four military services; and the 10 defense agencies that contributed to the report. We also compared information in the fiscal year 2013 Energy Report to that in the fiscal year 2014 Energy Report, which was published in May 2015, to evaluate whether the structure and content of each report were similar. Further, we compared OSD's process for annually updating its Energy Report to criteria regarding updating internal control activities in Standards for Internal Control in the Federal Government. To determine the extent to which DOD reliably reported energy data in its Energy Report, we reviewed the energy data and other inputs each military service and defense agency provided to be included in the Energy Report. We looked for any anomalies in the data, such as missing data fields or numerical outliers. To examine if the data and other inputs were correctly reflected, we then compared the data and other inputs from each military service and defense agency to the published Energy Report, using as criteria GAO's Standards for Internal Control in the Federal Government and DOD's Annual Energy Management Report Fiscal Year 2013 Reporting Guidance. We also interviewed the officials who contributed to the report from OSD, the four military services, and the 10 defense agencies regarding how the data were collected, measures taken to assure the reliability of the data, and any anomalies observed in the data.
In addition, we sent a structured questionnaire to knowledgeable officials from the four military services and 10 defense agencies to collect information about how facilities within each military service and defense agency reported energy consumption, energy projects, and September 2013 end-of-fiscal-year energy consumption data included in the Energy Report. We received responses from all of the military services and defense agencies. Additionally, as part of the questionnaire, we asked the military services and defense agencies to provide data from a nongeneralizable sample of installations regarding September 2013 energy consumption reported in the Energy Report and actual energy consumption used, as verified via utility bill or meter reading. To determine our sample, we selected a random sample of 10 installations each from the Army, Navy, Air Force, and Defense Commissary Agency; 5 installations from the Marine Corps; and all installations from the remaining defense agencies in our scope. To minimize errors that might occur from respondents interpreting our questions differently than we intended, we pre-tested the questionnaire with knowledgeable representatives from one military service (Army) and one defense agency (National Reconnaissance Office). During these pre-tests, we discussed the questions and instructions with the officials to check whether (1) the questions and instructions were clear and unambiguous, (2) the terms used were accurate, (3) the questionnaire was unbiased, and (4) the questionnaire did not place an undue burden on the officials completing it. We also submitted the questionnaire for review by an independent GAO survey specialist. We modified the questionnaire based on feedback from the pre-tests and reviews, as appropriate.
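The per-service sampling step described above can be sketched as follows. This is a minimal illustration only: the installation lists and pool sizes are placeholders, and only the sample sizes (10 each for the Army, Navy, Air Force, and Defense Commissary Agency; 5 for the Marine Corps) come from the text.

```python
import random

# Hypothetical installation pools; the real lists came from each
# military service and defense agency.
installations = {
    "Army": [f"Army-{i}" for i in range(40)],
    "Navy": [f"Navy-{i}" for i in range(38)],
    "Air Force": [f"AF-{i}" for i in range(45)],
    "Defense Commissary Agency": [f"DeCA-{i}" for i in range(20)],
    "Marine Corps": [f"USMC-{i}" for i in range(15)],
}

# Sample sizes stated in the methodology.
sizes = {"Army": 10, "Navy": 10, "Air Force": 10,
         "Defense Commissary Agency": 10, "Marine Corps": 5}

random.seed(0)  # fixed seed only so the sketch is reproducible

# random.sample draws without replacement, so no installation
# is selected twice within a service.
sample = {org: random.sample(pool, sizes[org])
          for org, pool in installations.items()}

for org, picks in sample.items():
    print(f"{org}: {len(picks)} installations sampled")
```

Because the remaining defense agencies contributed all of their installations, no random draw would be needed for them; they would simply be included in full.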
To determine the extent that the military services helped ensure energy security at energy-remote military installations in the United States, we first determined the scope of energy-remote military installations by evaluating electrical interconnectedness and robustness. First, to review interconnectedness, we conducted preliminary research on the U.S. electric power system. We determined that Alaska and Hawaii have limited interconnectedness because they are not connected to the three power grids in the 48 contiguous states, which are interconnected to each other. Moreover, the electrical systems in Alaska and Hawaii are not connected to each other. Second, once we identified these states, we attempted to further narrow the scope by determining which areas in Alaska and Hawaii are less “electrically robust” (smaller number of power plants and transmission lines in the area surrounding the installation or no connectivity to transmission lines—e.g., an installation that uses diesel generators for primary power) and therefore more energy-remote. Using mapping software, we created maps of Alaska and Hawaii using layers of data (transmission lines, power plant data, and military installations location data). Additionally, we sent a questionnaire to each installation in Alaska and Hawaii to gather preliminary information, including the presence and location of the designated facility energy manager or another official who is tasked with performing the duties of the facility energy manager, the source(s) of electricity consumed on site, the amount of electricity consumed on site during fiscal year 2014, the supplier of this electricity, the existence (if any) of an energy security plan focused on utility resilience in case of an electrical disruption, whether an energy security assessment has been conducted, and whether there are plans to develop an energy security plan or conduct an energy security assessment in the future. 
Based on our assessment, all 26 installations in Alaska and 35 installations in Hawaii were included in our scope. Table 2 lists the locations we visited or contacted to meet with facility energy managers and the number of associated installations they oversaw. Additionally, we interviewed the facility energy managers responsible for all of the installations in Alaska and Hawaii to identify the procedures, equipment, and plans in place to ensure energy security on site, as well as any planned future energy security assessments. We compared their actions to relevant DOD and military service regulations and guidance on their roles and responsibilities regarding energy security, including DOD's Energy Report, DOD installation energy guidance, and military service energy security guidance. We also interviewed military service officials to discuss their efforts and potential progress regarding helping to ensure energy security at energy-remote military installations. We conducted this performance audit from March 2015 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Required reporting elements and GAO comments

7. An estimate of the types and quantities of energy consumed by the Department of Defense and members of the armed forces and civilian personnel residing or working on military installations during the preceding fiscal year, including a breakdown of energy consumption by user groups and types of energy, energy costs, and the quantities of renewable energy produced or procured by the Department.
The report estimates the types and quantities of energy consumed, including narrative and charts outlining energy consumption by user groups, energy consumption by type, energy costs, and quantities of renewable energy produced or procured.

8. A description of the types and amount of financial incentives received under section 2913 of Title 10 of the United States Code during the preceding fiscal year and the appropriation account or accounts to which the incentives were credited.

The report does not describe financial incentives. Title 10 U.S.C. §2913(c) states that "the Secretary of Defense may authorize any military installation to accept any financial incentive, goods, or services generally available from a gas or electric utility, to adopt technologies and practices that the Secretary determines are in the interests of the United States and consistent with the energy performance goals for the Department of Defense." According to the department, section 2913 is used as the authority for DOD to enter into certain third-party-financed energy conservation projects with servicing utility companies. OSD officials stated that the financial benefit received from these arrangements is the avoidance of appropriated capital needed for project implementation. They added that utility companies provide the capital and DOD pays back the capital investment over time using the savings realized from the implemented energy conservation projects. The OSD officials further stated that the report includes information on third-party-financed utility energy service contracts. However, the report did not describe the types and amounts of financial incentives received, if any, as indicated in the required reporting element.

9. A description and estimate of the progress made by the military departments to meet the certification requirements for sustainable green-building standards in construction and major renovations as required by section 433 of the Energy Independence and Security Act of 2007 (Pub. L. No. 110-140).

The report states that the Department of Energy has not published the final regulation for implementing Section 433, adding that DOD will start reporting on this requirement after the Department of Energy issues the final rule. As of the time of this report, the Department of Energy had finalized regulations implementing certain parts of the rule, but other parts are still pending.

10. A description of steps taken to determine best practices for measuring energy consumption in Department of Defense facilities and installations, in order to use the data for better energy management.

The report describes how the department measures energy consumption.

11. Details of utility outages at military installations including the total number and locations of outages, the financial impact of the outage, and measures taken to mitigate outages in the future at the affected location and across the Department of Defense.

The report identifies the approximate number, approximate cost, and general locations of utility outages at installations. However, as we found in July 2015, DOD's collection and reporting of utility disruption data is not comprehensive and contains inaccuracies, because not all types and instances of utility disruptions have been reported and there are inaccuracies in reporting of disruptions' duration and cost.

12. A description of any other issues and strategies the Secretary determines relevant to a comprehensive and renewable energy policy.

The department stated that there were no other relevant issues determined for reporting purposes.
The requirement on renewable energy certificates and the seventh requirement on estimating the types and quantities of energy consumed were removed. The ninth requirement on sustainable green-building standards was revised to require a description of progress toward meeting certain standards under the Unified Facilities Criteria. The eleventh requirement on utility outages was revised to require details of non-commercial utility outages and DOD-owned infrastructure. Additionally, a new requirement was added for the inclusion of a classified annex, as appropriate. In the United States, renewable energy production essentially creates two products: the energy itself and an associated commodity, called a renewable energy certificate, which represents a certain amount of energy generated using a renewable resource. Renewable energy certificates are bought and sold in a fashion similar to stocks and bonds. In addition to the contacts named above, Laura Durland (Assistant Director), Jon Ludwigson (Assistant Director), Emily Biskup, Lorraine Ettaro, Emily Gerken, Terry Hanford, Alberto Leff, Amie Lesser, John Mingus, Jodie Sandel, Erik Wilkins-McKee, and Michael Willems made key contributions to this report. Defense Infrastructure: DOD Efforts Regarding Net Zero Goals. GAO-16-153R. Washington, D.C.: January 12, 2016. Defense Infrastructure: Improvements in DOD Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning. GAO-15-749. Washington, D.C.: July 23, 2015. Energy Savings Performance Contracts: Additional Actions Needed to Improve Federal Oversight. GAO-15-432. Washington, D.C.: June 17, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014.
Clear Air Force Station: Air Force Reviewed Costs and Benefits of Several Options before Deciding to Close the Power Plant. GAO-14-550. Washington, D.C.: May 12, 2014. Climate Change: Energy Infrastructure Risks and Adaptation Efforts. GAO-14-74. Washington, D.C.: January 31, 2014. Renewable Energy Project Financing: Improved Guidance and Information Sharing Needed for DOD Project-Level Officials. GAO-12-401. Washington, D.C.: April 4, 2012. Renewable Energy: Federal Agencies Implement Hundreds of Initiatives. GAO-12-260. Washington, D.C.: February 27, 2012. Defense Infrastructure: DOD Did Not Fully Address the Supplemental Reporting Requirements in Its Energy Management Report. GAO-12-336R. Washington, D.C.: January 31, 2012. Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011. Defense Infrastructure: Department of Defense’s Energy Supplemental Report. GAO-10-988R. Washington, D.C.: September 29, 2010. Defense Infrastructure: Department of Defense Renewable Energy Initiatives. GAO-10-681R. Washington, D.C.: April 26, 2010. Defense Infrastructure: DOD Needs to Take Actions to Address Challenges in Meeting Federal Renewable Energy Goals. GAO-10-104. Washington, D.C.: December 18, 2009. Defense Critical Infrastructure: Actions Needed to Improve the Identification and Management of Electrical Power Risks and Vulnerabilities to DOD Critical Assets. GAO-10-147. Washington, D.C.: October 23, 2009. Energy Savings: Performance Contracts Offer Benefits, but Vigilance Is Needed to Protect Government Interests. GAO-05-340. Washington, D.C.: June 22, 2005.
DOD is the largest energy consumer in the federal government, spending about $4.1 billion on facilities' energy at more than 500 permanent military installations throughout the world in fiscal year 2013. To help ensure oversight of DOD's fulfillment of energy performance goals, Congress requires that DOD track energy savings, investments, and projects in its annual Energy Report. The Energy Report also details DOD's activities to enhance energy security. Congress included a provision for GAO to review DOD's fiscal year 2013 Energy Report and energy security at energy-remote military installations—that is, those installations located in areas with limited connectivity and without significant infrastructure of power plants, transmission lines, or distribution lines. GAO assessed the extent to which (1) DOD addressed the 12 required reporting elements and reliably reported data in its fiscal year 2013 Energy Report and (2) the military services help ensure energy security at energy-remote military installations in the United States. GAO analyzed DOD's Energy Report and interviewed officials from the Office of the Secretary of Defense, military services, defense agencies, and all installations in Alaska and Hawaii because they were identified as energy remote. The Department of Defense's (DOD) fiscal year 2013 Annual Energy Management Report (Energy Report) addressed some of the required reporting elements and correctly incorporated data from the military services and defense agencies. However, the report is not fully reliable because the data were captured and reported using different methods, hindering comparability across the department. Specifically, the Energy Report addressed six, partially addressed four, and did not address two reporting requirements. 
For example, the Energy Report addressed the requirement to describe actions taken to implement DOD's energy performance master plan, partially addressed the requirement to describe progress to meet various energy goals (it described progress for three of five required goals), and did not address the requirement to describe the types and amount of financial incentives received. The Energy Report correctly reflected data provided by the military services and defense agencies. However, the military services and defense agencies used different methods for capturing and reporting on data in the Energy Report such as on energy consumption and projects. These inconsistencies resulted from guidance that was either unclear or lacking. For example, DOD did not provide guidance on reporting end-of-fiscal-year energy data; thus, the military services and defense agencies used different reporting methods. Without clear guidance for reporting data consistently, it will be difficult for DOD to have reliable data to plan effectively to reach energy goals, and Congress will have limited oversight of DOD's energy consumption and difficulty in comparing energy projects. The military services generally help ensure energy security (the ability to continue missions in the event of a power outage) at their energy-remote military installations in Alaska and Hawaii by providing access to multiple power sources. However, GAO identified areas of risk to energy security regarding installation electricity systems, high energy costs, and funding. GAO found that the military services addressed some risks by conducting studies on integrating renewable energy into electricity systems and identifying alternative energy solutions to lower costs. However, military service efforts to incorporate energy security into funding decisions have been limited. 
The processes used to evaluate projects for funding generally do not consider energy security when prioritizing which projects receive funding, and officials from all four military services stated that there is no military service or DOD guidance on evaluating projects for funding that focuses on energy security. As a result, six of the nine locations GAO visited in Alaska and Hawaii cited difficulty obtaining funding for energy security projects. For example, officials at the Air Force's Alaska Radar System said they have sought funding since 2002 to build a networked system of multiple fuel tanks at three off-grid locations that each have only one fuel tank, but they said energy security projects do not compete well against other projects, such as those for new facilities. Navy officials similarly stated that energy security projects—which have significant infrastructure costs—do not compete well for funding against energy conservation efforts evaluated on return on investment. Without clarifying the processes used to compare and prioritize projects for funding so that they consider energy security, decision makers will lack sufficient information to adequately prioritize energy security projects for funding when appropriate and thus address energy security issues. GAO recommends, among other things, that DOD revise its guidance for producing the Energy Report and clarify funding processes to include consideration of energy security. DOD concurred with all recommendations.
In 1971, the Atomic Energy Commission, NRC’s predecessor, promulgated the first regulations for fire protection at commercial nuclear power units in the United States. These regulations––referred to as General Design Criterion 3––provided basic design requirements and broad performance objectives for fire protection, but lacked implementation guidance or assessment criteria. As such, NRC generally deemed a unit’s fire protection program to be adequate if it complied with standards set by the National Fire Protection Association (NFPA)––an international organization that promotes fire prevention and safety––and received an acceptable rating from a major fire insurance company. However, at that time the fire safety requirements for commercial nuclear power units were similar to those for conventional, fossil-fueled power units. NRC and nuclear industry officials did not fully perceive that fires could threaten a nuclear unit’s ability to safely shut down until 1975, when a candle that a worker at Browns Ferry nuclear unit 1 was using to test for air leaks in the reactor building ignited electrical cables. The resulting fire burned for 7 hours and damaged more than 1,600 electrical cables, more than 600 of which were important to unit safety. Nuclear unit workers eventually used water to extinguish the fire, contrary to the existing understanding of how to put out an electrical fire. The fire damaged electrical power, control systems, and instrumentation cables and impaired cooling systems for the reactor. During the fire, operators could not monitor the unit normally. NRC’s investigation of the Browns Ferry fire revealed deficiencies in the design of fire protection features at nuclear units and in procedures for responding to a fire, particularly regarding safety concerns that were unique to nuclear units, such as the ability to protect redundant electrical cables and equipment important for the safe shutdown of a reactor. 
In response, NRC developed new guidance in 1976 that required units to take steps to isolate and protect at least one system of electrical cables and equipment to ensure that a nuclear unit could be safely shut down in the event of a fire. NRC worked with licensees throughout the late 1970s to help them meet this guidance. In November 1980, NRC published two new sets of regulations to formalize the regulatory approach to fire safety. First, NRC required all nuclear units to have a fire protection plan that satisfies General Design Criterion 3 and that describes an overall fire protection program. Second, NRC published Appendix R, which requires nuclear units operating prior to January 1, 1979 (called “pre-1979 units”), to implement design features—such as fire walls, fire wraps, and automatic fire detection and suppression systems—to protect a redundant system of electrical cables and equipment necessary to safely shut down a nuclear unit during a fire. Among other things, Appendix R requires units operating prior to 1979 to protect one set of cables and equipment necessary for safe shutdown through one of the following means:
1. Separating the electrical cables and equipment necessary for safe shutdown by a horizontal distance of more than 20 feet from other systems, with no combustibles or fire hazards between them. In addition, fire detectors and an automatic fire suppression system (for example, a sprinkler system) must be installed in the fire area.
2. Protecting the electrical cables and equipment necessary for safe shutdown by using a fire barrier able to withstand a 3-hour fire, as demonstrated in a laboratory test (thereby receiving a 3-hour rating).
3. Enclosing the cables and equipment necessary for safe shutdown in a fire barrier with a 1-hour rating, combined with automatic fire detectors and an automatic fire suppression system.
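The three protection options above amount to a simple decision rule. The sketch below encodes that rule for illustration only; the Config structure and all field names are our own invention, not NRC's, and real compliance determinations involve far more detail.

```python
# Illustrative sketch only: encodes the three Appendix R protection options
# described above as a simple rule check. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Config:
    separation_ft: float        # horizontal distance from redundant systems
    combustibles_between: bool  # combustibles or fire hazards in the gap
    barrier_rating_hr: int      # 0 = no barrier; 1 or 3 = rated fire barrier
    auto_detection: bool        # automatic fire detectors in the fire area
    auto_suppression: bool      # automatic suppression (e.g., sprinklers)

def satisfies_appendix_r(c: Config) -> bool:
    # Option 1: more than 20 feet of separation with no intervening
    # combustibles, plus detection and automatic suppression in the area.
    option1 = (c.separation_ft > 20 and not c.combustibles_between
               and c.auto_detection and c.auto_suppression)
    # Option 2: a fire barrier with a 3-hour rating.
    option2 = c.barrier_rating_hr >= 3
    # Option 3: a 1-hour barrier combined with automatic detection
    # and automatic suppression.
    option3 = (c.barrier_rating_hr >= 1 and c.auto_detection
               and c.auto_suppression)
    return option1 or option2 or option3
```

A unit meeting none of the three options would, as described below, need an alternative shutdown capability or an exemption.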
If a nuclear unit’s fire protection systems do not satisfy those requirements, or if redundant systems required for safe shutdown could be damaged by fire suppression activities, Appendix R requires the nuclear unit to maintain an alternative or dedicated shutdown capability and its associated circuits. Moreover, Appendix R requires all units to provide emergency lighting in all areas needed for operating safe shutdown equipment. Nuclear units that began operating on or after January 1, 1979 (called “post-1979 units”), must satisfy the broad requirements of General Design Criterion 3 but are not subject to the requirements of Appendix R. However, NRC has imposed or attached conditions similar to the requirements of Appendix R to these units’ operating licenses. When promulgating these regulations, NRC recognized that strict compliance at some older units would not significantly enhance the level of fire safety. In those cases, NRC allows nuclear units licensed before 1979 to apply for an exemption to Appendix R. Granting an exemption depends on whether the nuclear unit can demonstrate to NRC that existing or alternative fire protection features provide safety equivalent to that imposed by the regulations. Since 1981, NRC has issued approximately 900 unit-specific exemptions to Appendix R. Nuclear units licensed after 1979 can apply for “deviations” from their licensing conditions. Many exemptions take the form of NRC-approved operator manual actions, whereby nuclear unit staff manually activate or control unit operations from outside the unit’s control room, such as manually stopping a pump that malfunctions during a fire and could affect a unit’s ability to safely shut down. NRC also allows nuclear units to institute, in accordance with their NRC-approved fire protection programs, “interim compensatory measures”—temporary measures that units can take without prior approval to compensate for equipment that needs to be repaired or replaced.
Interim compensatory measures often consist of roving or continuously staffed fire watches maintained while nuclear units take corrective actions. In part to simplify the licensing of nuclear units that have many exemptions, NRC recently began encouraging units to transition to a more risk-informed approach to nuclear safety in general. In 2004, NRC promulgated 10 C.F.R. 50.48(c), which allows––but does not require––nuclear units to adopt a risk-informed approach to fire protection. The risk-informed approach considers the probability of fires in conjunction with a unit’s engineering analysis and operating experience. The NRC rule allows licensees to voluntarily adopt and maintain a fire protection program that meets criteria set forth by the NFPA’s fire protection standard 805—which describes the risk-informed approach endorsed by NRC—as an alternative to meeting the requirements or unit-specific fire-protection license conditions represented by Appendix R and related rules and guidance. Nuclear units that choose to adopt the risk-informed approach must submit a license amendment request asking NRC to approve the unit’s adoption of the new risk-informed regulatory approach. NRC is overseeing a pilot program at two nuclear unit locations and expects to release its evaluation report on these programs by March 2009. NRC officials told us that none of the 125 fires at 54 sites that nuclear unit operators reported from January 1995 to December 2007 posed a significant risk to a commercial unit’s ability to safely shut down. No fire since the 1975 Browns Ferry fire has threatened a nuclear unit’s ability to safely shut down. Most of the 125 fires occurred outside areas that are considered important for safe shutdown of the unit or happened during refueling outages when nuclear units were already shut down.
Nuclear units categorized 13 of the 125 reported fires as “alerts” under NRC’s Emergency Action Level rating system, meaning that the reported situation involved an actual or potential substantial degradation of unit safety, but none of the fires actually threatened the safe shutdown of the unit. NRC further characterizes alerts as providing early and prompt notification of minor events that could lead to more serious consequences. As shown in table 1, the primary reported cause of these fires was electrical. Nuclear units classified the remaining 112 reported fires in categories that do not imply a threat to safe shutdown. Specifically, 73 were characterized as “unusual events”––a category that is less safety-significant than “alerts”––and 39 as “non-emergencies.” No reported fire event rose to the level of “site area emergency” or “general emergency”—the two most severe ratings in the Emergency Action Level system. As shown in table 2 below, about 41 percent of the 125 reported fires were electrical, 14 percent were maintenance related, 7 percent were caused by oil-based lubricants or insulation, and the remaining 38 percent either had no reported cause or had causes listed as “other,” including brush fires, cafeteria grease fires, and lightning. We also gathered information on fire events that had occurred at the nuclear unit sites we visited. NRC’s data on the locations and circumstances surrounding fire events were consistent with the statements of unit officials whom we contacted at selected nuclear units. Although unit officials told us that some recent fires necessitated the response of off-site fire departments to supplement the units’ on-site firefighting capabilities, they confirmed that none of the fires adversely affected the units’ ability to safely shut down.
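The figures above are internally consistent, as a quick tally shows. In the sketch below, the severity counts (13, 73, and 39) come directly from the text, but the per-cause counts are our assumed reconstruction, chosen only so that they round to the reported percentages; GAO gives only percentages here.

```python
# Consistency check of the reported fire statistics. Severity counts are
# from the text; cause counts are assumed, back-calculated from percentages.
severity = {"alert": 13, "unusual event": 73, "non-emergency": 39}
assert sum(severity.values()) == 125  # all reported fires accounted for

causes = {
    "electrical": 51,           # about 41 percent (assumed count)
    "maintenance": 18,          # about 14 percent (assumed count)
    "oil or insulation": 9,     # about 7 percent (assumed count)
    "other or unreported": 47,  # about 38 percent (assumed count)
}
assert sum(causes.values()) == 125

# Rounded percentage shares of the 125 fires by cause.
shares = {cause: round(100 * n / 125) for cause, n in causes.items()}
```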
Additionally, officials at two units told us that, although fires affected the units’ auxiliary power supplies, the events merely caused both units to “trip”—an automatic power-down taken as a precaution in emergencies. NRC has not fully resolved several long-standing issues that affect the commercial nuclear industry’s compliance with existing NRC fire regulations. These issues include (1) nuclear units’ use of operator manual actions; (2) nuclear units’ long-term use of interim compensatory measures; (3) uncertainties regarding the effectiveness of fire wraps for protecting electrical cables necessary for the safe shutdown of a nuclear unit; and (4) the regulatory treatment of fire-induced multiple spurious actuations of equipment that could prevent the safe shutdown of a nuclear unit. Moreover, NRC lacks a central system of records that would enhance its ability to oversee and address the use of operator manual actions and extended interim compensatory measures, among other related issues. According to an NRC Commissioner, the current “patchwork of requirements” is characterized by too many exemptions, as well as by unapproved or undocumented operator manual actions. He said the current regulatory situation was not an ideal, transparent, or safe way to deal with the issue of fire safety. NRC’s oversight of fire safety is complicated by nuclear units’ use of operator manual actions that NRC has not explicitly approved. NRC’s initial Appendix R regulations required that nuclear units protect at least one redundant system—or “train”—of equipment and electrical cables required for a unit’s safe shutdown through the use of fire protection measures, such as 1-hour or 3-hour fire barriers, 20 feet of separation between redundant systems, and automatic fire detection and suppression systems. The regulations do not list operator manual actions as a means of protecting a redundant system from fire.
However, according to NRC officials and NRC’s published guidance, units licensed before January 1979 can receive approval for a specific operator manual action by applying for a formal exemption to the regulations. For example, unit officials at one site told us they rely on 584 operator manual actions, approved through 15 NRC exemptions, for safe shutdown. (NRC allows units to submit multiple operator manual actions under one exemption.) Units licensed after January 1979 may use operator manual actions for fire protection if these actions are permitted by the unit’s license and if the unit can demonstrate that the actions will not adversely affect safe shutdown. NRC and nuclear unit officials told us that units have been using operator manual actions since Appendix R became effective in 1981. These officials added that a majority of nuclear units that use operator manual actions began using them in the mid-1990s in response to the failure of Thermo-Lag––a widely used fire wrap––to meet fire endurance testing. Over the years, a lack of clear understanding emerged between NRC and industry over the permissible use of operator manual actions in lieu of passive measures. For example, officials at several of the sites we visited produced documentation––some dating from the 1980s––showing NRC’s documented approval of some, but not all, operator manual actions. In other cases, unit operators told us that NRC officials verbally approved certain operator manual actions but did not document their approval in writing. In still other instances, without explicit NRC approval, unit officials applied operator manual actions that NRC had previously approved for similar situations. NRC officials explained that NRC inspectors may not have cited units for violations for these operator manual actions because they believed the actions were safe; however, NRC’s position is that these actions do not comply with NRC’s fire regulations.
Moreover, in fire inspections of nuclear units’ safe shutdown capabilities initiated in 2000, NRC found that units were continuing to use operator manual actions without exemptions in lieu of protecting safe shutdown capabilities through the required passive measures. For example, management officials at some nuclear units authorized staff to manually turn a valve to operate a pump if it failed due to fire damage rather than protecting the cables that operate the valve automatically. Unit officials at one site stated that they rely on more than 20 operator manual actions that must be implemented within 25 minutes for safe shutdown in the event of a fire. In March 2005, NRC published a proposal to revise Appendix R to allow feasible and reliable operator manual actions if units maintained or installed automatic fire detection and suppression systems. The agency stated that this would reduce the regulatory burden by decreasing the need for licensees to prepare exemption requests and the need for NRC to review and approve them. However, industry officials stated, among other things, that the requirement for suppression would be costly without a clear safety enhancement and, therefore, would likely not reduce the number of exemption requests. Officials at one unit told us that this requirement, in conjunction with other proposed NRC rules, could cost as much as $12 million at a single unit, and they believe that the rule would have caused the industry to submit a substantial number of exemption requests to NRC. Due in part to these concerns, NRC withdrew the proposed rule in March 2006. NRC officials reaffirmed the agency’s position that nuclear units using unapproved or undocumented operator manual actions are not in compliance with regulations. In published guidance sent to all operating nuclear units in 2006, NRC stated that this has been its position since Appendix R became effective in 1981.
The guidance further stated that NRC has continued to communicate this position to licensees via various public presentations, proposed rulemaking, and industrywide communications. In June 2006, NRC directed nuclear units to complete corrective actions for these operator manual actions by March 2009, either by applying for licensing exemptions for undocumented or unapproved operator manual actions or by making design modifications to the unit to eliminate the need for operator manual actions. Staff at most nuclear units we visited said they would resolve this issue either by transitioning to the new risk-informed approach or by applying to NRC for licensing exemptions because making modifications would be resource-intensive. In March 2006, NRC also stated in the Federal Register that the regulations allow licensees to use the risk-informed approach in lieu of seeking an exemption or license amendment. NRC officials told us that, at least for the short term, they have no plans to examine unapproved or undocumented operator manual actions at units that have sought exemptions to determine if these units are compliant with regulations. They said that NRC has already received exemption requests for operator manual actions, and it expects about 25 units––mostly units licensed before 1979 that do not intend to adopt the new risk-informed approach––to submit additional exemption requests by March 2009. They estimated that about half of the 58 units that have not decided to transition to the risk-informed approach do not have compliance issues regarding operator manual actions and, therefore, will not need to submit related requests for exemptions.
These officials anticipate that the remaining units that are not transitioning to the risk-informed approach will submit exemption requests in two broad groups: (1) license amendment requests that should be short and easy to process because the technical review has already been completed, showing that the operator manual actions in place do not degrade unit safety; and (2) exemption requests that require more detailed review because the units have been using unapproved operator manual actions. Some nuclear units have used interim compensatory measures for extended periods of time—in some cases, for years—rather than perform the necessary repairs or procure the necessary replacements. As of April 2008, NRC had no firm plans for resolving this problem. For example, at one nuclear unit we visited, unit officials chose to use fire watches for over 5 years instead of replacing faulty penetration seals covering openings in structural fire barriers. Officials at several units told us that they typically use fire watches with dedicated unit personnel as interim compensatory measures whenever they have deficiencies in fire protection features. NRC regional officials confirmed that most interim compensatory measures are currently fire watches and that many of these were implemented at nuclear units after tests during the 1980s and 1990s determined that Thermo-Lag and, later, Hemyc fire wraps, used to protect safe shutdown cables from fire damage, were deficient. According to a statement released by an NRC commissioner in October 2007, interim compensatory measures are not the most transparent or safest way to deal with this issue. Moreover, NRC inspectors have reported weaknesses in certain interim compensatory measures used at some units, including an overreliance on 1-hour roving fire watches rather than making the necessary repairs.
Although NRC regulations state that all deficiencies in fire protection features must be promptly identified and corrected, they do not limit how long units can rely on interim compensatory measures—such as hourly fire watches—before taking corrective actions, nor do they include a provision to compel licensees to take corrective actions. In the early 1990s, NRC issued guidance addressing the timeliness of corrective actions, stating that the agency expected units to complete all corrective actions in a timely manner commensurate with safety and thus eliminate reliance on interim compensatory measures. In 1997, NRC issued additional guidance stating that if a nuclear unit does not resolve a corrective action at the first available opportunity or does not appropriately justify a longer completion schedule, the agency would conclude that the corrective action has not been timely and would consider taking enforcement action. NRC’s current guidance for its inspectors states that a unit may implement interim compensatory measures until final corrective action is completed and that reliance on an interim compensatory measure for operability should be an important consideration in establishing the time frame for completing the corrective action. This guidance further states that conditions calling for interim compensatory measures to restore operability should be resolved quickly because such conditions indicate a greater degree of degradation or nonconformance than conditions that do not rely on interim compensatory measures. For example, the guidance states that NRC expects interim compensatory measures that substitute an operator manual action for automatic safety-related functions to be resolved expeditiously. Officials from several different units that we visited confirmed that NRC has not implemented a standard time frame within which corrective actions must be made regarding safe shutdown deficiencies.
NRC officials further stated that interim compensatory measures could remain in place at some units until they fully transition to the risk-informed approach to fire protection. They stated that this was because many of the interim compensatory measures are in place for Appendix R issues that are not risk significant, and nuclear units will be able to eliminate them after they implement the risk-informed approach. NRC has not resolved uncertainty regarding fire wraps used at some nuclear units for protecting cables critical for safe shutdown. NRC’s regulations state that fire wraps protecting shutdown-related systems must have a fire rating of either 1 or 3 hours. NRC guidance further states that licensees should evaluate fire wrap testing results and related data to ensure that they apply to the conditions under which the licensees intend to install the fire wraps. If all possible configurations cannot be tested, an engineering analysis must be performed to demonstrate that cables would be protected adequately during and after exposure to fire. NRC officials told us that the agency prefers passive fire protection, such as fire barriers—including fire wraps—because such protection is more reliable than other forms of fire protection, such as human actions. Following the 1975 fire at Browns Ferry, manufacturers of fire wraps performed or sponsored fire endurance tests to establish that their fire wraps met either the 1-hour or 3-hour rating period required by NRC regulations. However, NRC became concerned about fire wraps in the late 1980s when Thermo-Lag—a fire wrap material commonly used in units at the time—failed performance tests to meet its intended 1-hour and 3-hour ratings, even though it had originally passed the manufacturer’s fire qualification testing. In 1992, NRC’s Inspector General found that NRC and nuclear licensees had accepted qualification test results for Thermo-Lag that were later determined to be falsified.
From 1991 to 1995, NRC issued a series of information notices on performance test failures and installation deficiencies related to Thermo-Lag fire wrap systems. In the early 1990s, NRC also issued several generic communications informing industry of the test results and requesting that licensees implement appropriate interim compensatory measures and develop plans to resolve any noncompliance. One such communication included the expectation that licensees would review other fire wrap materials and systems and consider actions to avoid problems similar to those identified with Thermo-Lag. Deficiencies emerged in other fire wrap materials starting in the early 1990s, and NRC suggested that industry conduct additional testing. It took NRC over 10 years to initiate and complete its program of large-scale testing of Hemyc—another commonly used fire wrap—and then direct units to take corrective actions after small-scale test results first indicated that Hemyc might not be suitable as a 1-hour fire wrap. In 1993, NRC conducted pilot-scale fire tests on several fire wrap materials, but because the tests were simplified and used small-scale models, NRC applied the test results for screening purposes only. These tests involved various fire wraps assembled in different configurations. The test results indicated unacceptable performance in approximately one-third of the assemblies tested, and NRC reported that the results for Hemyc were inconclusive, although NRC’s Inspector General recently reported that Hemyc had failed this testing. In 1999 and 2000, several NRC inspection findings raised concerns about the performance of Hemyc and MT—another fire wrap—including (1) whether test acceptance criteria developed for insurance purposes are valid for fire barrier endurance tests and (2) the performance of fire wraps when those wraps are used in untested configurations.
In 2001, NRC initiated testing of typical Hemyc and MT installations used in units in the United States; the test results indicated that the Hemyc configuration did not pass the 1-hour criterion and that the MT configuration did not pass the 3-hour criterion. In 2005, NRC held a public meeting with licensees to discuss these test results and how to achieve compliance. In 2006, NRC published guidance stating that fire wraps installed in configurations that are not capable of providing the designed level of protection are considered nonconforming installations and that licensees that use Hemyc and MT—previously accepted fire wraps—may not be conforming with their licenses. This guidance further stated that if licensees identify nonconforming conditions, they may take the following corrective actions: (1) replace the failed fire wraps with an appropriately rated fire wrap material, (2) upgrade the failed fire barrier to a rated barrier, (3) reroute cables or instrumentation lines through another fire area, or (4) voluntarily transition to the risk-informed approach to fire protection. According to NRC’s Inspector General, during testimony before Congress in 1993 on the deficiencies of Thermo-Lag, the then-NRC Chairman committed NRC to assessing all fire wraps to determine what would be needed for them to meet NRC requirements. The testimony also contained an attachment from an NRC task force that made the following two recommendations: (1) NRC should sponsor new tests to evaluate the fire endurance characteristics of other fire wraps, and (2) NRC should review the original fire qualification test reports from fire wrap manufacturers. Although NRC maintains that it has satisfied this commitment, the NRC Inspector General reported in January 2008 that the agency had yet to complete these assessments.
NRC officials told us that licensees are required to conduct endurance tests on fire wraps used at nuclear units; however, the NRC Inspector General noted that, to date, no test has been conducted certifying Hemyc as a 1- or 3-hour fire wrap. Licensees’ proposed resolutions for this problem ranged from replacement with another fire wrap material to requests for license exemptions. In addition, although NRC advised licensees that corrective actions associated with Hemyc and MT are subject to future inspection, the Inspector General noted that NRC has not yet scheduled or budgeted for inspections of licensees’ proposed resolutions. The Inspector General’s report indicated that several different fire wraps that failed endurance tests are still installed at units across the country, but NRC does not maintain current records of these installations. Until issues regarding the effectiveness of fire wraps are resolved, utilities may not be able to use the wraps to their potential and must instead rely on other measures, including operator manual actions. NRC has not finalized guidance on how nuclear units should protect against short circuits that could cause safety-related equipment to start or malfunction spuriously (instances called spurious actuations). In the early 1980s, NRC issued guidance clarifying the requirements in its regulations for safeguarding against spurious actuations that could adversely affect a nuclear unit’s ability to safely shut down. However, NRC approved planning only for spurious actuations occurring one at a time or in isolation. In the late 1990s, nuclear units identified problems related to multiple spurious actuations occurring simultaneously. Due to uncertainty over this issue, in 1998 NRC exempted units from enforcement actions related to spurious actuations, and in 2000 the agency temporarily suspended the electrical circuit analysis portion of its fire inspections at nuclear units.
Cable fire testing performed by industry in 2001 demonstrated that multiple spurious actuations occurring simultaneously or in rapid succession—without sufficient time to mitigate the consequences—may have a relatively high probability of occurring under certain circumstances, including fire damage. Following the 2001 testing, NRC notified units that it expects them to plan for protecting electrical systems against failures due to fire damage, including multiple spurious actuations in both safety-related systems and associated nonsafety systems. NRC resumed electrical inspections in 2005 and proposed that licensees review their fire protection programs to confirm compliance with NRC’s stated regulatory position on this issue and report their findings in writing. The proposal suggested that noncompliant units could come into compliance by (1) reperforming their circuit analyses and making necessary design modifications, (2) performing a risk-informed evaluation, or (3) adopting the overall risk-informed approach to fire protection advocated by NRC. In 2006, however, NRC decided not to issue the proposal, stating that further thought and care could be taken to ensure that the resolution of this issue has a technically sound and traceable regulatory footprint that would provide permanent closure. The nuclear industry has issued statements disagreeing with NRC’s proposed regulatory approach for multiple spurious actuations. Industry officials noted that NRC approved licenses for many units that require operators to plan only for spurious actuations from a fire event that occur one at a time or in isolation and that NRC’s current approach amounts to a new regulatory position on this issue. Furthermore, the industry asserts that units need to plan for protecting against spurious actuations occurring only one at a time or in isolation because, in industry’s view, multiple spurious actuations are highly improbable and should not be considered in safety analyses.
Industry officials told us that the 2001 test results were generated under worst-case scenarios, which operating experience has shown may not represent actual conditions at nuclear units. These officials further told us that NRC’s requirements are impossible to achieve. In December 2007, the nuclear industry proposed an approach for evaluating the effects on circuits of two or more spurious actuations occurring simultaneously, but NRC had not officially commented on the proposal as of May 2008. NRC has stated that the draft versions of the proposal it has reviewed do not achieve regulatory compliance. As of May 2008, despite numerous meetings and communications with industry, NRC had not endorsed guidance or developed a timeline for resolving disagreements with industry about how to plan for multiple spurious actuations of safety-related equipment due to fire damage. However, NRC officials told us they have recently developed a closure plan for this issue that they intend to propose to NRC’s Commissioners for approval in June 2008. NRC officials told us that after this plan is approved, their planned next steps are to determine (1) the analysis tools, such as probabilistic risk assessments or fire models, that units can use to analyze multiple spurious actuations and (2) a time frame for ending the ongoing exemption of units from enforcement actions related to spurious actuations. NRC has no comprehensive database of the operator manual actions or interim compensatory measures implemented at nuclear units since its regulations were first promulgated in 1981, nor of the hundreds of related licensing exemptions. NRC does not require units to report the operator manual actions upon which they rely for safe shutdown. Although NRC reports operator manual actions in the inspection reports it generates through its triennial fire inspections, it does not track these operator manual actions industrywide, nor does it compile them on a unit-by-unit basis.
NRC does not maintain a central database of interim compensatory measures being used in place of permanent fire protection features at units for any duration of time. In addition, NRC regional officials told us that triennial fire inspectors do not typically track the status of interim compensatory measures used for fire protection or which units are using them. However, units record maintenance-related issues in their corrective action programs, including those issues requiring the implementation of interim compensatory measures. As a result, data are available to track interim compensatory measures that last for any period of time as well as to analyze their safety significance. NRC resident inspectors told us that they review these corrective action programs on a daily basis and that they are always aware of the interim compensatory measures in place at their units. They reported that this information is sometimes reviewed by NRC regional offices but rarely by headquarters officials. NRC officials explained that the agency tracked the use of exemptions—including some operator manual actions—through 2001 but then stopped because the number of exemptions requested by units decreased. This information is available partly in electronic form through NRC's public documents system and partly on microfiche. These officials explained that part of the agency's inspection process is to verify that licensees have copies of their license exemptions and, thus, are familiar with their own licensing basis. Inspectors have the ability to confirm an exemption, but once the inspectors are in the field, they often rely on the licensee's documentation. According to these officials, NRC has no central repository for all the exemptions for a unit, but agency inspectors can easily validate a licensee's exemption documentation by looking it up in NRC's public documents system.
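A centralized repository of the kind NRC lacks here need not be elaborate. The following is a purely hypothetical sketch (the table layout, field names, and records are our own invention, not an NRC design) of how exemptions, operator manual actions, and interim compensatory measures could be tracked so that inspectors or headquarters staff could query them directly:

```python
import sqlite3

# Hypothetical tracking schema; all names and records are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE measure (
    unit TEXT,
    kind TEXT,              -- 'exemption', 'manual_action', or 'interim_measure'
    description TEXT,
    start_date TEXT,
    end_date TEXT,          -- NULL means still in place
    safety_significant INTEGER)""")

con.executemany("INSERT INTO measure VALUES (?, ?, ?, ?, ?, ?)", [
    ("Unit A", "interim_measure", "hourly fire watch, cable spreading room",
     "2003-05-01", None, 1),
    ("Unit A", "manual_action", "manually open valve V-12 on fire alarm",
     "2006-01-15", None, 0),
    ("Unit B", "interim_measure", "continuous fire watch, turbine building",
     "2007-11-20", "2008-02-01", 0),
])

# Example query: safety-significant interim measures still open past a cutoff
# date -- i.e., the "extended" measures discussed in the report.
open_long = con.execute("""
    SELECT unit, description, start_date
    FROM measure
    WHERE kind = 'interim_measure'
      AND end_date IS NULL
      AND safety_significant = 1
      AND start_date < ?""", ("2007-05-01",)).fetchall()
print(open_long)
```

Such a store would let trend questions ("which units have safety-significant measures open for more than a year?") be answered in one query rather than by searching inspection reports unit by unit.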
They said that they conduct the triennial inspections over 2 weeks at the unit because they realize licensees may not be able to locate documentation immediately. They notify licensees what documents they need during the first week onsite so the licensees can have time to prepare them for NRC's return trip. NRC regional officials told us that it is difficult to inspect fire safety due to the complicated licensing basis and inability to track documents. An NRC commissioner told us that nuclear power units have adopted many different fire safety practices with undocumented approval status. The commissioner further stated that NRC does not have good documentation of which units are using interim compensatory measures or operator manual actions for fire protection and that it needs a centralized database to track these issues. The commissioner stated the lack of a centralized database does not necessarily indicate that safety has been compromised. However, without a database that contains information about the existence, length, nature, and safety significance of interim compensatory measures, operator manual actions, and exemptions in general, NRC may not have a way to easily track which units have had significant numbers of extended interim compensatory measures and possibly unapproved operator manual actions. Moreover, the database could help NRC make informed decisions about how to resolve these long-standing issues. Also, the database could help NRC inspectors more easily determine whether specific operator manual actions or extended interim compensatory measures have, in fact, been approved through exemptions.

Officials at 46 nuclear units have announced their intention to adopt the risk-informed approach to fire safety.
Officials from NRC, industry, and units we visited that plan to adopt the risk-informed approach stated that they expect the new approach will make units safer by reducing reliance on unreliable operator manual actions and help identify areas of the unit where multiple spurious actuations could occur. Academic and industry experts believe that the risk-informed approach could provide safety benefits, but they stated that NRC must address inherent complexities and unknowns related to the development of probabilistic risk assessments used in the risk-informed approach. Furthermore, the shortage of skilled personnel and concerns about the potential cost of conducting risk analyses could slow the transition process and limit the number of units that ultimately make the transition to the new approach. As of May 2008, 46 nuclear units at 29 sites have announced that they will transition to the risk-informed approach endorsed by NRC (see fig. 1). To facilitate the transition process for the large number of units that will change to the new approach within the next 5 years, NRC is overseeing a pilot program involving three nuclear units at the Oconee Nuclear Power Plant in South Carolina and one unit at the Shearon Harris Nuclear Power Plant in North Carolina, and NRC expects to release its evaluation of these units' license amendment requests supporting their transition to the risk-informed approach by March 2009. At that point, 22 nuclear units will have submitted their license amendment requests for NRC's review, followed by other units in a staggered fashion. NRC and transitioning unit officials we spoke with expected that transitioning to the new approach could simplify nuclear units' licensing bases by reducing the number of future exemptions significantly at each unit.
Furthermore, officials from each of the 12 units we contacted that plan to adopt the approach said that one of the main reasons for their transition is to reduce the number of exemptions, including those involving operator manual actions, that are required to ensure safe shutdown capability under NRC's existing regulations. Specifically, these officials told us that they expected that conducting fire modeling and probabilistic risk assessments—aspects of the risk-informed approach—would allow the nuclear units to demonstrate that fire protection features in an area with shutdown-related systems would be acceptable based on the expected fire risk in that area. According to some of these officials, under these circumstances units would no longer need to use exemptions—including those involving operator manual actions—to demonstrate compliance with the regulations. Officials at 10 of the units we visited stated that, as a result, the approach could eliminate the need for some operator manual actions. For example, officials at one site that contained two nuclear units expected that by transitioning to the new risk-informed approach, the units could eliminate the need for over 1,200 operator manual actions currently in place. Other unit officials conceded that the outcomes of probabilistic risk assessments may demonstrate the need for new operator manual actions that are not required under the current regulations. These officials added that any new actions or other safety features could be applied only to those areas subject to fire risk, rather than to the entire facility, thereby allowing units to maximize resources. According to nuclear unit officials, adopting the risk-informed approach could also help resolve concerns about multiple spurious actuations that could occur as a result of fire events.
Officials from six units we visited told us that conducting the probabilistic risk assessments would allow them to identify where multiple spurious actuations are most likely to occur and which circuit systems would be most likely affected. These officials told us that limiting circuit analyses to the most critical areas would make such analyses feasible. NRC has repeatedly promoted the transition to the new risk-informed approach as a way for nuclear units to address the multiple spurious actuation issue. According to industry officials and academic experts we consulted, the results of a probabilistic risk assessment used in the risk-informed approach could help units direct safety resources to areas where risk from accidents could be minimized or where the risk of damage to the core or a unit's safe shutdown capability is highest; however, officials also noted that the absence of significant fire events since the 1975 Browns Ferry fire limits the relevant data on fire events at nuclear units. Specifically, these experts noted the following:

Probabilistic risk assessments require large amounts of data; therefore, the small number of fires since the Browns Ferry fire and the subsequent lack of real-world data may increase the amount of uncertainty in the analysis.

Probabilistic risk assessments are limited by the range of scenarios that practitioners include in the analysis. If a scenario is not examined, its risks cannot be considered and mitigated.

The role of human performance and error in a fire scenario—especially those scenarios involving operator manual actions—is difficult to model.

Finally, these parties stated that probabilistic risk assessments in general are difficult for a regulator to review and are not as enforceable as a prescriptive approach, in which compliance with specific requirements can be inspected and enforced.
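To make the data-sparsity concern concrete: a fire probabilistic risk assessment typically multiplies an estimated fire ignition frequency by conditional factors such as fire severity, failure of suppression, and conditional core damage probability. The sketch below is purely illustrative; every number is invented, and the simple multiplicative chain is only a simplification of the decompositions used in actual fire PRA methods. It shows how a handful of observed fires leaves the resulting core damage frequency (CDF) estimate spanning roughly an order of magnitude:

```python
import random

random.seed(0)

# Invented observation data for one hypothetical plant area.
n_fires = 2           # fires observed
exposure_years = 30   # reactor-years of observation

# With a Jeffreys prior, the posterior for a Poisson fire frequency is
# Gamma(shape = n_fires + 0.5, scale = 1 / exposure_years).
freq_samples = [random.gammavariate(n_fires + 0.5, 1.0 / exposure_years)
                for _ in range(100_000)]

# Invented conditional factors (point values, for simplicity).
severity_factor = 0.2     # fraction of fires severe enough to damage cables
p_non_suppression = 0.1   # probability suppression fails in time
ccdp = 1e-3               # conditional core damage probability given damage

cdf_samples = sorted(f * severity_factor * p_non_suppression * ccdp
                     for f in freq_samples)
mean_cdf = sum(cdf_samples) / len(cdf_samples)
lo = cdf_samples[2_500]    # 2.5th percentile
hi = cdf_samples[97_500]   # 97.5th percentile
print(f"mean CDF ~ {mean_cdf:.1e}/yr; 95% interval [{lo:.1e}, {hi:.1e}]")
```

With only two observed fires, the 95 percent interval spans more than a factor of ten; the scenario-coverage and human-performance limitations the experts noted add further uncertainty that this sketch does not model.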
Numerous NRC, industry, and academic officials we spoke with expressed concern that the transition to the new risk-informed approach could be delayed by a limited number of personnel with the necessary skills and training to design, review, and inspect against probabilistic risk assessments. Several nuclear unit officials told us that the pool of fire protection engineers with expertise in these areas is already heavily burdened with developing probabilistic risk assessments for the pilot program units and other units, including the 38 units that had already begun transitioning as of October 2007. Academic experts, consultants, and industry officials told us that the current shortage of skilled personnel is due to (1) an increased demand for individuals with critical skills under the risk-informed approach and (2) a shortage of academic programs specializing in fire protection engineering. According to these experts and officials, the current number of individuals skilled in conducting probabilistic risk assessments is insufficient to handle the increased work expected to be generated by the transition to a risk-informed approach. NRC officials we spoke with expressed concern that the nuclear industry has not trained or developed sufficient personnel with needed fire protection skills. These officials also told us that they expect that, as demand for work increases, more engineering students will choose to go into the fire protection field. However, to date, only one university has undergraduate and graduate programs in the fire protection engineering field, and the ability to produce graduates is limited. Other officials we spoke with noted that engineers in other fields can be trained in fire protection but that this training takes a significant amount of time. 
Academic experts and industry officials stated that without additional skilled personnel, units would not be able to perform all of the necessary activities, especially probabilistic risk assessments, within the 3-year enforcement discretion "window" that NRC has granted each transition unit as an incentive to adopt the new approach. Most nuclear units that responded to an industry survey on this issue indicated that they expected that they will need NRC to extend the discretion deadline for each unit. Delays in individual units' transition processes could create a significant backlog in the entire transition process. NRC also faces an aging workforce and the likelihood that it will be competing with industry for engineers with skills in the fire protection area. As we reported in January 2007, the agency as a whole faces significant human capital challenges, in part because approximately 33 percent of its workforce will be eligible to retire in 2010. We reported that NRC identified several critical skill gaps that it must address, such as civil engineering and operator licensing. To fill these gaps, the agency has taken steps, including supporting key university programs, to attract greater numbers of students into mission-critical skill areas and to offer scholarships to those studying in these fields. In relation to fire protection, and probabilistic risk assessments in particular, NRC officials told us that they expect to address future resource needs through the use of a multiyear budget and by contracting with the Department of Energy's National Laboratories to help manage the process. Further, these officials stated that part of the purpose of the pilot program is to help them determine future resource needs for the transition to the risk-informed approach, and, as a result, they do not intend to finalize resource planning until the pilot programs are complete.
A number of experts in the engineering field, including academics and fire engineers, stated that it will be difficult for NRC to compete with industry for the projected number of graduates in this field over the next few years. Also, NRC's total workload, in addition to fire protection, is expected to increase as nuclear unit operators submit license applications to build new units, extend the lives of existing units, or increase the generating capacity of existing units. For example, NRC staff are currently reviewing license applications for units at six sites and have recently announced that operators have submitted license applications for two additional units at a seventh site. The agency expects to review or receive 12 more applications during 2008.

To date, 58 of the nation's 104 nuclear units have not announced whether they will adopt the risk-informed approach. NRC and industry officials stated that they expected that newer units and units with relatively few exemptions from existing regulations would be less likely to transition to the new approach, while those with older licenses and extensive exemptions would make the transition. However, to date, 25 units licensed prior to 1979 have yet to announce whether they will make the transition. Officials from nontransitioning units we visited told us that concerns over NRC's guidance and timetable have been key reasons why they have not yet announced their intent to transition. According to industry and nuclear unit officials we spoke with, the costs associated with conducting fire probabilistic risk assessments for the units may be too high to justify transitioning to the new approach. For example, some officials told us that performing the necessary analysis of circuits and fire area features in support of the probabilistic risk assessment could cost millions of dollars without substantially improving fire safety.
These officials noted that both pilot sites currently expect to spend approximately $5 million to $10 million each in transition costs, including circuit analysis. Some of these officials also noted that updating probabilistic risk assessments—which units are required to do every 3 years or whenever any significant changes are made to a unit—would require units to dedicate staff to this effort on a long term or permanent basis. Officials at transition and nontransition units stated that NRC’s guidance for developing fire models that support probabilistic risk assessments is overly conservative. In effect, these models require engineers to assume that fires will result in massive damage, burn for significant periods of time, and require greater response and mitigation efforts than less conservative models. As such, these officials stated that the fire models provided by NRC guidance would not provide an accurate assessment of risk at a given unit. Furthermore, these officials stated that unit modifications required by the risk analysis could cost more than seeking exemptions from NRC. Some of these officials stated that they expect NRC to revise the probabilistic risk assessment guidance to facilitate the transition process in the future. NRC officials told us that nuclear units have the option to develop and conduct their own fire models rather than follow NRC’s guidance. Furthermore, in its initial review of one of the pilot unit’s probabilistic risk assessments, NRC agreed with industry that models used in the development of the probabilistic risk assessment contained some overly conservative aspects and recommended that the unit conduct additional analysis to address this. However, nuclear unit officials expressed concern that the costs of developing site-specific fire models, a process that includes numerous iterations, could be prohibitive. 
Nuclear industry officials identified another area of concern in the current transition schedule, in which 22 units are expected to submit their license amendment requests for the risk-informed approach before NRC finishes assessing the license amendment requests for the pilot program units in March 2009. Although NRC has established a steering committee and a frequently asked questions process to disseminate information learned in the ongoing pilot programs to other transition units, a number of nuclear unit officials expressed concern about beginning the transition process before the transition pilot programs are complete and lessons learned from the pilot programs are available. For example, an official at one of the pilot sites noted that the success of the pilot program probably will not be known until after the first triennial safety inspection conducted by NRC, which will occur after March 2009. The transition project manager for two nonpilot transition units expressed his opinion that, due to uncertainties regarding the work units must perform in order to comply with the risk-informed standard, no unit should commit itself to transitioning to the new approach until 2 years after the completion of the pilot programs.

NRC's ability to regulate fire safety at nuclear power units has been adversely affected by several long-standing issues. To its credit, NRC has required that nuclear units come into compliance with requirements related to the use of unapproved operator manual actions by March 2009. However, NRC has not effectively resolved the long-term use of interim compensatory measures or the possibility of multiple spurious actuations. Especially critical, in our opinion, is the need for NRC to test fire wraps and resolve questions about their effectiveness, because units have instituted many manual actions and compensatory measures in response to fire wraps that were found lacking in effectiveness in various tests.
Compounding these issues, NRC has no central database of exemptions, operator manual actions, and extended interim compensatory measures. Such a system would allow it to track trends in compliance, devise solutions to compliance issues, and help provide important information to NRC's inspection activities. Unless NRC deals effectively with these issues, units will likely continue to postpone making necessary repairs and replacements, choosing instead to rely on unapproved or undocumented manual actions as well as compensatory measures that, in some cases, continue for years. According to NRC, nuclear fire safety can be considered to be degraded when reliance on passive measures is supplanted by manual actions or compensatory measures. By taking prompt action to address the unapproved use of operator manual actions, long-term use of interim compensatory measures, the effectiveness of fire wraps, and multiple spurious actuations, NRC would provide greater assurance to the public that nuclear units are operated in a way that promotes fire safety. Despite the transition of 46 units to a new risk-informed approach, for which the implementation time frames are uncertain, the majority of the nation's nuclear units will remain under the existing regulatory approach, and the long-standing issues will continue to apply directly to them.

To address long-standing issues that have affected NRC's regulation of fire safety at the nation's commercial nuclear power units, we recommend that the NRC Commissioners direct NRC staff to take the following four actions:

Develop a central database for tracking the status of exemptions, compensatory measures, and manual actions in place nationwide and at individual commercial nuclear units.
Address safety concerns related to the extended use of interim compensatory measures by (1) defining how long an interim compensatory measure can be used and identifying the interim compensatory measures in place at nuclear units that exceed that threshold, (2) assessing the safety significance of such extended compensatory measures and defining how long a safety-significant interim compensatory measure can be used before NRC requires the unit operator to make the necessary repairs or replacements or request an exemption or deviation from its fire safety requirements, and (3) developing a plan and deadlines for units to resolve those compensatory measures.

Address long-standing concerns about the effectiveness of fire wraps at commercial nuclear units by analyzing the effectiveness of existing fire wraps and undertaking efforts to ensure that fire endurance tests have been conducted to qualify fire wraps as NRC-approved 1- or 3-hour fire barriers.

Address long-standing concerns about nuclear units' ability to safeguard against multiple spurious actuations by committing to a specific date for developing guidelines that units should meet to prevent multiple spurious actuations.

We provided a draft of this report to the Commissioners of the Nuclear Regulatory Commission for their review and comment. In commenting on a draft of this report, NRC found that it was accurate, complete, and handled sensitive information appropriately and stated that it intends to give GAO's findings and conclusions serious consideration. However, in its response, NRC did not provide comments on our recommendations. NRC's comments are reprinted in appendix II. We are sending copies of this report to the Commissioners of the Nuclear Regulatory Commission, the Nuclear Regulatory Commission's Office of the Inspector General, and interested congressional committees. We will also make copies available to others on request.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To examine the number, causes, and reported safety significance of fire incidents at nuclear reactor units since 1995, we analyzed Nuclear Regulatory Commission (NRC) data on fires occurring at operating commercial nuclear reactor units from January 1995 to December 2007. NRC requires units to report fire events meeting certain criteria, including fires lasting longer than 15 minutes or those threatening safety. To assess the reliability of the data, we (1) interviewed NRC officials about the steps they take to ensure the accuracy of the data; (2) confirmed details about selected fire events, NRC inspection findings, and local emergency responders with unit management officials and NRC resident inspectors during site visits to nuclear power units; (3) reviewed NRC inspection reports related to fire protection; and (4) checked the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of this report.

To examine what is known about nuclear reactor units' compliance with NRC's deterministic fire protection regulations, we reviewed the relevant fire protection regulations and guidance from NRC and industry. We also met with and reviewed documents provided by officials from NRC, industry, academia, and public interest groups. In particular, we interviewed officials from NRC's Fire Protection Branch, Office of Enforcement, four regional offices, Office of the Inspector General, and Advisory Committee on Reactor Safeguards.
In addition, we interviewed officials from the Nuclear Energy Institute, National Fire Protection Association, nuclear industry consultants, and nuclear insurance companies. We conducted site visits to nuclear power units, where we met with unit management officials and NRC resident inspectors. During these site visits, we discussed and received documentation on the use of operator manual actions, interim compensatory measures, and fire wraps, and we obtained views on multiple spurious actuations and their impact on safe shutdown. We also reviewed and discussed each unit’s corrective action plan. Finally, we observed multiple NRC public meetings and various collaborations with industry concerning issues related to compliance with NRC’s deterministic fire protection regulations. To examine the status of the nuclear industry’s implementation of the risk- informed approach to fire safety advocated by NRC, we met with and reviewed documents provided by officials from NRC, industry, and public interest groups, as well as academic officials with research experience in fire safety and risk analysis. In particular, we interviewed officials from NRC’s Fire Protection Branch, Office of Enforcement, four regional offices, Office of the Inspector General, and Advisory Committee on Reactor Safeguards. We also interviewed officials from the Nuclear Energy Institute, National Fire Protection Association, nuclear industry consultants, and nuclear insurance companies. We conducted site visits to nuclear power units, where we met with unit management officials and NRC resident inspectors. During these site visits, we discussed and received documentation on the risk-informed approach to fire safety, including resource planning and analysis justifying decisions on whether or not to transition to NFPA-805. We also observed multiple NRC public meetings and collaborations with industry concerning issues related to the risk-informed approach to fire safety. 
Finally, we reviewed relevant fire protection regulations and guidance from NRC and industry. In addressing each of our three objectives, we conducted visits to sites containing one or more commercial nuclear reactor units. These visits allowed us to obtain in-depth knowledge about fire protection at each site. We selected a nonprobability sample of sites to visit because certain factors—including custom designs that differ according to each nuclear unit, hundreds of licensing exemptions and deviations in place at units nationwide, and the geographic dispersal of units across 31 states—complicate collecting data and reporting generalizations about the entire population of units. We chose 10 sites (totaling 20 operating nuclear reactor units out of a national total of 104 operating nuclear units) that provided coverage of each of NRC's four regional offices and that represented varying levels of unit fire safety performance, unit licensing characteristics, reactor types, and NRC oversight. At the time of our visits, 5 of the 10 sites we visited (totaling 10 of the 20 nuclear reactor units we visited) had notified NRC that they intend to transition to the new risk-informed approach to fire safety. Over the course of our work, we visited the following sites: (1) D.C. Cook (2 units), located near Benton Harbor, Michigan; (2) Diablo Canyon (2 units), located near San Luis Obispo, California; (3) Dresden (2 units), located near Morris, Illinois; (4) Indian Point (2 units), located near New York, New York; (5) La Salle (2 units), located near Ottawa, Illinois; (6) Nine Mile Point (2 units), located near Oswego, New York; (7) Oconee (3 units), located near Greenville, South Carolina; (8) San Onofre (2 units), located near San Clemente, California; (9) Shearon Harris (1 unit), located near Raleigh, North Carolina; and (10) Vogtle (2 units), located near Augusta, Georgia.
We selected the nonprobability sample from the entire population of commercial nuclear power units currently operating in the United States. In order to capture variations that could play a role in how these units address fire safety, we designed our site visit selection criteria to represent the following: (1) geographic diversity; (2) units licensed to operate before and after 1979; (3) sites choosing to remain under the deterministic regulations and those transitioning to the risk-informed approach; (4) pressurized and boiling water reactor types; (5) a variety of safety problems in which inspection findings or performance indicators of higher risk significance (white, yellow, or red) were issued; (6) units that have been subjected to at least some level of increased oversight since regular fire inspections were initiated in 2000; and (7) sites with various numbers of fires reportable to NRC since 1995. We received feedback on our selection criteria from nuclear insurance company officials, nuclear industry consultants, NRC officials, and academic officials with research experience in fire safety and risk analysis. We interviewed NRC resident inspectors and unit management officials at each site to learn about the fire protection program at the site. We also observed fire protection features at each site, including safe-shutdown equipment and areas of the units where operator manual actions, interim compensatory measures, and fire wraps are used for fire safety. Finally, we observed part of an NRC triennial fire inspection at one site. We conducted this performance audit from September 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Ernie Hazera (Assistant Director), Cindy Gilbert, Chad M. Gorman, Mehrzad Nadji, Omari Norman, Alison O'Neill, Steve Rossman, and Jena Sinkfield made key contributions to this report.

Nuclear Energy: NRC Has Made Progress in Implementing Its Reactor Oversight and Licensing Processes but Continues to Face Challenges. GAO-08-114T. Washington, D.C.: October 3, 2007.

Nuclear Energy: NRC's Workforce and Processes for New Reactor Licensing Are Generally in Place, but Uncertainties Remain as Industry Begins to Submit Applications. GAO-07-1129. Washington, D.C.: September 21, 2007.

Human Capital: Retirements and Anticipated New Reactor Applications Will Challenge NRC's Workforce. GAO-07-105. Washington, D.C.: January 17, 2007.

Nuclear Regulatory Commission: Oversight of Nuclear Power Plant Safety Has Improved, but Refinements Are Needed. GAO-06-1029. Washington, D.C.: September 27, 2006.

Nuclear Regulatory Commission: Preliminary Observations on Its Process to Oversee the Safe Operation of Nuclear Power Plants. GAO-06-888T. Washington, D.C.: June 19, 2006.

Nuclear Regulatory Commission: Preliminary Observations on Its Oversight to Ensure the Safe Operation of Nuclear Power Plants. GAO-06-886T. Washington, D.C.: June 15, 2006.

Nuclear Regulatory Commission: Challenges Facing NRC in Effectively Carrying Out Its Mission. GAO-05-754T. Washington, D.C.: May 26, 2005.

Nuclear Regulation: Challenges Confronting NRC in a Changing Regulatory Environment. GAO-01-707T. Washington, D.C.: May 8, 2001.

Major Management Challenges and Performance Risks: Nuclear Regulatory Commission. GAO-01-259. Washington, D.C.: January 2001.

Fire Protection: Barriers to Effective Implementation of NRC's Safety Oversight Process. GAO/RCED-00-39. Washington, D.C.: April 19, 2000.
Nuclear Regulation: Regulatory and Cultural Changes Challenge NRC. GAO/T-RCED-00-115. Washington, D.C.: March 9, 2000. Nuclear Regulatory Commission: Strategy Needed to Develop a Risk- Informed Safety Approach. GAO/T-RCED-99-071. Washington, D.C.: February 4, 1999.
After a 1975 fire at the Browns Ferry nuclear plant in Alabama threatened the unit's ability to shut down safely, the Nuclear Regulatory Commission (NRC) issued prescriptive fire safety rules for commercial nuclear units. However, nuclear units with different designs and different ages have had difficulty meeting these rules and have sought exemptions to them. In 2004, NRC began to encourage the nation's 104 nuclear units to transition to a less prescriptive, risk-informed approach that will analyze the fire risks of individual nuclear units. GAO was asked to examine (1) the number and causes of fire incidents at nuclear units since 1995, (2) compliance with NRC fire safety regulations, and (3) the transition to the new approach. GAO visited 10 of the 65 nuclear sites nationwide, reviewed NRC reports and related documentation about fire events at nuclear units, and interviewed NRC and industry officials to examine compliance with existing fire protection rules and the transition to the new approach. According to NRC, all 125 fires at 54 of the nation's 65 nuclear sites from January 1995 through December 2007 were classified as being of limited safety significance. According to NRC, many of these fires were in areas that do not affect shutdown operations or occurred during refueling outages, when nuclear units are already shut down. NRC's characterization of the location, significance, and circumstances of those fire events was consistent with records GAO reviewed and statements of utility and industry officials GAO contacted. NRC has not resolved several long-standing issues that affect the nuclear industry's compliance with existing NRC fire regulations, and NRC lacks a comprehensive database on the status of compliance. 
These long-standing issues include (1) nuclear units' reliance on manual actions by unit workers to ensure fire safety (for example, a unit worker manually turns a valve to operate a water pump) rather than "passive" measures, such as fire barriers and automatic fire detection and suppression; (2) workers' use of "interim compensatory measures" (primarily fire watches) to ensure fire safety for extended periods of time, rather than making repairs; (3) uncertainty regarding the effectiveness of fire wraps used to protect electrical cables necessary for the safe shutdown of a nuclear unit; and (4) mitigating the impacts of short circuits that can cause simultaneous, or near-simultaneous, malfunctions of safety-related equipment (called "multiple spurious actuations") and hence complicate the safe shutdown of nuclear units. Compounding these issues is that NRC has no centralized database on the use of exemptions from regulations, manual actions, or compensatory measures used for long periods of time that would facilitate the study of compliance trends or help NRC's field inspectors in examining unit compliance. Primarily to simplify units' complex licensing, NRC is encouraging nuclear units to transition to a risk-informed approach. As of April 2008, some 46 units had stated they would adopt the new approach. However, the transition effort faces significant human capital, cost, and methodological challenges. According to NRC, as well as academics and the nuclear industry, a lack of people with fire modeling, risk assessment, and plant-specific expertise could slow the transition process. They also expressed concern about the potentially high costs of the new approach relative to uncertain benefits. For example, according to nuclear unit officials, the costs to perform the necessary fire analyses and risk assessments could be millions of dollars per unit. Units, they said, may also need to make costly new modifications as a result of these analyses.
Consumer adoption of mobile devices is growing rapidly, enabled by affordable prices, increasingly reliable connections, and faster transmission speeds. According to a recent analysis, mobile devices are the fastest growing consumer technology, with worldwide sales increasing from 300 million in 2010 to an estimated 650 million in 2012. Advances in computing technology have resulted in increased speed and storage capacity for mobile devices. These advances have enhanced consumers’ abilities to perform a wide range of online tasks. While these devices provide many productivity benefits to consumers and organizations, they also pose security risks if not properly protected.

Several different types of private sector entities provide products and services that are used by consumers as part of a seamless mobile telecommunications system. These entities include mobile device manufacturers, operating system developers, application developers, and wireless carriers. WiFi and Bluetooth are commonly used technologies that allow an electronic device to exchange data wirelessly (using radio waves) with other devices and computer networks.

Google Inc. led the development of Android, an operating system for mobile devices, based on the Linux operating system. Android, like Linux, is an “open” operating system, meaning that its software code is publicly available and can be tailored to the needs of individual devices and telecommunications carriers. Thus, many different tailored versions of the software are in use. To run on Android devices, software applications need to be digitally signed by the developer, who is responsible for the application’s behavior. Android applications are made available on third-party application marketplaces, websites, and on the official online Android application store called Google Play.

Research In Motion Corp. developed a proprietary operating system for its BlackBerry mobile devices.
Although a proprietary system, it can run any third-party applications that are written in Java. Applications are tested by Research In Motion before users can download them. In addition, any application given access to sensitive data or features when installed is required by Research In Motion to be digitally signed by the developer. BlackBerry applications are available for download on the online store called BlackBerry App World.

Mobile application developers develop the software applications that consumers interact with directly. In many cases, these applications provide the same services that are available through traditional websites, such as news and information services, online banking, shopping, and electronic games. Other applications are designed to take into account a user’s physical location to provide tailored information or services, such as information about nearby shops, restaurants, or other elements of the physical environment.

Wireless carriers manage telecommunications networks and provide phone services, including mobile devices, directly to consumers. While carriers do not design or manufacture their own mobile devices, in some cases they can influence the design and the features of other manufacturers’ products because they control sales and interactions with large numbers of consumers. Major wireless carriers with the largest total market shares in the United States include Verizon Wireless, AT&T Inc., Sprint, and T-Mobile USA Inc.

Carriers provide basic telephone service through wireless cellular networks, which cover large distances. However, other types of shorter-range wireless networks may also be used with mobile devices. These shorter-range networks may be supported by the same carriers or by different providers. Major types of wireless networks include cellular networks, WiFi networks, and wireless personal area networks.

Cellular networks.
Cellular networks are managed by carriers and provide coverage based on dividing a large geographical service area into smaller areas of coverage called “cells.” The cellular network is a radio network distributed over the cells, and each cell has a base station equipped with an antenna to receive and transmit radio signals to mobile phones within its coverage area. A mobile device’s communications are generally associated with the base station of the cell in which it is located. Each base station is linked to a mobile telephone switching office, which is also connected to the local wireline telephone network. The mobile phone switching office directs calls to the desired locations, whether to another mobile phone or a traditional wireline telephone. This office is responsible for switching calls from one cell to another in a smooth and seamless manner as consumers change locations during a call. Figure 1 depicts the key components of this cellular network.

WiFi networks. WiFi networking nodes may be established by businesses or consumers to provide networking service within a limited geographic area, such as within a home, office, or place of business. They are generally composed of two basic elements: access points and wireless-enabled devices, such as smart phones and tablet computers. These devices use radio transmitters and receivers to communicate with each other. Access points are physically wired to a conventional network and provide a means for wireless devices to connect to them. WiFi networks conform to the Institute of Electrical and Electronics Engineers 802.11 standards.

Other wireless personal area networks. Other wireless personal area networks may be used that do not conform to the WiFi standard. For example, the Bluetooth standard is often used to establish connectivity with nearby components, such as headsets or computer keyboards.
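The cell-association and handoff behavior described above can be illustrated with a toy model. The station names, coordinates, and inverse-square signal model below are invented for illustration and are not drawn from any carrier's actual network.

```python
# Toy model of cellular association: a device attaches to the base
# station with the strongest received signal, approximated here by
# simple inverse-square path loss. All names and numbers are invented.
BASE_STATIONS = {
    "cell_A": (0.0, 0.0),
    "cell_B": (5.0, 0.0),
    "cell_C": (0.0, 5.0),
}

def received_power(device_xy, station_xy, tx_power=1.0):
    """Idealized inverse-square signal strength (no fading, no obstacles)."""
    dx = device_xy[0] - station_xy[0]
    dy = device_xy[1] - station_xy[1]
    dist_sq = dx * dx + dy * dy
    return tx_power / max(dist_sq, 1e-9)  # avoid division by zero

def associate(device_xy):
    """Return the cell whose base station offers the strongest signal."""
    return max(BASE_STATIONS, key=lambda c: received_power(device_xy, BASE_STATIONS[c]))
```

As the device's coordinates change, `associate` returns different cells, loosely mirroring the handoff that the switching office performs as a consumer moves during a call.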
While federal agencies are not responsible for ensuring the security of individual mobile devices, several are involved in activities designed to address and promote cybersecurity and mobile security in general.

Under Homeland Security Presidential Directive 7, the Department of Commerce (Commerce) is responsible, in coordination with other federal and nonfederal entities, for improving technology for cyber systems and promoting efforts to protect critical infrastructure. Within Commerce, the National Institute of Standards and Technology (NIST) is responsible for developing information security standards and guidelines, including minimum requirements for unclassified federal information systems, as part of its statutory responsibilities under the Federal Information Security Management Act (FISMA). For example, NIST has developed guidelines on cellphone and Bluetooth security. These standards and guidelines are generally made available to the public and can be used by both the public and private sectors.

NIST also serves as the lead federal agency for coordinating the National Initiative for Cybersecurity Education (NICE) with other agencies. According to NIST, NICE seeks to establish an operational, sustainable, and continually improving cybersecurity education program for the nation. NICE includes an awareness initiative, led by the Department of Homeland Security (DHS), that focuses on boosting national cybersecurity awareness through public service campaigns to promote cybersecurity and responsible use of the Internet, and on making cybersecurity a popular educational and career pursuit for older students. As we previously reported, NIST developed a draft strategic plan for the NICE initiative. This plan includes strategic goals, supporting objectives, and related activities for the awareness component.
Specifically, the draft strategic plan calls for (1) improving citizens’ knowledge to allow them to make smart choices as they manage online risk, (2) improving knowledge of cybersecurity within organizations so that resources are well applied to meet the most obvious and serious threats, and (3) enabling access to cybersecurity resources. The plan also identifies supporting activities and products designed to support the overarching goal, such as the “Stop. Think. Connect.” awareness campaign.

According to Commerce’s National Telecommunications and Information Administration (NTIA), it serves as the President’s principal adviser on telecommunications policies pertaining to economic and technological advancement and to the regulation of the telecommunications industry, including mobile telecommunications. NTIA is responsible for coordinating telecommunications activities of the executive branch and assisting in the formulation of policies and standards for those activities, including considerations of interoperability, privacy, security, spectrum use, and emergency readiness.

Federal law and policy tasks DHS with critical infrastructure protection responsibilities that include creating a safe, secure, and resilient cyber environment in conjunction with other federal agencies, other levels of government, international organizations, and industry. The National Strategy to Secure Cyberspace tasked DHS as the lead agency in promoting a comprehensive national awareness program to empower Americans to secure their own parts of cyberspace. Consistent with that tasking, DHS is currently leading the awareness component of NICE.

The Federal Communications Commission’s (FCC) role in mobile security stems from its broad authority to regulate interstate and international communications, including for the purpose of “promoting safety of life and property.” In addition, FCC has established the Communications Security, Reliability, and Interoperability Council (CSRIC).
CSRIC is a federal advisory committee whose mission is to provide recommendations to FCC to help ensure, among other things, secure and reliable communications systems, including telecommunications, media, and public safety. A previous CSRIC included a working group that was focused on identifying cybersecurity best practices (including mobile security practices) and had representation from segments of the communications industry and public safety communities. The current CSRIC has focused on the development and implementation of best practices related to several specific cybersecurity topics. FCC has also established a Technological Advisory Council, which includes various working groups, one of which has been working since March 2012 to identify, prioritize, and analyze mobile security and privacy issues.

The Federal Trade Commission (FTC) promotes competition and protects the public by, among other things, bringing enforcement actions against entities that engage in unfair or deceptive acts or practices. An unfair act is an act or practice that causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and is not outweighed by countervailing benefits to consumers or to competition. A deceptive act or practice occurs if there is a representation, omission, or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment. According to FTC, its authority to bring enforcement actions covers many of the entities that provide mobile products and services to consumers, including mobile device manufacturers, operating system developers, and application developers. FTC’s jurisdiction also extends to wireless carriers when they are not engaged in common carrier activities. For example, mobile phone operators engaging in mobile payments functions such as direct-to-carrier billing are under FTC’s jurisdiction.
The Department of Defense (DOD) is responsible for securing its systems, including mobile devices that use its networks or contain DOD data. While it has no responsibility with regard to consumer mobile devices, its guidance can be useful for consumers. For example, the DOD Security Technical Implementation Guides are available to the public. These guides contain technical guidance to secure information systems or software that might otherwise be vulnerable to a malicious computer attack. In addition, certain guides address aspects of mobile device security.

The Office of Management and Budget (OMB) is responsible for overseeing and providing guidance to federal agencies on the use of information technology, which can include mobile devices. One OMB memorandum to federal agencies, for example, instructs agencies to properly safeguard information stored on federal systems (including mobile devices) by requiring the use of encryption and a “time-out” function for re-authentication after 30 minutes of inactivity.

Threats to the security of mobile devices and the information they store and process have been increasing significantly. Many of these threats are similar to those that have long plagued traditional computing devices connected to the Internet. For example, cyber criminals and hackers have a variety of attack methods readily available to them, including using software tools to intercept data as they are transmitted to and from a mobile device, inserting malicious software code into the operating systems of mobile devices by including it in seemingly harmless software applications, and using e-mail phishing techniques to gain access to mobile-device users’ sensitive information. The significance of these threats, which are growing in number and kind, is magnified by the vulnerabilities associated with mobile devices.
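The phishing technique mentioned above often relies on links whose visible text names one site while the destination is another, or whose destination host is a bare IP address. The small heuristic below illustrates the idea; the checks and example URLs are illustrative assumptions, not a complete or authoritative phishing detector.

```python
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links with traits commonly seen in phishing messages.

    Two simple heuristics: (1) the link text looks like a domain but
    does not match the real destination host, and (2) the destination
    host is a bare IP address rather than a named site.
    """
    host = urlparse(href).hostname or ""
    # Heuristic 1: display text names one site, href points elsewhere.
    if "." in display_text and display_text.replace("www.", "") not in host:
        mismatch = True
    else:
        mismatch = False
    # Heuristic 2: raw-IP hosts are rare for legitimate consumer sites.
    ip_like = host.replace(".", "").isdigit()
    return mismatch or ip_like
```

A link displayed as a bank's name but pointing at a numeric address, for example, trips both checks; real filters combine many more signals (reputation lists, lookalike-character detection) than this sketch shows.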
Common vulnerabilities in mobile devices include a failure to enable password protection, the inability to intercept malware, and operating systems that are not kept up to date with the latest security patches. Cyber-based attacks against mobile devices are evolving and increasing. Examples of recent incidents include:

In May 2012, a regulatory agency in the United Kingdom fined a company for distributing malware versions of popular gaming applications that triggered mobile devices to send costly text messages to a premium-rate telephone number.

In February 2012, a cybersecurity firm, Symantec Corporation, reported that a large number of Android devices in China were infected with malware that connected them to a botnet. The botnet’s operator was able to remotely control the devices and incur charges on user accounts for premium services such as sending text messages to premium numbers, contacting premium telephony services, and connecting to pay-per-view video services. The number of infected devices able to generate revenue on any given day ranged from 10,000 to 30,000, enough to potentially net the botnet’s operator millions of dollars annually if infection rates were sustained.

In January 2012, an antivirus company reported that hackers had subverted the search results for certain popular mobile applications so that they would redirect users to a web page where they were encouraged to download a fake antivirus program containing malware.

In October 2011, FTC reached a settlement of an unfair practice case with a company after alleging that its mobile application was likely to cause consumers to unwittingly disclose personal files, such as pictures and videos, stored on their smartphones and tablet computers. The company had configured the application’s default settings so that upon installation and set-up it would publicly share users’ photos, videos, documents, and other files stored on those devices.
According to Symantec Corporation’s Internet Security Threat Report (2011 Trends, Vol. 17, April 2012), an estimated half million to one million people had malware on their Android devices in the first half of 2011, and as of 2011, 3 out of 10 Android owners were likely to encounter a threat on their device each year.

According to a networking technology company, Juniper Networks, malware aimed at mobile devices is increasing. For example, the number of malware variants aimed at mobile devices has reportedly risen from about 14,000 to 40,000, a 185 percent increase in less than a year. Figure 2 shows the increase in malware variants between July 2011 and May 2012. The increasing prevalence of attacks against mobile devices makes it important to assess and understand the nature of the threats they face and the vulnerabilities these attacks exploit.

Mobile devices face a range of cybersecurity threats. These threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or defective equipment that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks from a variety of sources, including botnet operators, cyber criminals, hackers, foreign nations engaged in espionage, and terrorists. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary gain or political advantage, among others. For example, cyber criminals are using various attack methods to access sensitive information stored and transmitted by mobile devices. Table 1 summarizes those groups or individuals that are key sources of threats for mobile devices. These threat sources may use a variety of techniques, or exploits, to gain control of mobile devices or to access sensitive information on them. Common mobile attacks are presented in table 2.
Attacks against mobile devices generally occur through four different channels of activity:

Software downloads. Malicious applications may be disguised as a game, device patch, or utility; once downloaded by unsuspecting users, they provide the means for unauthorized users to gain use of the device and access to private information or system resources on it.

Visiting a malicious website. Malicious websites may automatically download malware to a mobile device when a user visits. In some cases, the user must take action (such as clicking on a hyperlink) to download the application, while in other cases the application may download automatically.

Direct attack through the communication network. Rather than targeting the mobile device itself, some attacks try to intercept communications to and from the device in order to gain unauthorized use of mobile devices and access to sensitive information.

Physical attacks. Unauthorized individuals may gain possession of lost or stolen devices, giving them unauthorized use of the devices and access to sensitive information stored on them.

Mobile devices are subject to numerous security vulnerabilities, including a failure to enable password protection, the inability to intercept malware, and operating systems that are not kept up to date with the latest security patches. While not a comprehensive list of all possible vulnerabilities, the following 10 vulnerabilities can be found on all mobile platforms.

Mobile devices often do not have passwords enabled. Mobile devices often lack passwords to authenticate users and control access to data stored on the devices. Many devices have the technical capability to support passwords, personal identification numbers (PIN), or pattern screen locks for authentication. Some mobile devices also include a biometric reader to scan a fingerprint for authentication.
However, anecdotal information indicates that consumers seldom employ these mechanisms. Additionally, if users do use a password or PIN, they often choose passwords or PINs that can be easily determined or bypassed, such as 1234 or 0000. Without passwords or PINs to lock the device, there is increased risk that stolen or lost phones’ information could be accessed by unauthorized users who could view sensitive information and misuse mobile devices.

Two-factor authentication is not always used when conducting sensitive transactions on mobile devices. According to studies, consumers generally use static passwords instead of two-factor authentication when conducting sensitive online transactions on mobile devices. Using static passwords for authentication has security drawbacks: passwords can be guessed, forgotten, written down and stolen, or eavesdropped. Two-factor authentication generally provides a higher level of security than traditional passwords and PINs, and this higher level may be important for sensitive transactions. Two-factor refers to an authentication system in which users are required to authenticate using at least two different “factors”—something you know, something you have, or something you are—before being granted access. Mobile devices themselves can be used as a second factor in some two-factor authentication schemes. The mobile device can generate pass codes, or the codes can be sent via a text message to the phone. Without two-factor authentication, increased risk exists that unauthorized users could gain access to sensitive information and misuse mobile devices.

Wireless transmissions are not always encrypted. Information such as e-mails sent by a mobile device is usually not encrypted while in transit. In addition, many applications do not encrypt the data they transmit and receive over the network, making it easy for the data to be intercepted.
For example, if an application is transmitting data over an unencrypted WiFi network using hypertext transfer protocol (http) (rather than secure http), the data can be easily intercepted. When a wireless transmission is not encrypted, data can be easily intercepted by eavesdroppers, who may gain unauthorized access to sensitive information.

Mobile devices may contain malware. Consumers may download applications that contain malware. Consumers download malware unknowingly because it can be disguised as a game, security patch, utility, or other useful application. It is difficult for users to tell the difference between a legitimate application and one containing malware. For example, figure 3 shows how an application could be repackaged with malware and a consumer could inadvertently download it onto a mobile device.

Mobile devices often do not use security software. Many mobile devices do not come preinstalled with security software to protect against malicious applications, spyware, and malware-based attacks. Further, users do not always install security software, in part because mobile devices often do not come preloaded with such software. While such software may slow operations and affect battery life on some mobile devices, without it, the risk may be increased that an attacker could successfully distribute malware such as viruses, Trojans, spyware, and spam, to lure users into revealing passwords or other confidential information.

Operating systems may be out-of-date. Security patches or fixes for mobile devices’ operating systems are not always installed on mobile devices in a timely manner. It can take weeks to months before security updates are provided to consumers’ devices. Depending on the nature of the vulnerability, the patching process may be complex and involve many parties.
For example, Google develops updates to fix security vulnerabilities in the Android OS, but it is up to device manufacturers to produce a device-specific update incorporating the vulnerability fix, which can take time if there are proprietary modifications to the device’s software. Once a manufacturer produces an update, it is up to each carrier to test it and transmit the updates to consumers’ devices. However, carriers can be delayed in providing the updates because they need time to test whether they interfere with other aspects of the device or the software installed on it. In addition, mobile devices that are older than 2 years may not receive security updates because manufacturers may no longer support these devices. Many manufacturers stop supporting smartphones as soon as 12 to 18 months after their release. Such devices may face increased risk if manufacturers do not develop patches for newly discovered vulnerabilities.

Software on mobile devices may be out-of-date. Security patches for third-party applications are not always developed and released in a timely manner. In addition, mobile third-party applications, including web browsers, do not always notify consumers when updates are available. Unlike traditional web browsers, mobile browsers rarely get updates. Using outdated software increases the risk that an attacker may exploit vulnerabilities associated with these devices.

Mobile devices often do not limit Internet connections. Many mobile devices do not have firewalls to limit connections. When the device is connected to a wide area network it uses communications ports to connect with other devices and the Internet. These ports are similar to doorways to the device. A hacker could access the mobile device through a port that is not secured. A firewall secures these ports and allows the user to choose what connections he or she wants to allow into the mobile device.
The firewall intercepts both incoming and outgoing connection attempts and blocks or permits them based on a list of rules. Without a firewall, the mobile device may be open to intrusion through an unsecured communications port, and an intruder may be able to obtain sensitive information on the device and misuse it.

Mobile devices may have unauthorized modifications. The process of modifying a mobile device to remove its limitations so consumers can add additional features (known as “jailbreaking” or “rooting”) changes how security for the device is managed and could increase security risks. Jailbreaking allows users to gain access to the operating system of a device so as to permit the installation of unauthorized software functions and applications and/or to not be tied to a particular wireless carrier. While some users may jailbreak or root their mobile devices specifically to install security enhancements such as firewalls, others may simply be looking for a less expensive or easier way to install desirable applications. In the latter case, users face increased security risks, because they are bypassing the application vetting process established by the manufacturer and thus have less protection against inadvertently installing malware. Further, jailbroken devices may not receive notifications of security updates from the manufacturer and may require extra effort from the user to maintain up-to-date software.

Communication channels may be poorly secured. Having communication channels, such as Bluetooth communications, “open” or in “discovery” mode (which allows the device to be seen by other Bluetooth-enabled devices so that connections can be made) could allow an attacker to install malware through that connection or surreptitiously activate a microphone or camera to eavesdrop on the user. In addition, using unsecured public wireless Internet networks or WiFi spots could allow an attacker to connect to the device and view sensitive information.
In addition, connecting to an unsecured WiFi network could allow an attacker to access personal information from a device, putting users at risk for data and identity theft. One type of attack that exploits the WiFi network is known as man-in-the-middle, where an attacker inserts himself in the middle of the communication stream and steals information. For example, figure 4 depicts a man-in-the-middle attack using an unsecured WiFi network. As a result, an attacker within range could connect to a user’s mobile device and access sensitive information.

The number and variety of threats aimed at mobile devices, combined with the vulnerabilities in the way the devices are configured and used by consumers, mean that consumers face significant risks that the proper functioning of their devices, as well as the sensitive information contained on them, could be compromised.

Mobile device manufacturers and wireless carriers can implement a number of technical features, such as enabling passwords and encryption, to limit or prevent attacks. In addition, consumers can adopt key practices, such as setting passwords, installing software to combat malware, and limiting the use of public wireless connections for sensitive transactions, which also can significantly mitigate the risk that their devices will be compromised. Table 3 outlines security controls that can be enabled on mobile devices to help protect against common security threats and vulnerabilities. The security controls and practices described are not a comprehensive list, but they are consistent with recent studies and guidance from NIST and DHS, as well as recommended practices identified by the FCC CSRIC advisory committee. In addition, security experts, device manufacturers, and wireless carriers agreed that the identified security controls and practices are comprehensive. Appendix III provides links to federal websites that provide information on mobile security.
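One widely used defense against the man-in-the-middle attack described above is transport-layer encryption with certificate verification. The sketch below uses Python's standard ssl module to show the two settings involved; it illustrates the general technique and is not a statement about any particular mobile device's software.

```python
import ssl

# A default SSL context enables the two checks that defeat a naive
# man-in-the-middle on an unsecured WiFi network: the server must
# present a certificate that chains to a trusted authority, and the
# certificate's name must match the host the client asked for.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED  # certificate is mandatory
assert context.check_hostname                    # name mismatch aborts the handshake

# By contrast, an "unverified" context (sometimes seen in examples or
# debugging code) accepts any certificate, so an attacker on the same
# WiFi network could silently substitute their own.
unverified = ssl._create_unverified_context()
assert unverified.verify_mode == ssl.CERT_NONE
```

Applications that transmit over plain http, or that disable these checks, give an eavesdropper on an open WiFi network exactly the opening the figure describes.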
Organizations may face different issues than individual consumers and thus may need to have more extensive security controls in place. For example, organizations may need additional security controls to protect proprietary and other confidential business data that could be stolen from mobile devices, and they need to ensure that mobile devices connected to the organization's network do not threaten the security of the network itself. Table 4 outlines controls that may be appropriate for organizations to implement to protect their networks, users, and mobile devices.

In addition to using mobile devices with security controls enabled, consumers can also adopt recommended security practices to mitigate threats and vulnerabilities. Table 5 outlines security practices consumers can adopt to protect the information on their devices. The practices are consistent with guidance from NIST and DHS, as well as recommended practices identified by FCC's CSRIC advisory committee. Organizations also benefit from establishing security practices for mobile device users. Table 6 outlines additional security practices organizations can take to safeguard mobile devices.

Federal agencies and mobile industry companies have taken steps to develop standards for mobile device security and have participated in initiatives to develop and implement certain types of security controls. However, these efforts have been limited in scope, and mobile device manufacturers and carriers do not consistently implement security safeguards on mobile devices. Although FCC has facilitated public-private coordination to address specific challenges, such as cellphone theft, and developed cybersecurity best practices, it has not yet taken similar steps to encourage device manufacturers and wireless carriers to implement a more complete industry baseline of mobile security safeguards. Furthermore, DHS, FTC, NIST, and the private sector have taken steps to raise public awareness about mobile security threats.
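As an illustration of the organizational controls discussed above, the kind of compliance gate an organization might apply before letting a mobile device onto its network can be sketched as follows. The policy fields and device attributes are hypothetical, not any vendor's mobile device management API:

```python
# Hypothetical organizational policy: each key names a device attribute
# and the value it must have before the device may join the network.
REQUIRED = {
    "screen_lock_enabled": True,   # password/PIN set
    "storage_encrypted": True,     # data-at-rest encryption on
    "jailbroken": False,           # no unauthorized modifications
    "os_patch_current": True,      # latest security updates applied
}

def compliance_failures(device: dict) -> list:
    """Return the policy checks a device fails (empty list = compliant)."""
    return [name for name, required_value in REQUIRED.items()
            if device.get(name) != required_value]

def may_connect(device: dict) -> bool:
    """A device joins the network only if it passes every check."""
    return not compliance_failures(device)
```

For example, an otherwise compliant but jailbroken device would be refused, and `compliance_failures` tells the administrator (or the user) exactly which control to remediate.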
However, security experts agree that many consumers still do not know how to protect themselves from mobile security vulnerabilities. DHS and NIST have not yet developed performance measures that would allow them to determine whether they are making progress in improving awareness of mobile security issues.

Federal agencies and mobile industry companies have worked to develop best practices and taken steps to address certain aspects of mobile security. FCC has worked with mobile companies on several initiatives. For example, FCC tasked its advisory committee, CSRIC, with developing cybersecurity best practices, including recommended practices for wireless and mobile security. In March 2011, CSRIC released its report recommending that wireless carriers and device manufacturers consider adopting practices such as:

- working closely and regularly with customers to provide recommendations concerning existing default settings and to identify future default settings that may introduce vulnerabilities;
- employing fraud detection systems to detect customer calling anomalies (e.g., system access from a single user from widely dispersed geographic areas);
- having processes in place to ensure that all third-party software has been properly patched with the latest security patches and that the system works correctly with those patches installed;
- establishing application support for cryptography that is based on open and widely reviewed and implemented encryption algorithms and protocols; and
- enforcing strong passwords for mobile device access and network access.

In addition, in March 2012 FCC tasked CSRIC with examining three major cybersecurity threats to networks that allow cyber criminals to access Internet traffic for theft of personal information and intellectual property.
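One of the CSRIC-recommended practices above, employing fraud detection systems to flag system access from widely dispersed geographic areas, reduces to a simple implied-speed check. The sketch below is a hypothetical illustration; the event format, the 500 mph threshold, and the function names are assumptions, not CSRIC's or any carrier's actual system:

```python
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    r = 3959  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def is_anomalous(e1, e2, max_mph=500):
    """Flag two access events (lat, lon, hour) for the same account
    whose implied travel speed exceeds what any traveler could manage."""
    (lat1, lon1, t1), (lat2, lon2, t2) = e1, e2
    miles = miles_between(lat1, lon1, lat2, lon2)
    hours = abs(t2 - t1)
    if hours == 0:
        return miles > 0  # simultaneous access from two different places
    return miles / hours > max_mph
```

For example, an account accessed in New York and then in Los Angeles one hour later implies a travel speed of roughly 2,400 mph, which no legitimate single user could achieve, so the pair of events is flagged.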
In response, CSRIC recommended that wireless carriers (1) use key practices when mitigating botnet threats, (2) use best practices for deploying and managing Domain Name System Security Extensions, and (3) develop an industry framework to prevent Internet route hijacking via security weaknesses in the Border Gateway Protocol. CSRIC is working with the wireless carriers to implement these recommendations and is tasked with developing ways to measure the effectiveness of the recommendations. FCC also tasked the Technological Advisory Council's Wireless Security and Privacy working group to examine mobile security issues, such as vulnerabilities of WiFi networks, security of older generation cellular networks, malicious applications, and text messaging security. The working group is scheduled to issue its recommendations in December 2012.

Moreover, in April 2012, FCC announced that it had reached agreement with CTIA-The Wireless Association and multiple wireless carriers to establish processes to deter theft of mobile devices. Under the antitheft agreement, participating wireless carriers are to take several specific actions and submit quarterly progress reports to FCC. For example, the antitheft agreement calls for wireless carriers to initiate, implement, and deploy database solutions by October 31, 2012, to prevent reportedly lost or stolen smartphones from being used on another wireless network. FCC plans to monitor progress in developing these databases, and CTIA agreed to report progress quarterly, beginning June 30, 2012. The agreement also will result in the launch of a public education campaign by July 1, 2012, to inform consumers about the ability to lock or locate and erase data from a smartphone. In addition, wireless carriers and device manufacturers reported that they participate in private-sector standards-setting organizations, which have addressed aspects of mobile security.
For example, the Open Mobile Alliance, an industry standards group, has developed a specification to provide a common means for mobile developers to implement standards for secure and reliable data transport between two communicating parties. Furthermore, a consortium of wireless carriers and mobile device manufacturers known as the Messaging, Malware and Mobile Anti-Abuse Working Group has an initiative underway to address text-message-based spam. Under this initiative, wireless carriers encourage customers to forward spam text messages back to the carriers, who can use the messages to identify the source of spam and take corrective action to block its content from their networks. According to FCC officials, the current chairman of the Messaging, Malware and Mobile Anti-Abuse Working Group is a member of CSRIC and the chair of the working group that developed recommended solutions for the botnet threats.

While private and public sector entities have initiated activities to identify mobile security safeguards, these safeguards are not always available on mobile devices or activated by users. According to a 2012 study by NQ Mobile and the National Cyber Security Alliance (NCSA), approximately 30 percent of respondents said they did not have mobile security features on their smartphones. In addition, approximately 66 percent of respondents did not report activating password protection on their devices to prevent unauthorized access, and at least 67 percent did not report activating a remote-wipe or remote-locate security feature. Security company representatives told us that these results were generally consistent with their experiences and observations. In addition, mobile device manufacturers and wireless carriers do not consistently implement or activate security safeguards on their mobile devices.
According to most of the device manufacturers and several wireless carriers we spoke with, safeguards such as passwords, encryption, and remote wipe/lock/locate can be made available on their mobile devices, although one wireless carrier noted that encryption might be inappropriate for certain types of devices. Several of these companies also acknowledged that it is possible to preconfigure mobile devices to prompt the user to implement safeguards when the phone is first set up. However, with the exception of password protection for online voicemail accounts, none of the device manufacturers or wireless carriers stated that they generally configure their devices to prompt the user to implement these controls. We also observed that general cybersecurity instructions were not directly accessible on either carriers' or device manufacturers' websites, although instructions for implementing controls could be found by searching a company's website for information about individual models of smartphones.

FCC has the ability to encourage broad implementation of mobile security safeguards among mobile industry companies. While it has taken steps to encourage implementation of safeguards in certain areas, it has not yet taken similar steps to encourage industry implementation of a broad baseline of mobile security safeguards. For example, in its recent antitheft agreement with CTIA and participating wireless carriers, FCC took an active role in encouraging major wireless carriers to adopt specific procedures to discourage the theft of mobile devices. This effort demonstrates that FCC can facilitate private sector efforts to establish an industry baseline and milestones for addressing mobile security challenges. Moreover, representatives from multiple companies agreed that FCC could play a role in coordinating private sector efforts to improve mobile security.
FCC has also facilitated private sector efforts to establish cybersecurity best practices in areas not specific to mobile security. As mentioned previously, FCC tasked CSRIC to review best practices for botnet threats, Domain Name System attacks, and Internet route hijacking. CSRIC developed voluntary recommendations in these areas and has been working with wireless carriers to implement them. According to FCC officials, wireless carriers representing 90 percent of the domestic customer base have committed to adopting and using these practices. Although these recommendations are not specific to mobile devices, FCC officials stated that the process of seeking voluntary compliance from carriers had been successful and demonstrated the willingness of carriers to adopt best practices. FCC officials stated that they hope to have the same cooperation from wireless carriers when the Technological Advisory Council’s Wireless Security and Privacy working group releases its recommendations on mobile security issues, scheduled for December 2012. While it is not clear that the working group will develop a baseline of recommended practices for implementation by mobile industry companies, the council’s recommendations nevertheless could be part of such a baseline. Another candidate for a set of baseline mobile security standards that mobile industry companies could be encouraged to implement is the collection of cybersecurity best practices developed by CSRIC in 2011. Those practices have not yet been adopted as a baseline within the mobile industry. FCC officials from the Public Safety and Homeland Security Bureau stated that they had not yet taken action to promote this specific set of recommended practices, although they had held informal meetings with industry to discuss the implementation of cybersecurity practices. 
Whether mobile industry companies adopt the CSRIC-recommended practices or choose other baseline practices and controls, it will be important for FCC to encourage industry to adopt recommended practices. If such practices are not implemented, vulnerabilities in mobile devices are likely to continue to pose risks for consumers.

Many of the key practices that have been identified as effective in mitigating mobile security risks depend on the active participation of users. Thus it is important that an appropriate level of awareness is achieved among consumers who use mobile devices on a regular basis. To address this need, federal agencies have developed and distributed a variety of educational materials. For example:

DHS's US-Computer Emergency Readiness Team (US-CERT) has developed cybersecurity tip sheets and technical papers related to mobile security. These materials, which are published on the US-CERT website, provide lists of suggestions, such as the use of passwords and encryption, to help consumers protect their devices and sensitive data from network attacks and theft.

DHS coordinates domestic and international engagements and cybersecurity outreach endeavors. For example, as the lead agency for the awareness component of the NICE initiative, DHS coordinates the National Cyber Security Awareness Month and a national cybersecurity public awareness campaign called "Stop. Think. Connect." As part of these efforts, DHS has developed educational materials that, although not specifically related to mobile security, encourage users to adopt safe practices when using the Internet. The DHS website related to this effort also provides links to educational materials hosted on third-party websites, such as StaySafeOnline.org.

FTC manages the OnGuardOnline website, which provides individuals with information about how to use the Internet in a safe, secure, and responsible manner.
As part of this effort, FTC has developed educational materials specifically related to mobile security, such as avoiding malicious mobile applications and protecting children who use mobile devices. In addition, FTC and DHS have developed and distributed printed cybersecurity guides to schools, businesses, and other entities, according to an FTC staff member.

NIST published guidelines on the security of cellphones and personal digital assistants in 2008 (NIST Special Publication 800-124, Guidelines on Cell Phone and PDA Security (October 2008)); according to NIST officials, they are revising this publication and will release a draft update in fiscal year 2012. Among other things, this guidance provides users with information about how to secure their devices. For example, the guidance discusses the value of implementing authentication (e.g., password protection) and remotely erasing or locking devices that are lost or stolen.

DHS and nonprofit organizations also have developed and distributed cybersecurity educational materials in collaboration with NCSA. In addition to funding from the private sector, DHS officials stated that DHS has contributed a grant to NCSA to conduct surveys and other activities. NCSA has produced educational materials that specifically relate to mobile security. For example, NCSA's website provides tips that individuals can follow to protect their mobile devices, such as avoiding malware, using trusted Internet connections, and securing personal information through the use of strong passwords. NCSA's materials also address topics such as (1) mobile phone theft, (2) spam and mobile phones, and (3) computer viruses and mobile phones. Similarly, CTIA maintains a blog with information on topics such as establishing passwords and using applications that can track, locate, lock, and/or wipe wireless devices that are lost or stolen.
In addition, as part of the antitheft initiative discussed above, CTIA agreed that its members would implement a system to inform users about security safeguards on mobile devices as well as launch an education campaign regarding the safe use of smartphones.

Despite the efforts underway by the federal government and the private sector to develop and distribute educational materials, it is unclear whether consumer awareness has improved as a result. Representatives from companies that specialize in information security told us that many consumers do not understand the importance of implementing mobile security safeguards or do not know how to implement them. Their views are consistent with the results of the 2012 NCSA study, which suggested that many mobile users do not know how to implement mobile security safeguards. The survey reported that more than half of respondents felt that they required additional information in order to select and/or implement security solutions for their mobile devices. Further, approximately three-quarters of respondents reported that they did not receive information about the need for security solutions at the time they purchased their phones. The survey did not include data that would indicate whether consumer awareness had improved or worsened over time.

While DHS and NIST have conducted or supported several consumer cybersecurity awareness efforts, neither has developed outcome-oriented performance measures to assess the effectiveness of government efforts to enhance consumer awareness of mobile security. An outcome-oriented performance measure is an assessment of the result, effect, or consequence that will occur from carrying out a program or activity compared to its intended purpose. NIST officials stated that they do not currently measure progress related to awareness activities associated with NICE.
Furthermore, although DHS officials stated that the department assesses the effectiveness of several of the awareness activities, these assessments are not based on outcome-oriented measures. For example, DHS officials stated that they assess the “Stop. Think. Connect.” events by (1) the number of individuals who join the campaign and agree to receive additional information, such as newsletters, concerning cybersecurity; (2) the total number of events held; (3) the number of agencies and states that join the campaign; and (4) the number of times the campaign website is visited. However, these measures are not outcome-oriented because they do not indicate how, if at all, these activities have (1) improved citizens’ knowledge about managing online risk, (2) improved knowledge of cybersecurity within organizations, or (3) enabled access to cybersecurity resources. To develop measures of the impact of government efforts on consumer awareness of mobile security issues, a baseline measure of consumer awareness would be needed from which to mark progress. However, neither DHS nor NIST has developed a baseline measure of the state of national cybersecurity awareness. Establishing a baseline measure and regularly assessing consumer awareness and behavior regarding a particular issue can enable organizations to document where problems exist, identify causes, prioritize efforts, and monitor progress. DHS officials stated that the department has considered conducting a study on consumer behavior and awareness related to general cybersecurity but has not yet done so. Without a baseline measure of consumer awareness, it will remain difficult for NIST and DHS to measure any correlation between the government’s activities and enhanced consumer awareness. 
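A minimal sketch of the arithmetic behind an outcome-oriented measure, as distinct from the activity counts described above: a baseline survey figure is compared against a follow-up figure to yield a change that can be tracked over time. The percentages and names below are invented for illustration:

```python
def outcome_change(baseline_pct: float, followup_pct: float) -> float:
    """Percentage-point change against the baseline, e.g., in the share
    of consumers who report enabling password protection."""
    return followup_pct - baseline_pct

# Hypothetical figures: activity counts (events held, website visits)
# cannot produce this number; only a baseline plus follow-up surveys can.
baseline = 34.0   # % adopting a practice before the campaign (invented)
followup = 41.5   # % in a later follow-up survey (invented)
change = outcome_change(baseline, followup)   # +7.5 percentage points
```

The point of the sketch is that without the baseline term, the subtraction is impossible, which is why establishing a baseline measure is the prerequisite the report identifies.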
Further, without outcome-oriented performance measures, the government will be limited in its ability to determine whether it is achieving its identified goals and objectives, including whether cybersecurity awareness efforts are effective at increasing adoption of recommended security practices. Mobile devices face an array of threats that take advantage of numerous vulnerabilities commonly found in such devices. These vulnerabilities can be the result of inadequate technical controls, but they can also result from the poor security practices of consumers. Private sector entities and relevant federal agencies have taken steps to improve the security of mobile devices, including making certain controls available for consumers to use if they wish and promulgating information about recommended mobile security practices. However, security controls are not always consistently implemented on mobile devices, and it is unclear whether consumers are aware of the importance of enabling security controls on their devices and adopting recommended practices. Although FCC has taken steps to work with industry to develop cybersecurity best practices, it has not yet taken steps to encourage wireless carriers and device manufacturers to implement a more complete industry baseline of mobile security safeguards, and NIST and DHS have not determined whether consumer awareness of mobile security issues has improved since the government’s efforts have been initiated. 
To help mitigate vulnerabilities in mobile devices, we recommend that the Chairman of the Federal Communications Commission continue to work with wireless carriers and device manufacturers on implementing cybersecurity best practices by encouraging them to implement a complete industry baseline of mobile security safeguards based on commonly accepted security features and practices; and monitor progress of wireless carriers and device manufacturers in achieving their milestones and time frames once an industry baseline of mobile security safeguards has been implemented. To determine whether the NICE initiative is having a beneficial effect in enhancing consumer awareness of mobile security issues, we recommend that the Secretary of Homeland Security in collaboration with the Secretary of Commerce establish a baseline measure of consumer awareness and behavior related to mobile security and develop performance measures that use the awareness baseline to assess the effectiveness of the awareness component of the NICE initiative. We received written comments on a draft of this report from the Chief of FCC’s Public Safety and Homeland Security Bureau, the Director of DHS’s Departmental GAO-OIG Liaison Office, and the Acting Secretary of Commerce. These officials generally concurred with our recommendations and provided technical comments, which we have considered and incorporated as appropriate into the final report. FTC did not provide written comments on the draft report, but an attorney in FTC’s Office of the General Counsel did provide technical comments in an e-mail that we addressed as appropriate. DOD did not provide comments on the draft report. The comments we received are summarized below. 
In addition to FCC’s written comments, the Chief of FCC’s Public Safety and Homeland Security Bureau stated in e-mail comments that the commission generally concurred with our recommendations that it encourage wireless carriers and device manufacturers to implement a complete industry baseline of mobile security safeguards; and to monitor progress of wireless carriers and device manufacturers in achieving their milestones and time frames once a baseline has been implemented. In the written comments, the Chief added that FCC has facilitated private sector efforts, for example, through advisory committees such as CSRIC to establish and promote the implementation of cybersecurity best practices that secure the underlying Internet infrastructure. FCC officials also provided preliminary oral and written technical comments, which we addressed as appropriate (FCC’s written comments are reprinted in app. II). The Director of DHS’s Departmental GAO-OIG Liaison Office provided written comments in which the department concurred with our recommendations that it work with Commerce to establish a baseline measure of consumer awareness and behavior related to mobile security and develop performance measures that use the baseline to assess the effectiveness of the awareness component of the NICE initiative. He stated that the department will coordinate with its counterparts at Commerce to assess the feasibility of different methods to create a baseline measure of consumer awareness and continue to promote initiatives to educate the public about cybersecurity. He also stated that the department will coordinate with its NIST counterparts on the development of performance measures using the awareness campaign and other methods. He also provided technical comments, which we have incorporated as appropriate (DHS’s comments are reprinted in app. III). 
The Acting Secretary of Commerce provided written comments in which the department concurred in principle with our recommendations that NIST work with DHS to establish a baseline measure of consumer awareness and behavior related to mobile security and that it develop performance measures that use the baseline to assess the effectiveness of the awareness component of the NICE initiative. The Acting Secretary provided technical comments and asked that we consider replacing “baseline understanding” with “baseline measure,” which we have incorporated into the final report. She also provided suggested revised text. However, we believe that the information in the draft is correct and communicates appropriately as written. Therefore, we have not added the suggested text (Commerce’s comments are reprinted in app. IV). We are sending copies of this report to the appropriate congressional committees; the Chairmen of the Federal Communications Commission and Federal Trade Commission; the Secretaries of Commerce, Defense, and Homeland Security; and other interested congressional parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact: Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499, or by e-mail at wilshuseng@gao.gov or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. The objectives of our review were to determine: (1) what common security threats and vulnerabilities currently exist in mobile devices, (2) what security features are currently available and what practices have been identified to mitigate the risks associated with these vulnerabilities, and (3) the extent to which government and private entities are addressing security vulnerabilities of mobile devices. 
To determine the common security threats and vulnerabilities that currently exist in mobile devices (cellphones, smartphones, and tablets), as well as security features and practices to mitigate them, we identified agencies and private companies with responsibilities in the telecommunication and cybersecurity arena, and reviewed and analyzed information security-related websites, white papers, and mobile security studies. We interviewed officials, and obtained and analyzed documentation, from the Federal Communications Commission (FCC), Department of Homeland Security (DHS), Department of Defense (DOD), Department of Commerce (Commerce), and Federal Trade Commission (FTC) to determine the extent to which they have identified mobile security vulnerabilities and developed standards and guidance on the security of mobile devices. We interviewed and obtained documents from an industry group and an advisory council, both of which have representation from the telecommunication industry; these included CTIA-The Wireless Association and the Communications Security, Reliability, and Interoperability Council (CSRIC). We also analyzed information from the US-Computer Emergency Readiness Team (US-CERT) and the National Vulnerability Database on mobile security vulnerabilities. Further, we obtained input from the private companies that make up the largest market share for mobile devices in the United States to determine what steps they are taking to provide security for their mobile devices. These included mobile device manufacturers—HTC Corporation, Research In Motion Corp., Motorola Mobility Inc., Samsung, and LG Electronics—as well as wireless carriers—Verizon Wireless, AT&T Inc., T-Mobile USA Inc., and Sprint. We also met with representatives of information security companies, including Symantec Corporation and Juniper Networks. We approached Apple Inc.
and Google Inc.; however, Apple officials did not agree to meet with us and Google officials did not provide responses to our questions. We developed draft lists of common vulnerabilities and security practices based on our analysis of government security guidance as well as private sector studies and reports. We provided copies of these lists to each of the companies listed above and addressed their comments as appropriate.

To determine the extent to which government and private entities are addressing security vulnerabilities of mobile devices, we analyzed statutes and regulations to determine federal roles related to mobile security. In order to identify initiatives related to improving mobile security or raising consumer awareness, we interviewed the federal and private sector officials mentioned above, and members of a private sector working group devoted to mobile security issues, known as the Messaging, Malware, and Mobile Anti-Abuse Working Group. In addition, we analyzed multiple studies regarding consumer attitudes and practices related to mobile devices. Specifically, we assessed available methodological information against general criteria for survey quality and relevant principles derived from the Office of Management and Budget (OMB) Standards and Guidelines for Statistical Surveys. Because the available methodological documentation did not allow us to fully assess the quality of the survey data, the reported results may not be accurate or precise. Although we corroborated the studies' general findings with information security experts, readers should be cautious in drawing conclusions based on these results. We conducted this performance audit from November 2011 to September 2012 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The table below provides information and website links to federal sites that include information related to mobile security. Website links are current as of July 10, 2012. In addition to the individuals named above, key contributions to this report were made by John de Ferrari (Assistant Director), West E. Coile, Neil J. Doherty, Rebecca E. Eyler, Richard J. Hagerman, Tammi N. Kalugdan, David F. Plocher, Carl M. Ramirez, Meredith R. Raymond, and Brandon C. Sanders.
Millions of Americans currently use mobile devices—e.g., cellphones, smartphones, and tablet computers—on a daily basis to communicate, obtain Internet-based information, and share their own information, photographs, and videos. Given the extent of consumer reliance on mobile interactions, it is increasingly important that these devices be secured from expanding threats to the confidentiality, integrity, and availability of the information they maintain and share. Accordingly, GAO was asked to determine (1) what common security threats and vulnerabilities affect mobile devices, (2) what security features and practices have been identified to mitigate the risks associated with these vulnerabilities, and (3) the extent to which government and private entities have been addressing the security vulnerabilities of mobile devices. To do so, GAO analyzed publicly available mobile security reports and surveys related to consumer cybersecurity practices, as well as statutes, regulations, and agency policies; GAO also interviewed representatives from federal agencies and private companies with responsibilities in telecommunications and cybersecurity.

Threats to the security of mobile devices and the information they store and process have been increasing significantly. For example, the number of variants of malicious software, known as "malware," aimed at mobile devices has reportedly risen from about 14,000 to 40,000, or about 185 percent, in less than a year (see figure). Cyber criminals may use a variety of attack methods, including intercepting data as they are transmitted to and from mobile devices and inserting malicious code into software applications to gain access to users' sensitive information. These threats and attacks are facilitated by vulnerabilities in the design and configuration of mobile devices, as well as the ways consumers use them.
Common vulnerabilities include a failure to enable password protection and operating systems that are not kept up to date with the latest security patches. Mobile device manufacturers and wireless carriers can implement technical features, such as enabling passwords and encryption, to limit or prevent attacks. In addition, consumers can adopt key practices, such as setting passwords and limiting the use of public wireless connections for sensitive transactions, which can significantly mitigate the risk that their devices will be compromised. Federal agencies and private companies have promoted secure technologies and practices through standards and public-private partnerships. Despite these efforts, safeguards have not been consistently implemented. Although the Federal Communications Commission (FCC) has facilitated public-private coordination to address specific challenges such as cellphone theft, it has not yet taken similar steps to encourage device manufacturers and wireless carriers to implement a more complete industry baseline of mobile security safeguards. In addition, many consumers still do not know how to protect themselves from mobile security vulnerabilities, raising questions about the effectiveness of public awareness efforts. The Department of Homeland Security (DHS) and National Institute of Standards and Technology (NIST) have not yet developed performance measures or a baseline understanding of the current state of national cybersecurity awareness that would help them determine whether public awareness efforts are achieving stated goals and objectives. GAO recommends that FCC encourage the private sector to implement a broad, industry-defined baseline of mobile security safeguards. GAO also recommends that DHS and NIST take steps to better measure progress in raising national cybersecurity awareness. The FCC, DHS, and NIST generally concurred with GAO’s recommendations.
Express Mail, the Service’s premium service, was first offered in 1970 and is designed to provide overnight delivery for documents and packages weighing up to 70 pounds, which are to be tracked from the points of acceptance to points of delivery. It is the Service’s only guaranteed delivery service, and customers may request and receive a postage refund if an Express Mail package is not delivered on time. As of July 1996, the minimum postage for mailing an Express Mail package was $10.75. Overall, Express Mail represents a relatively small portion of the Service’s total mail volume and revenue. For fiscal year 1995, the Service reported Express Mail volume of 56 million pieces, which generated revenue of about $711 million, or about 1 percent of the Service’s total mail volume and postage revenue that year. The Postal Service began offering EMCAs in 1984 to make Express Mail more attractive to customers by giving them a more convenient way to pay postage. Around that time, the Postal Service took other steps as well to retain Express Mail customers. For example, the Postal Service’s 1986 annual report to Congress shows that after Express Mail volume dropped by 8.7 percent between fiscal years 1985 and 1986, it “. . . moved aggressively to stop the decline and to make Express Mail service more competitive.” According to the 1986 report, the Postal Service implemented an Express Mail morning-delivery program in 30 cities, placed 10,000 Express Mail collection boxes on the streets, and introduced a new Express Mail letter envelope in 1986. During fiscal year 1995, customers used EMCAs to pay about $139 million in postage on about 8 million Express Mail packages, or 13 percent and 16 percent of the Service’s total Express Mail volume and revenue, respectively. About 90 percent of all EMCA transactions were for domestic Express Mail, and the balance for international Express Mail. 
In addition to EMCAs, Express Mail customers can pay postage with cash, checks, and postage meters. Recently, the Postal Service has begun making debit and credit cards increasingly available for use by Express Mail customers and other postal customers. The Service’s Vice President for Marketing Systems, under the Senior Vice President for Marketing, has overall responsibility for Express Mail procedures and management oversight. Employees at post offices and mail-processing plants where Express Mail is accepted from customers and prepared for delivery are responsible for implementing the Service’s EMCA policy and procedures. In recent years, the House Subcommittee on the Postal Service, the U.S. Postal Service, and we have received allegations of fraudulent schemes to evade payment of postage. In addition, we have reported serious weaknesses in some of the Service’s revenue systems. In 1993, we reported weak controls over postage meters after allegations of postage meter fraud and a statement by the Postmaster General that revenue losses could total $100 million annually. More recently, we reported a lack of adequate procedures for accepting bulk mail, for which the Service recorded revenue of about $23 billion in 1994. In response to allegations and our reports, the Service took numerous actions to improve its systems of controls over postage meters and bulk mail acceptance. Since that time, we received the allegation that mailers were abusing EMCAs. Our objectives were to determine (1) whether there is any basis for an allegation regarding EMCA abuse and (2), if so, what steps the Service is taking and could take to help avoid or minimize EMCA revenue losses. 
To review alleged EMCA abuse, we interviewed various Service officials at headquarters offices in Washington, D.C., and reviewed Servicewide EMCA policies, procedures, and internal controls for opening EMCAs, verifying EMCA numbers presented by customers, closing EMCAs with negative balances, and recording all required Express Mail data when packages are accepted. To ascertain whether procedures and controls were adequate to protect EMCA revenue and were being followed, we reviewed pertinent Postal Service policies, procedures, and forms for EMCA operations and discussed Express Mail and EMCA practices with Service officials in three customer service districts (Dallas, TX; New York, NY; and Van Nuys, CA). We selected the New York and Van Nuys districts because they were among those having the largest number of EMCA transactions. We selected the Dallas district to provide broader geographic coverage of the Service’s EMCA activities. To help determine if use of EMCAs had resulted in revenue losses, we reviewed, but did not verify, various management reports relating to EMCA activities generated from the Service’s Electronic Marketing and Reporting System (EMRS). These reports provided data on (1) invalid EMCAs accepted by the Service, (2) EMCAs with negative fund balances, and (3) Express Mail packages delivered by the Service with no acceptance data recorded. For the three selected districts, we gathered data on the dollar amounts of the EMCA negative balances that existed for at least five consecutive accounting periods. We scanned some Express Mail labels in all three districts to determine if the Postal Service accepted Express Mail packages from EMCA customers and did not record any acceptance data. We reviewed data provided by the Service’s collection agency on the amount of EMCA-related postage lost due to invalid EMCAs. 
We reviewed relevant portions of all 19 Postal Inspection Service reports that addressed EMCA activities in various districts, including two of the three selected districts. To help determine what recent actions, if any, the Service had taken or planned to take relating to EMCAs, we interviewed various headquarters officials responsible for EMCA procedures and controls and for providing employees with equipment that could help to strengthen EMCA-related controls. We also discussed EMCA procedures with officials at the Service’s area offices in Dallas, TX and Memphis, TN. At the Memphis office, we inquired about a recently developed EMCA self-audit guide, which was to be used by all districts. To determine what actions the Service might take to reduce EMCA losses, we interviewed various headquarters officials and reviewed various Service reports showing the purpose to be achieved with EMCAs, Express Mail volumes, and related data after the Service introduced EMCAs. We also interviewed account representatives for two of the Service’s principal competitors for overnight delivery—Federal Express and United Parcel Service. We determined if these competitors offered corporate accounts to customers and, if so, what they required for opening an account. The Postal Service provided written comments on a draft of this report. The Service’s comments are summarized and evaluated beginning on page 17 and included in appendix II. We did our work from November 1995 through April 1996 in accordance with generally accepted government auditing standards. EMCA procedures have not adequately protected the Service against postage revenue losses, and EMCA customers have sometimes obtained Express Mail services without valid EMCAs. Postal Service reports showed that the EMCAs were invalid because the EMCA numbers used by customers did not match any of the Service’s valid numbers. 
Also, although EMCAs are always to contain sufficient funds to cover Express Mail postage, EMCA customers sometimes overdrew their accounts and accumulated large negative account balances. The Service lost increasing sums of Express Mail revenue in the past 3 years because of weak internal controls over EMCAs. Nationwide, the Service referred about $966,000 in delinquent EMCAs to its collection agency in fiscal year 1995. Of that amount, the Service recovered about $165,000 (17 percent), and the balance of $801,000 was written off as uncollectible, almost twice the amount written off in 1993 (a 90 percent increase), as figure 1 shows. Postal Service reports show that its employees accepted and delivered some Express Mail packages with invalid EMCA numbers. After delivering the packages, the Service determined that EMCA numbers provided by customers did not match any of the valid EMCA numbers in the Service’s automated system. The Service lost revenue and incurred administrative costs to follow up on these customers because it had not determined that their EMCA numbers were invalid before accepting and delivering Express Mail packages. To help employees detect invalid EMCA numbers before accepting Express Mail, the Service includes, as part of a “Fraud Alert” in a biweekly Postal Bulletin distributed within the Service, a list of EMCA numbers that it has determined to be invalid after some prior EMCA action (e.g., it had previously closed the account). Employees are instructed to not accept Express Mail packages bearing any of the invalid numbers. When the packages are accepted at a post office or a mail-processing plant, employees are to check EMCA numbers manually against the biweekly list of invalid numbers. Various Service officials told us that employees accepting Express Mail with EMCA payment do not always use the bulletins to check for invalid EMCAs. 
Employees at mail-processing plants are expected to move huge volumes of mail in a few hours, and Postal Service officials said that, due to time pressures, most of the EMCA problems occur as a result of improper acceptance of Express Mail at processing plants. A manual process of checking for invalid EMCAs can take a considerable amount of time because of the large quantity of invalid numbers to be scanned for each EMCA package (e.g., the Postal Bulletin dated June 20, 1996, contained about 2,900 invalid 6-digit EMCA numbers listed in numeric order). Employees accepting Express Mail packages at post offices and mail-processing plants have access to and are to use only the list of invalid EMCA numbers to verify that customers are presenting valid EMCA numbers. Therefore, if a customer made up a number, it likely would not be on the Service’s list of invalid EMCAs. Postal employees at post offices and processing plants do not have automated access to valid EMCA numbers—which totaled about 113,000 in February 1996. The Postal Service incurred administrative costs to collect postage from some EMCA customers using invalid EMCA numbers after the Postal Service delivered Express Mail packages. Each of the three selected districts we visited had 4 to 13 employees responsible for domestic and international Express Mail and Priority Mail activities. Service officials said that all districts have employees with similar responsibilities. District officials told us that these employees receive reports each workday showing EMCA errors that must be investigated so postage can be collected. These administrative actions can be time consuming and costly because they entail obtaining copies of mailing labels, verifying data, and recording new data when a valid EMCA can be charged. 
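The control gap described above can be illustrated with a short sketch. The account numbers, list contents, and function names below are purely hypothetical, not actual Service data; the point is that a check against a list of known-invalid numbers accepts any fabricated number, while a lookup against the set of valid numbers does not.

```python
# Hypothetical sketch of the two verification approaches described above.
# All numbers and names are illustrative, not actual Postal Service data.

invalid_emcas = {"123456", "234567", "345678"}   # biweekly Postal Bulletin list
valid_emcas = {"111111", "222222", "999999"}     # full database of issued numbers

def bulletin_check(emca: str) -> bool:
    """Current practice: accept unless the number appears on the invalid list."""
    return emca not in invalid_emcas

def database_check(emca: str) -> bool:
    """Stronger control: accept only if the number exists among valid numbers."""
    return emca in valid_emcas

made_up = "777777"  # a fabricated number never issued
print(bulletin_check(made_up))   # True  -- fabricated number slips through
print(database_check(made_up))   # False -- valid-number lookup catches it
```

A set-membership lookup of this kind is fast even against the roughly 113,000 valid numbers the Service maintained, which is why automated access to valid numbers, rather than a longer invalid list, is the more effective control.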
When the EMCA number appears to be invalid, i.e., does not match the Service’s records of valid EMCA numbers, the employees must further investigate each case through telephone calls or letters asking for reimbursement and requesting mailers to stop using invalid accounts. Some customers continued to use EMCAs although they had insufficient funds in their accounts to cover charges for Express Mail services that they received—a problem that the Inspection Service reported over several years. Under current Service procedures, customers must maintain a minimum EMCA balance of either the customer’s estimated Express Mail postage for 1 week or $50, whichever is higher. However, employees do not have the necessary EMCA data access to verify that this requirement is met before accepting Express Mail packages. Some EMCA customers overdrew their EMCA accounts, and the Postal Service continued to accept Express Mail packages from these customers. When EMCA customers overdraw their accounts, Postal Service procedures require that employees contact individuals and businesses to collect the postage due. A letter is to be sent to the EMCA customer when the account is deficient for one postal accounting period (28 days). If the account remains deficient after 3 postal accounting periods (84 days), the Service is to close the account and refer it to a collection agency used by the Service. However, the Service has little information from EMCA applications to use in locating customers and collecting postage. Under current Service procedures, an individual or corporation is to be approved for an EMCA after completing a one-half page application, which shows the applicant’s name, address, and telephone number, and depositing the minimum money required in the account. The Service does not require the applicant to present any identification, such as a driver’s license or major credit card, to receive an EMCA. 
Employees approving EMCA applications are not required to verify any information presented on the applications. Thus, an EMCA applicant could provide false or erroneous information on the application and, in these instances, efforts by the Postal Service and its collection agency to locate the customers and collect postage on the basis of information in the EMCA application likely would be unsuccessful. A Service report on EMCA operations for February 1996 showed that about 97,000 of the approximately 113,000 EMCAs (or 86 percent) had money on deposit with the Service totaling $18.5 million. However, for the remaining 14 percent, or about 16,000 EMCAs, there was no money on deposit; rather, the accounts were overdrawn by $4.3 million. According to the Service’s management reports on Express Mail operations, many EMCAs had large negative balances for periods exceeding three accounting periods and were not closed or sent to the collection agency. For example, in the New York district, 16 of the 27 EMCAs we reviewed had negative balances for about 5 consecutive accounting periods (about 140 days). Of these 16 EMCAs, 10 had negative balances of more than $2,000 each at the time of our review, and the negative balance for one account was about $10,000. Similarly, in the Van Nuys district, 10 of the 14 EMCAs we reviewed had negative balances for 5 consecutive accounting periods, and the negative balances for 8 accounts were about $3,000 each. In the Dallas district, 3 of the 12 EMCAs we reviewed had negative balances for 5 consecutive accounting periods, including 1 EMCA with an $8,800 negative balance. The Service’s practice of allowing postage to remain unpaid for Express Mail services over long periods of time is inconsistent with Service policy, which requires that Express Mail be prepaid or paid at the time of mailing. 
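The overdraft timeline that Service procedures prescribe (a deficiency letter after one 28-day accounting period, account closure and collection referral after three) can be sketched as a small rule. The function name and account figures below are illustrative, not part of any actual Service system.

```python
# Illustrative sketch of the EMCA overdraft timeline described above.
# The helper name and example balances are hypothetical.

ACCOUNTING_PERIOD_DAYS = 28  # one postal accounting period

def required_action(balance: float, days_negative: int) -> str:
    """Return the procedural step owed for an overdrawn EMCA."""
    if balance >= 0:
        return "none"
    periods = days_negative // ACCOUNTING_PERIOD_DAYS
    if periods >= 3:                 # deficient beyond 84 days
        return "close account and refer to collection agency"
    if periods >= 1:                 # deficient for one full period
        return "send deficiency letter"
    return "monitor"

# An account negative for about 5 periods (roughly 140 days), as found in
# the New York district, should long since have been closed and referred:
print(required_action(-10000.0, 140))
```

Applied to the districts reviewed, accounts negative for 5 consecutive periods fall well past the 3-period closure threshold, which is why leaving them open violated the Service's own procedures.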
Further, allowing customers to overdraw EMCAs and maintain active EMCAs with negative balances for periods exceeding three accounting periods violates Postal Service procedures. The Postal Inspection Service has conducted financial audits that included a review of controls over EMCA operations. Postal inspectors in the New York district reported finding overdrawn EMCAs during five audits done since 1987. Some audits revealed that the total negative EMCA balances for the district exceeded $600,000. The inspectors reported that the Van Nuys district had EMCAs with negative balances at various times since 1988. For example, in 1994, the district had EMCA accounts with negative balances totaling about $122,000. As a result of financial audits, the Inspection Service also reported EMCAs with negative balances in many districts that we did not visit. In reports on districts with negative EMCA balances, the Inspection Service recommended that local management take action to eliminate such balances. Along with not verifying some EMCAs, Service employees at times did not make any record of accepting Express Mail packages that the Service processed and delivered. In these instances, the necessary information was not available to respond to customer inquiries about the status of packages and process requests for postage refunds when customers claimed that packages were delivered late. Also, in cases where EMCAs were to be charged, the Service lost some revenue because of the lack of acceptance data. Employees receiving Express Mail packages, whether EMCAs are used or not, are to electronically scan a barcode on the mailing labels to record data for tracking and reporting purposes. When the packages include EMCA numbers, employees are to record those numbers so that the Service can charge postage to the EMCA. 
Postal Service reports showed that, for the 12-month period ending February 1996, it delivered about 1.9 million domestic Express Mail packages, or 3.4 percent of total domestic Express Mail volume, for which the Service did not record any required acceptance data. Service officials in the three districts we visited said that recording Express Mail acceptance can be a problem when customers drop packages in collection boxes and employees are expected to record acceptance data when the packages arrive at a mail-processing plant. According to these officials, pressures to keep the mail moving and meet scheduled deadlines can result in some Express Mail being received, sorted, and delivered without proper acceptance. Service officials at headquarters and in the districts we visited routinely receive exception reports showing that Express Mail was delivered but not properly accepted. They said that generally no attempt is made to correct these errors or collect the postage due in cases where EMCAs are used. Specifically, district officials said that they were instructed by Service headquarters not to take any action in these cases. They also said that they did not have the employees needed to do follow-up, even if it were required. When the Service failed to record acceptance of Express Mail packages, it did not have data needed to respond to customers’ inquiries about the status of Express Mail packages. Because the packages were not logged in, the Service had no record to show when packages were received. The Service needs such data to verify whether Express Mail customers’ claims for postage refunds on late deliveries are valid. The Service guarantees that Express Mail packages will be delivered on time. In fiscal year 1995, the Service refunded postage to Express Mail customers totaling about $1.5 million. We did not determine if it had adequate data for determining whether the refund claims were valid. 
However, if the Service lacks data on when a package was accepted for delivery, it cannot determine whether the package was delivered on time or whether it was delivered late. Further, the Service regularly reports on-time delivery rates for Express Mail on the basis of the data that are to be recorded when packages are accepted and delivered. When acceptance data are not recorded, the Service has incomplete data to report on-time delivery rates for Express Mail. The Service lost unknown amounts of revenue because some customers had included EMCA numbers on Express Mail packages, but Service employees did not record any acceptance data. We scanned some Express Mail labels in the three selected districts and noted that all three had received some Express Mail packages from EMCA customers without recording acceptance data. In all three districts, the practice was to not follow up when customers used EMCAs; therefore, no Express Mail acceptance data were recorded. Postal Service officials in the three districts and at headquarters did not know the extent of EMCA revenue losses associated with the failure to record Express Mail acceptance data. We identified two Postal Service actions under way that could help to improve EMCA controls and thereby reduce related revenue losses and provide needed EMCA data. However, these actions were not fully implemented at the time of our review, and the actions do not address some EMCA control weaknesses that we identified. Recognizing the Postal Service’s overall vulnerability to revenue losses, in 1994, the Senior Vice President for Finance established a new revenue assurance unit to help collect revenue owed to the Service. The new unit targeted EMCAs as one of five Postal Service operations for improvement. The unit developed strategies, such as self-audits of EMCA activities, to reduce revenue losses resulting from EMCAs. 
At the time of our review, the strategies had not yet been fully implemented; and no results from the self-audits, or the unit’s other EMCA-related efforts, were available for our review. In addition to the above action, the Postal Service was installing “point-of-service” terminals at post offices to provide employees with improved access to current postage rates and certain other automated data maintained by the Postal Service. According to the headquarters manager responsible for the point-of-service terminal project, eventually, the terminals are to provide access to the EMCA database and thus enable employees to verify EMCA numbers and fund balances before accepting Express Mail packages. He said that the date and additional cost to provide this access are yet to be determined. The Service did not plan to provide the terminals to employees in mail-processing plants who accept Express Mail packages. These employees will still lack access to valid EMCA numbers and current fund balances, and the Service will continue to be vulnerable to revenue losses when customers drop Express Mail packages in collection boxes and include invalid EMCA numbers on the packages. Although completion of Service actions discussed above should help to improve controls over EMCAs and reduce related revenue losses, control weaknesses will remain. Taking additional steps to better ensure compliance with existing controls, as well as adding controls, can help to protect revenue. But, the Postal Service will incur cost to strengthen internal controls over EMCAs. Given this and other factors, such as changes that have occurred in the overnight mail delivery market and new methods of providing customer convenience, a reasonable step would be for the Postal Service to first ensure that it wants to retain EMCAs before incurring substantial, additional costs to improve related controls. 
The Postal Service introduced EMCAs in 1984 to help stem the decline in the growth of Express Mail business and become more competitive. As we previously reported, since that time, private carriers have dominated the expedited (overnight) delivery market. We reported that Federal Express is the acknowledged leader in this market and that the Postal Service’s share of the market declined from 100 percent in 1971 to 12 percent in 1990. Recognizing these market realities, in recent years, the Postal Service has focused marketing efforts more on Priority Mail—which generally is to be delivered in 2 or 3 days—than overnight Express Mail. Priority Mail accounted for almost 6 percent of total revenue in fiscal year 1995, compared with just over 1 percent for Express Mail. The Postal Service does not offer a corporate account for Priority Mail as it does for Express Mail, and the annual growth rate of Priority Mail pieces outpaced Express Mail growth in each of the past 5 fiscal years. (See figure 2.) Other factors also suggest that EMCAs may not be the most cost-effective method of offering payment convenience. Specifically, in 1994, the Postal Service began offering customers the use of major debit or credit cards (e.g., MasterCard, Visa, or American Express) to pay for various mail services at post offices. Customers who want to drop Express Mail packages in collection boxes currently have the option of using postage meters to pay postage. Thus, as one step toward addressing EMCA control problems, the Postal Service could compare the relative customer convenience, administrative cost, and risk of revenue losses of EMCAs with alternative payment methods currently available to Express Mail customers. The Postal Service could also consider competitors’ current customer service practices. On the basis of our limited inquiry, we found that some of the Postal Service’s competitors (i.e., Federal Express and United Parcel Service) offer corporate accounts to customers. 
For example, Federal Express offers customers a “FedEx” account and requires that applicants have a major credit card to qualify for an account. If the Postal Service determines that EMCAs are necessary or desirable, we identified two additional steps, beyond those now planned and under way, to help minimize the risk of EMCA abuse and revenue losses, as discussed below. First, while the self-audits proposed by the revenue assurance unit could help to improve compliance, the audits were just getting started at the time of our review. Express Mail packages can be accepted at about 40,000 post offices and several hundred mail-processing plants, and self-audits covering all of these entities will take some time to complete. Postal Service headquarters officials responsible for Express Mail operations could reinforce the need for managers and employees to comply with existing internal procedures and controls designed to prevent EMCA abuse. These procedures require employees to (1) record all required data from Express Mail labels, (2) verify EMCA numbers presented by customers against lists of invalid EMCA numbers, and (3) close EMCAs with negative balances running more than three postal accounting periods. Second, the Postal Service could improve EMCA internal controls by imposing more stringent requirements for opening EMCAs, such as requiring that individuals present a valid driver’s license, a valid major credit card, or other appropriate identification to receive an EMCA. If Postal Service employees approving EMCAs are required to record information from such sources about EMCA applicants, such information could be useful to the Service and its collection agency in locating and collecting postage from customers with overdrawn and closed EMCAs. Internal controls over EMCAs are weak or nonexistent, which has resulted in potential for abuse and increasing revenue losses over the past 3 fiscal years. 
Establishing adequate control over EMCA operations will require management attention and additional dollar investments. In light of the control problems we identified, overnight mail market developments since 1984, and the increased availability of other payment methods, EMCAs may not be the most cost-effective method of providing a convenient method for paying Express Mail postage. This question requires further evaluation by the Postal Service of all the relevant factors. If EMCAs are necessary or desirable, the Postal Service can take steps beyond those planned and under way to help minimize revenue losses and other problems associated with EMCAs. Some employees did not always comply with existing EMCA procedures for checking EMCA numbers and recording Express Mail data. Although acceptance employees are under pressure to move the mail and some have sidestepped some required tasks, management could emphasize to these employees the importance of following EMCA procedures and collecting the postage due when the Postal Service delivers mail. Further, the Postal Service violated its procedures by allowing customers to overdraw EMCAs and continue using them for up to 5 months. Currently, few requirements exist for customers to obtain EMCAs; and more stringent requirements for opening EMCAs, similar to those used by the Service’s competitors, might also help to avoid Express Mail revenue losses. To help reduce EMCA revenue losses and other related problems discussed in this report, we recommend that the Postmaster General require Service executives to determine if EMCAs are the most cost-effective method for achieving the purpose for which they were intended, in light of all relevant factors. 
If EMCAs are determined to be a necessary or desirable method, we recommend that the Service (1) establish stronger requirements for opening EMCAs and (2) hold managers and employees accountable for handling EMCA transactions in accordance with the new requirements as well as existing Service policies and procedures for verifying EMCA numbers, closing EMCAs with negative balances, and recording required data for all Express Mail packages accepted. In a September 9, 1996, letter, the Postmaster General said that the Postal Service agreed with our overall findings and conclusions. He said that the Service was moving forward with initiatives to cut down on revenue losses from invalid EMCAs. In addition to the two actions discussed previously in our report, he said that the Service will take the following actions to address our recommendations: (1) establish more stringent requirements for opening and using EMCAs, including a $250 deposit (in lieu of the $100 deposit now required) to open an account and weekly reviews of EMCA use at acceptance units to ensure that minimum balance requirements are met; (2) have area and district managers focus more consistent attention on ensuring that acceptance units follow EMCA procedures; (3) examine the feasibility and cost of installing terminals at mail-processing plants, in addition to the terminals being installed at many post offices, to check instantly whether EMCAs are valid and contain sufficient funds for Express Mail postage; and (4) evaluate whether continuing to offer EMCAs as a payment option still makes good business sense. The Service expects these corrective actions to go a long way toward minimizing the use of invalid EMCAs and revenue losses. We agree that, when the Service has fully implemented the actions taken and planned, controls over EMCAs are likely to be significantly improved. 
The Service will need to coordinate these EMCA improvement actions with its evaluation to determine whether to continue offering EMCAs. Otherwise, it could incur unnecessary cost of improving controls over EMCAs if later it determines that EMCAs do not make good business sense and should be discontinued. We are sending copies of this report to the Postmaster General, the Postal Service Board of Governors, the Ranking Minority Member of your Subcommittee, the Chairman and Ranking Minority Member of the Senate Oversight Committee for the Postal Service, and other congressional committees that have responsibilities for Postal Service issues. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix III. If you have any questions about this report, please call me on (202) 512-8387. EMCAs are available to both individual and business customers. Under current Service procedures, anyone can open an EMCA by depositing $100 or the customer’s estimated Express Mail postage for 2 weeks, whichever is higher. EMCA customers are required to maintain a minimum balance of $50 or 1 week’s Express Mail postage, whichever is higher, on deposit with the Service. Although Service officials said that the number of active EMCAs varies daily, Service records show that, during the month of February 1996, an average of about 113,000 EMCAs existed nationwide. When opening EMCAs, customers are to be given a six-digit EMCA number, and these numbers are to be included on mailing labels affixed to the Express Mail packages. Customers can drop the package in a collection box designated for Express Mail or take the package to a post office, mail-processing plant, or other places where the Service accepts Express Mail. 
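The deposit and minimum-balance rules above amount to two "whichever is higher" calculations, sketched below. The weekly postage estimates used in the example are illustrative only.

```python
# Minimal sketch of the EMCA deposit rules stated above; the weekly
# postage estimates are hypothetical examples.

def opening_deposit(weekly_postage_estimate: float) -> float:
    """Deposit to open an EMCA: $100 or 2 weeks' estimated postage,
    whichever is higher."""
    return max(100.0, 2 * weekly_postage_estimate)

def minimum_balance(weekly_postage_estimate: float) -> float:
    """Balance to maintain: $50 or 1 week's estimated postage,
    whichever is higher."""
    return max(50.0, weekly_postage_estimate)

print(opening_deposit(30.0))   # -> 100.0 (the $100 floor applies)
print(opening_deposit(80.0))   # -> 160.0 (2 weeks' postage exceeds $100)
print(minimum_balance(30.0))   # -> 50.0
```

As the example shows, the $100 and $50 floors govern only light mailers; for heavier mailers, the estimated-postage term controls both figures.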
Postmasters, clerks, or other Service employees accepting Express Mail at post offices and mail-processing plants are to electronically scan a preprinted barcode on the Express Mail label, which enters the label’s unique identifying number into an automated system for tracking purposes. The employees are to weigh the package, verify that the customer calculated the correct postage, and take steps as required to ensure the correct postage is collected. These steps are to be done for all Express Mail, whether an EMCA is used for payment or not. For those Express Mail packages involving an EMCA, employees accepting the package are to determine if the EMCA number on the package is invalid by manually comparing the number against a list of EMCA numbers that the Service has determined to be invalid. If the number on the package is not found on that list, employees are to manually key in the EMCA number and the postage due so that the amount can be charged to the EMCA. An EMCA is to be charged for the Express Mail package when employees record a valid EMCA number at the acceptance point and scan the Express Mail barcode. A sample Express Mail label follows, showing EMCA numbers and other data to be recorded by employees when they accept a package. Postal Service employees are to manually record an Express Mail Corporate Account number supplied by the customer in this block. Employees are to scan a barcode pre-printed by the Service on each Express Mail label. After acceptance is recorded, the Service is to track each Express Mail package until it reaches the delivery station near the home or business receiving the package. At these stations, Service employees are to again electronically scan the barcode on the Express Mail label before the package is delivered. The Service has an Electronic Marketing and Reporting System (EMRS) to record, track, and report on Express Mail transactions. 
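The acceptance-point check described above can be sketched as a simple lookup against the invalid-number list followed by a charge to the account. This is a minimal illustration, assuming an in-memory set of invalid numbers and a dictionary of balances; none of these names or structures come from the Service's actual systems.

```python
# EMCA numbers the Service has determined to be invalid (illustrative data).
INVALID_EMCAS = {"123456", "654321"}

def accept_express_mail(emca_number: str, postage_due: float,
                        balances: dict) -> str:
    """Charge Express Mail postage to an EMCA at the acceptance point.

    Mirrors the procedure in the text: reject numbers found on the
    invalid list; otherwise charge the postage to the account.
    """
    if emca_number in INVALID_EMCAS:
        return "rejected: invalid EMCA"
    balances[emca_number] = balances.get(emca_number, 0.0) - postage_due
    return "charged"

balances = {"200100": 75.0}
print(accept_express_mail("123456", 10.75, balances))  # rejected: invalid EMCA
print(accept_express_mail("200100", 10.75, balances))  # charged
print(balances["200100"])                              # 64.25
```

Note that the manual comparison the report describes is exactly the weak point: the check happens only if the employee actually consults the list, which is why the Service's planned automated terminals matter.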
The system is used to receive and compare the Express Mail identification numbers scanned by postal employees at the post offices or mail-processing plants and delivery stations. If the comparisons show no match between the scanned barcodes entered at the points of acceptance and delivery, exception reports are to be prepared and made available to Service officials each workday for follow-up action. EMRS also generates reports showing (1) pieces of mail charged to invalid EMCAs; (2) Express Mail packages scanned at either the acceptance point or the delivery point, but not both; and (3) EMCAs with insufficient or negative fund balances. Along with these exception reports, the system generates other reports every 4 weeks for use by Service officials and, in some cases, EMCA customers. Among these reports are those that show Express Mail volume and revenue, on-time delivery rates, and refunds of postage for late delivery. Service officials and each EMCA customer are to receive a report every 4 weeks showing the beginning EMCA balance, number of packages mailed, amount of postage charged during the preceding 4-week period, ending and minimum balances, and any additional deposit required by the customer.
Sherrill Johnson, Core Group Manager
Raimondo Occhipinti, Evaluator-in-Charge
Hugh Reynolds, Evaluator
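The acceptance/delivery matching that EMRS performs can be illustrated as a set comparison over scanned label numbers. This is a sketch under the assumption that scans are collected as sets of identification numbers; the data and variable names are invented for illustration.

```python
# Barcode identification numbers scanned at acceptance and at delivery
# (illustrative data, not actual Express Mail label numbers).
accepted = {"EM001", "EM002", "EM003", "EM004"}
delivered = {"EM002", "EM003", "EM005"}

# Packages scanned at one point but not the other would appear on the
# daily exception report for follow-up.
accepted_not_delivered = sorted(accepted - delivered)
delivered_not_accepted = sorted(delivered - accepted)

print(accepted_not_delivered)  # ['EM001', 'EM004']
print(delivered_not_accepted)  # ['EM005']
```

The second list corresponds to the report's category of packages scanned at the delivery point but never recorded at acceptance, the case in which postage may never have been charged to any account.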
Pursuant to a congressional request, GAO reviewed the U.S. Postal Service's controls over Express Mail Corporate Accounts (EMCA), focusing on: (1) whether there is any basis for the allegation of EMCA abuse; and (2) if so, what steps the Service is taking to help avoid or minimize EMCA revenue losses. GAO found that: (1) some mailers obtained Express Mail services using invalid EMCA in fiscal year 1995; (2) the Service did not collect the postage due or verify EMCA which were later determined to be invalid; (3) some EMCA customers overdrew their accounts and carried negative balances; (4) the Service plans to provide post office employees with automated access to valid EMCA numbers and fund balances, but has no plans to provide similar access to employees at mail-processing plants; (5) although the Service's planned actions to improve controls over EMCA operations will take a considerable amount of money and time to complete, they will not have addressed several other EMCA control weaknesses; (6) to determine whether EMCA continue to be necessary or desirable, the Service could evaluate the relative customer convenience, cost-effectiveness, and other relevant factors; and (7) if EMCA are continued, Service employees need to follow new and existing procedures designed to help prevent EMCA revenue losses.
Before I discuss our review of agencies’ fiscal year 2005 PARs, I would like to summarize IPIA, related OMB initiatives, and statutory requirements for recovery audits. The act, passed in November 2002, requires agency heads to review their programs and activities annually and identify those that may be susceptible to significant improper payments. For each program and activity agencies identify as susceptible, the act requires them to estimate the annual amount of improper payments and submit those estimates to the Congress. The act further requires that for programs for which estimated improper payments exceed $10 million, agencies are to report annually to the Congress on the actions they are taking to reduce those payments. The act requires the Director of OMB to prescribe guidance for federal agencies to use in implementing IPIA. OMB issued guidance in May 2003 requiring the use of a systematic method for the annual review and identification of programs and activities that are susceptible to significant improper payments. The guidance defines significant improper payments as those in any particular program that exceed both 2.5 percent of program payments and $10 million annually. It requires agencies to estimate improper payments annually using statistically valid techniques for each susceptible program or activity. For those agency programs determined to be susceptible to significant improper payments and with estimated annual improper payments greater than $10 million, IPIA and related OMB guidance require each agency to report the results of its improper payment efforts for fiscal years ending on or after September 30, 2004. OMB guidance requires the results to be reported in the Management Discussion and Analysis section of the agency’s PAR. In August 2004, OMB established Eliminating Improper Payments as a new program-specific initiative under the PMA. This separate improper payments PMA program initiative began in the first quarter of fiscal year 2005. 
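OMB's definition of significant improper payments is a two-part test: the estimate must exceed both 2.5 percent of program payments and $10 million annually. A minimal sketch of that test follows; the function name is an assumption, not OMB terminology.

```python
def is_significant(program_payments: float, improper_payments: float) -> bool:
    """OMB's May 2003 test: improper payments are 'significant' only if
    they exceed both 2.5 percent of program payments and $10 million."""
    return (improper_payments > 0.025 * program_payments
            and improper_payments > 10_000_000)

# A small program can exceed 2.5 percent without reaching $10 million...
print(is_significant(100_000_000, 5_000_000))      # False (5% but under $10M)
# ...and a large program can exceed $10 million while staying under 2.5%.
print(is_significant(10_000_000_000, 50_000_000))  # False ($50M but 0.5%)
print(is_significant(1_000_000_000, 30_000_000))   # True (3% and $30M)
```

Because both conditions must hold, very large programs can report substantial dollar amounts of improper payments without crossing OMB's significance threshold.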
Previously, agency efforts related to improper payments were tracked along with other financial management activities as part of the Improving Financial Performance initiative of the PMA. The objective of establishing a separate initiative for improper payments was to ensure that agency managers are held accountable for meeting the goals of IPIA and are therefore dedicating the necessary attention and resources to meeting IPIA requirements. With this new initiative, 15 agencies are to measure their improper payments annually, develop improvement targets and corrective actions, and track the results annually to ensure the corrective actions are effective. In August 2005, OMB revised Circular No. A-136, Financial Reporting Requirements, and incorporated IPIA reporting details from its May 2003 IPIA implementing guidance. Among other things, OMB Circular No. A-136 includes requirements for agencies to report on their risk assessments; annual improper payment estimates; corrective action plans; and recovery auditing efforts, including the amounts recovered in the current year. Section 831 of the National Defense Authorization Act for Fiscal Year 2002 contains a provision that requires all executive branch agencies entering into contracts with a total value exceeding $500 million in a fiscal year to have cost-effective programs for identifying errors in paying contractors and for recovering amounts erroneously paid. The legislation further states that a required element of such a program is the use of recovery audits and recovery activities. The law authorizes federal agencies to retain recovered funds to cover in-house administrative costs as well as to pay contractors, such as collection agencies. Agencies that are required to undertake recovery audit programs were directed by OMB to provide annual reports on their recovery audit efforts, along with improper payment reporting details in an appendix to their PARs. 
The fiscal year 2005 PARs, the second set of reports representing the results of agency assessments of improper payments for all federal programs, were due November 15, 2005. In our December 2005 report on the U.S. government’s consolidated financial statements for the fiscal years ended September 30, 2005 and 2004, which includes our associated opinion on internal control, we reported improper payments as a material weakness in internal control. Specifically, we reported that while progress had been made to reduce improper payments, significant challenges remain to effectively achieve the goals of IPIA. We reviewed the fiscal year 2005 PARs or annual reports for 32 of the 35 federal agencies that the Treasury determined to be significant to the U.S. government’s consolidated financial statements. Of those 32 agencies, 23 reported that they had completed risk assessments for all programs and activities. See appendix II for detailed information on each agency. This was the same number of agencies that reported having completed risk assessments in our prior year review. The remaining 9 agencies either were silent on IPIA reporting details in their PARs or annual reports or had not yet assessed the risk of improper payments for all their programs. In addition, we noted that selected agency auditors reviewed agencies’ risk assessment methodologies and identified issues of noncompliance or other deficiencies. For example, auditors for the Departments of Justice and Homeland Security cited agency noncompliance with IPIA in their fiscal year 2005 annual audit reports, primarily caused by inadequate risk assessments. The Department of Justice auditor stated that one agency component had not established a program to assess, identify, and track improper payments. The agency acknowledged this noncompliance in its PAR as well. 
The Department of Homeland Security (DHS) auditor reported that the department did not institute a systematic method of reviewing all programs and identifying those it believed were susceptible to significant erroneous payments. This was the second consecutive year that the auditor reported IPIA noncompliance for DHS. Although the auditors identified the agency’s risk assessment methodology as inadequate, DHS reported in its PAR that it had assessed all of its programs for risk. A third agency auditor reported that the Department of Agriculture needed to strengthen its program risk assessment methodology to identify and test critical internal controls over program payments totaling over $100 million. As I highlighted in my introduction, federal agencies’ reported estimates of improper payments for fiscal year 2005 exceeded $38 billion. This represents almost a $7 billion, or 16 percent, decrease in the amount of improper payments reported by 17 agencies in fiscal year 2004. On the surface, this appears to be good news. However, the magnitude of the governmentwide improper payment problem remains unknown. This is because, in addition to not assessing all programs, some agencies had not yet prepared estimates of significant improper payments for all programs determined to be at risk. Specifically, of the 32 agency PARs included in our review, 18 agencies reported improper payment estimates totaling in excess of $38 billion for some or all of their high-risk programs. The $38 billion represents estimates for 57 programs. Of the remaining 14 agencies that did not report estimates, 8 said they did not have any programs susceptible to significant improper payments, 5 were silent about whether they had programs susceptible to significant improper payments, and the remaining 1 identified programs susceptible to significant improper payments and said it plans to report an estimate by fiscal year 2007. Further details are included in appendix I. 
Regarding the reported $7 billion decrease in the governmentwide improper payment estimate for fiscal year 2005, we determined that this decrease was primarily due to a $9.6 billion reduction in the Department of Health and Human Services’s (HHS) Medicare program improper payment estimate, which was partially offset by more programs reporting estimates of improper payments, resulting in a net decrease of $7 billion. Based on our review, HHS’s $9.6 billion decrease in its Medicare program improper payment estimate was principally due to its efforts to educate health care providers about its Medicare error rate testing program and the importance of responding to its requests for medical records to perform detailed statistical reviews of Medicare payments. HHS reported that these more intensive efforts had dramatically reduced the number of “no documentation” errors in its medical reviews. The relevance of this significant decrease is that when providers do not submit documentation to justify payments, these payments are counted as erroneous for purposes of calculating an annual improper payment estimate for the Medicare program. HHS reported marked reductions in its error rate attributable to (1) nonresponses to requests for medical records and (2) insufficient documentation submitted by the provider. We noted that these improvements partially resulted from HHS extending the time that providers have for responding to documentation requests from 55 days to 90 days. These changes primarily affected HHS’s processes related to its efforts to perform detailed statistical reviews for the purposes of calculating an annual improper payment estimate for the Medicare program. While this may represent a refinement in the program’s improper payment estimate, the reported reduction may not reflect improved accountability over program dollars. Our work did not include an overall assessment of HHS’s estimating methodology. 
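The reported figures reconcile arithmetically: the $9.6 billion Medicare reduction, partly offset by newly reporting programs, nets to roughly the $7 billion governmentwide decrease. The offset amount and the prior-year total below are derived from the reported figures, not separately stated in the source.

```python
net_decrease = 7.0        # reported governmentwide decrease, $ billions
medicare_reduction = 9.6  # reported Medicare estimate reduction, $ billions
fy2004_total = 38.0 + net_decrease  # approximate fiscal year 2004 total

# Offsetting increase implied by the other figures: additional programs
# reporting estimates added back roughly $2.6 billion.
implied_offset = medicare_reduction - net_decrease
print(round(implied_offset, 1))  # 2.6

# The roughly 16 percent figure: decrease relative to the prior-year total.
print(round(100 * net_decrease / fy2004_total, 1))  # 15.6
```

The point of the arithmetic is that nearly all of the headline improvement traces to a single program's estimating change rather than to broad-based reductions.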
However, we noted that the changes made for the fiscal year 2005 estimate were not related to improvements in prepayment processes, and we did not find any evidence that HHS had significantly enhanced its preventive controls in the Medicare payment process to prevent future improper payments. Therefore, the federal government’s progress in reducing improper payments may be exaggerated because the reported improper payments decrease in the Medicare program accounts for the bulk of the overall reduction in the governmentwide improper payments estimate. Mr. Chairman, I think the only valid observation at this time is that improper payments are a serious problem, agencies are working on this issue at different paces, and the extent of the problem and the level of effort necessary to control these losses is as yet unknown. What is clear is that there is a lot of work to do in this area. Agency auditors have reported major management challenges related to agencies’ improper payment estimating methodologies and highlighted internal control weaknesses that continue to plague programs susceptible to significant improper payments. For example, the Department of Labor’s agency auditor reported that inadequate controls existed in the processing of medical bill payments for its Federal Employee Compensation Act program. As a result, medical providers were both overpaid and underpaid. Internal control weaknesses were also identified in the Small Business Administration’s (SBA) 7(a) Business Loan program. SBA did not consistently identify instances of noncompliance with its own requirements, resulting in improper payments. In another example, agency auditors for the Department of Education (Education) raised concerns about the methodology Education used to estimate improper payments for its Federal Student Aid program. The auditors reported that the methodology used did not provide a true reflection of the magnitude of improper payments in the student loan programs. 
To overcome these major management challenges, agencies will need to aggressively deploy more innovative and sophisticated approaches to correct such deficiencies and identify and reduce improper payments. Also, I would like to point out that the fiscal year 2005 governmentwide improper payments estimate of $38 billion did not include seven major programs, with outlays totaling over $227 billion for fiscal year 2005. OMB had specifically required these seven programs to report selected improper payment information for several years before IPIA reporting requirements became effective. After passage of IPIA, OMB’s implementing guidance required that these programs continue to report improper payment information under IPIA. As shown in table 1, the fiscal year 2005 governmentwide improper payment estimate does not include one of the largest federal programs determined to be susceptible to risk, HHS’s Medicaid program, with outlays exceeding $181 billion annually. Of these seven programs, four programs reported that they would be able to estimate and report on improper payments sometime within the next 3 fiscal years, but could not do so for fiscal year 2005. For the remaining three programs, the agencies did not estimate improper payment amounts in their fiscal year 2005 PARs and were silent about whether they would report estimates in the future. As a result, improper payments for these programs susceptible to risk will not be known for at least several years, even though these agencies had been required to report this information since 2002, with their fiscal year 2003 budget submissions under previous OMB Circular No. A-11 requirements. OMB reported that some of the agencies were unable to determine the rate or amount of improper payments because of measurement challenges or time and resource constraints, which OMB expects to be resolved in future reporting years. 
However, in the case of the HHS programs, the agency auditor recognized this lack of reporting as a reportable condition. In its fiscal year 2005 audit report on compliance with laws and regulations, the auditor reported that HHS potentially had not fully complied with IPIA because nationwide improper payment estimates and rates for significant health programs were under development and the agency did not expect to complete the estimation process until fiscal year 2007. Another factor which may affect the magnitude of improper payments is Hurricane Katrina, one of the largest natural disasters in our nation’s history. In order to respond to the immediate needs of disaster victims and to rebuild the affected areas, government agencies streamlined eligibility verification requirements for delivery of benefits and expedited contracting methods in order to commit contractors to begin work immediately. These expedited processes can increase the potential for improper payments. For example, from our recent review of the Federal Emergency Management Agency’s (FEMA) Individuals and Households Program we identified significant flaws in the process for registering disaster victims for assistance payments. We found limited procedures in place designed to prevent, detect, and deter certain types of duplicate and potentially fraudulent disaster registrations. As a result, we determined that thousands of registrants provided incorrect Social Security numbers, dates of birth, and addresses to obtain assistance and found that FEMA made duplicate assistance payments to about 5,000 of the nearly 11,000 debit card recipients. In one example of expedited contracting, the Department of Transportation (DOT) Office of Inspector General (OIG) determined that DOT had overpaid a contractor by approximately $32 million for services to provide buses for evacuating hurricane victims from the New Orleans area. 
According to the OIG, the overpayment occurred because DOT had made partial payments based on initial task estimates and without documentation that substantiated the dollar amount of services actually provided to date. Although DOT promptly recovered the funds, the nature of these types of exigencies to adequately respond to the hurricane victims illustrates that future improper payments are likely to occur. As a result, selected agencies, such as DHS and DOT, have said they plan to perform concentrated reviews of payments related to relief efforts to identify the extent of improper payments, develop actions to reduce these types of payments, and enhance internal controls for future relief efforts. Section 831 of the National Defense Authorization Act for Fiscal Year 2002 provides an impetus for applicable agencies to systematically identify and recover contract overpayments. Recovery auditing is another method that agencies can use to recoup detected improper payments. Recovery auditing focuses on the identification of erroneous invoices, discounts offered but not received, improper late penalty payments, incorrect shipping costs, and multiple payments for single invoices. Recovery auditing can be conducted in-house or contracted out to recovery audit firms. The law authorizes federal agencies to retain recovered funds to cover in-house administrative costs as well as to pay contractors, such as collection agencies. Any residual recoveries, net of these program costs, shall be credited back to the original appropriation from which the improper payment was made, subject to restrictions as described in legislation. As we previously reported, with the passage of this law, the Congress has provided agencies a much needed incentive for identifying and reducing their improper payments that slip through agency prepayment controls. 
The techniques used in recovery auditing offer the opportunity for identifying weaknesses in agency internal controls, which can be modified or upgraded to be more effective in preventing improper payments before they occur. For fiscal year 2005, OMB clarified the type of recovery auditing information that applicable agencies are to report in their annual PARs. Prior to fiscal year 2005, applicable agencies were only required to report on the amount of recoveries expected, the actions taken to recover them, and the business process changes and internal controls instituted or strengthened to prevent further occurrences. In addition, OMB was not reporting on a governmentwide basis agencies’ recovery audit activities in its annual report on agencies’ efforts to improve the accuracy and integrity of federal payments. In fiscal year 2005, OMB revised its recovery auditing reporting requirements and required applicable agencies to provide more detailed information on their recovery auditing activities. Specifically, in addition to the prior year requirements, agencies that entered into contracts with a total value exceeding $500 million annually were required to discuss any contract types excluded from review and justification for doing so. In addition, agencies were required to report, in table format, various amounts related to contracts subject to review and actually reviewed, contract amounts identified for recovery and actually recovered, and prior year amounts. For fiscal year 2005, 19 agencies reported entering into contracts with a total value in excess of the $500 million reporting threshold. These 19 agencies reported reviewing more than $300 billion in contract payments to vendors. From these reviews, agencies reported identifying about $557 million in improper payments for recovery and reported actually recovering about $467 million, as shown in table 2. 
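The fiscal year 2005 totals in the text yield simple recovery ratios. This is a back-of-the-envelope calculation on the reported figures, not an official OMB metric.

```python
reviewed = 300e9    # contract payments reviewed (reported lower bound), $
identified = 557e6  # improper payments identified for recovery, $
recovered = 467e6   # amount actually recovered, $

# Agencies recovered roughly 84 cents of every dollar identified...
print(round(100 * recovered / identified, 1))  # 83.8
# ...and identified improper payments in about 0.19% of dollars reviewed.
print(round(100 * identified / reviewed, 2))   # 0.19
```

The small identification rate relative to dollars reviewed is consistent with recovery auditing's role as a backstop for payments that slip through prepayment controls rather than a primary control.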
In closing, I want to say that we recognize that measuring improper payments and designing and implementing actions to reduce them are not simple tasks and will not be easily accomplished. The ultimate success of the governmentwide effort to reduce improper payments depends, in part, on each federal agency’s continuing diligence and commitment to meeting the requirements of IPIA and the related OMB guidance. The level of importance each agency, the administration, and the Congress place on the efforts to implement the act will determine its overall effectiveness and the level to which agencies reduce improper payments and ensure that federal funds are used efficiently and for their intended purposes. With budgetary pressures rising across the federal government, and the Congress’s and the American public’s increasing demands for accountability over taxpayer funds, identifying, reducing, and recovering improper payments become even more critical. Fulfilling the requirements of IPIA will require sustained attention to implementation and oversight to monitor whether desired results are being achieved. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have. For more information regarding this testimony, please contact McCoy Williams, Director, Financial Management and Assurance, at (202) 512-9095 or by e-mail at williamsm1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony included Carla Lewis, Assistant Director; Francine DelVecchio; Christina Quattrociocchi; and Donell Ries.
[Appendix table fragments, only partly recoverable: numbered lists of programs that agencies reported were not susceptible to significant improper payments, including entries reading “Section 8 Tenant,” “Section 8 Project,” “Development Block Grant (Entitlement Grants, States/Small Cities),” “Education Grants and Cooperative Agreements,” and a retirement program covering the Civil Service Retirement System and Federal Employees Retirement System. Table notes refer to table 1 of this testimony and flag programs for which agency fiscal year 2005 PAR or annual report information was not available as of the end of our fieldwork, or for which the agency did not address improper payments or Improper Payments Information Act (IPIA) requirements in its fiscal year 2005 PAR or annual report.]
Financial Management: Challenges in Meeting Governmentwide Improper Payment Requirements. GAO-05-907T. Washington, D.C.: July 20, 2005.
Financial Management: Challenges in Meeting Requirements of the Improper Payments Information Act. GAO-05-605T. Washington, D.C.: July 12, 2005.
Financial Management: Challenges in Meeting Requirements of the Improper Payments Information Act. GAO-05-417. Washington, D.C.: March 31, 2005.
Financial Management: Fiscal Year 2003 Performance and Accountability Reports Provide Limited Information on Governmentwide Improper Payments. GAO-04-631T. Washington, D.C.: April 15, 2004.
Financial Management: Status of the Governmentwide Efforts to Address Improper Payment Problems. GAO-04-99. Washington, D.C.: October 17, 2003.
Financial Management: Effective Implementation of the Improper Payments Information Act of 2002 Is Key to Reducing the Government’s Improper Payments. GAO-03-991T. Washington, D.C.: July 14, 2003.
Financial Management: Challenges Remain in Addressing the Government’s Improper Payments. GAO-03-750T. Washington, D.C.: May 13, 2003.
Improper payments are a longstanding, widespread, and significant problem in the federal government. The Congress enacted the Improper Payments Information Act of 2002 (IPIA) to address this issue. Fiscal year 2005 marked the second year that federal agencies governmentwide were required to report improper payment information under IPIA. One result of IPIA has been increased visibility over improper payments by requiring federal agencies to identify programs and activities susceptible to improper payments, estimate the amount of their improper payments, and report on the amounts of improper payments and their actions to reduce them in their annual performance and accountability reports (PAR). GAO was asked to testify on the progress being made by agencies in complying with requirements of IPIA and the magnitude of improper payments. As part of the review, GAO looked at (1) the extent to which agencies have performed risk assessments, (2) the annual amount of improper payments estimated, and (3) the amount of improper payments recouped through recovery audits. The federal government continues to make progress in identifying programs susceptible to the risk of improper payments in addressing the new IPIA requirements. At the same time, significant challenges remain to effectively achieve the goals of IPIA. The 32 fiscal year 2005 PARs GAO reviewed show that some agencies still have not instituted systematic methods of reviewing all programs and activities, have not identified all programs susceptible to significant improper payments, or have not annually estimated improper payments for their high-risk programs as required by the act. The full magnitude of the problem remains unknown because some agencies have not yet prepared estimates of improper payments for all of their programs. Of the 32 agencies reviewed, 18 reported over $38 billion of improper payments in 57 programs. 
This represented almost a $7 billion, or 16 percent, decrease in the amount of improper payments reported by 17 agencies in fiscal year 2004. However, the governmentwide improper payments estimate does not include 7 major agency programs with outlays totaling about $228 billion. Further, agency auditors have identified major management challenges related to agencies' improper payment estimating methodologies and significant internal control weaknesses for programs susceptible to significant improper payments. In addition, two agency auditors cited noncompliance with IPIA in their annual audit reports. For fiscal year 2005 PARs, agencies that entered into contracts with a total value exceeding $500 million annually were required to report additional information on their recovery audit efforts. Nineteen agencies reported reviewing over $300 billion in vendor payments, identifying approximately $557 million to be recovered, and actually recovering about $467 million.
According to IRS, the agency’s overall mission is to collect the proper amount of tax revenue at the least cost; serve the public by continually improving the quality of its products and services; and perform in a manner warranting the highest degree of public confidence in IRS’ integrity, efficiency, and fairness. Its strategic objectives are to improve customer service, increase compliance with the tax laws, and increase IRS’ productivity. Essentially, IRS is striving to encourage taxpayers to pay what they owe, reduce taxpayers’ cost to get answers to their questions and prepare their tax returns, and reduce IRS’ cost to collect federal taxes. In the mid-1980s, IRS’ Strategic Business Plan first provided IRS’ mission statement, objectives, general strategies, and goals. IRS created the Fiscal Year 1995-2001 Business Master Plan (BMP) to incorporate IRS’ vision and long-range objectives. The fiscal year 1996 BMP formalized IRS’ “measures hierarchy,” which was intended to link IRS’ mission, objectives, and annual performance goals with respective programs. For fiscal year 1997, IRS replaced the BMP with the Strategic Plan and Budget and the Annual Performance Plan. On September 30, 1997, IRS released a new Strategic Plan that updates its strategic measures. IRS expects to use these documents to provide guidance to its field offices and to implement the Results Act. According to IRS, improving taxpayer service is one of its highest priorities, and it has a variety of programs and operational units to assist taxpayers in meeting their federal tax obligations. A primary source of taxpayer assistance is IRS’ 24 customer service centers, which are to answer calls from taxpayers who have questions about the tax laws, where to file returns, or the status of their accounts and refunds. According to IRS, in fiscal year 1996, the centers answered over 99 million taxpayer calls about tax law and procedures. 
Other sources of assistance include IRS’ walk-in sites, taxpayer education and outreach programs, Problem Resolution Program, and Internet web site. According to IRS, about 440 walk-in sites helped almost 6.4 million taxpayers in fiscal year 1996 with tax forms, questions about their accounts, or preparing tax returns. IRS’ taxpayer education and outreach programs assist millions of taxpayers at various community locations, often with the help of volunteers and nonprofit organizations. For example, almost 12.7 million taxpayers in fiscal year 1996 received free tax information and return preparation through IRS’ Volunteer Income Tax Assistance, Tax Counseling for the Elderly, and other outreach programs. The Problem Resolution Program staff assists taxpayers who have such problems as repeated unsuccessful attempts to resolve an issue or a pending IRS enforcement action that might cause undue hardship, such as the seizure of a taxpayer’s property. Also, taxpayers may use the Internet to obtain forms and instructions, publications, information on tax topics, and press releases. According to IRS, its web site had 73 million “hits” in fiscal year 1996. IRS has had several efforts under way to improve its performance measures, and over the past several months it has consulted with many stakeholders; a task force has been formed with representatives from Treasury and OMB to develop a “balanced scorecard.” For example, the agency established the Measures Advisory Group, comprising field and National Office executives, in part to provide advice and recommendations on the agency’s performance measures. As the group suggested, IRS recently developed three new performance measures for its customer service centers: (1) customers successfully served per dollars expended, (2) dollars collected per dollars expended, and (3) taxpayers gaining access to telephone assistance as a percentage of demand. 
IRS’ September 30, 1997, Strategic Plan included the first measure as a strategic-level indicator for increasing productivity and the third measure as a strategic-level indicator for improving customer service. IRS plans to use the second measure as a program-level indicator. The Results Act requires federal agencies to measure the results of their programs and operations. Agencies are expected to set goals, measure performance, make needed improvements, and report results. The Results Act required executive agencies, no later than September 30, 1997, to have developed strategic plans covering a period of at least 5 years and to have submitted them to Congress and the Office of Management and Budget (OMB). Strategic plans are intended to be the framework for each agency’s performance measurement system. The Results Act also requires agencies to develop annual performance plans that are intended to reinforce the link between strategic goals and day-to-day activities. The first annual performance reports, covering fiscal year 1999, are due by March 31, 2000. Implementation of the Results Act requires adequate and reliable performance measures that are useful in improving agency and program performance, improving accountability, or supporting policy decisionmaking. IRS recognizes that collecting such data can be costly and difficult. As with other federal agencies, IRS will have to balance the cost of data collection efforts against the need to ensure that the collected data are complete, accurate, and consistent enough to document performance and support decisionmaking at various organizational levels. In conjunction with developing the required strategic plans, federal agencies are required to solicit the views of other stakeholders to clarify their missions and reach agreement on their goals. This statutory requirement was, in part, designed to address instances where Congress, the agency, and other interested parties may disagree because of competing priorities. 
Our objectives were to (1) describe IRS’ system of performance measures and (2) identify any challenges IRS faces in developing and implementing performance measures to gauge its efforts to reduce taxpayer burden through improved customer service. To describe IRS’ performance measures, we reviewed IRS’ fiscal year 1997 Strategic Plan and Budget, including the Annual Performance Plan; its updated September 30, 1997, Strategic Plan; and other planning documents, including IRS’ fiscal year 1996 Business Master Plan and Business Review. We also interviewed the staff of the National Director of Compliance Research and the National Director and staff of the Strategic Planning Division, who are responsible for developing the Annual Performance Plan; and officials in the Analysis and Studies Division, who conduct the Business Review and are responsible for establishing selected strategic measures. To identify any challenges IRS faces in developing and implementing performance measures to gauge its efforts to reduce taxpayer burden through improved customer service, we reviewed IRS’ strategic-level measures to improve customer service in its September 30, 1997, Strategic Plan for fiscal years 1997 through 2002; its fiscal year 1997 Strategic Plan; and selected program-level customer service measures in its Annual Performance Plan for fiscal year 1997. Using criteria drawn from the steps and critical practices set forth in GAO’s Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996), we examined the strategic-level and new customer service measures to determine if they were based on sound methodologies and were useful in improving agency and program performance and in supporting agency policy decisionmaking. 
For example, we analyzed the strategic-level taxpayer burden indicator to determine whether it (1) was linked to the burden IRS can influence and the services it provides and (2) measured the full range of costs that taxpayers incur, including the costs they incur after they file their returns. Similarly, we examined IRS’ definition of initial contact resolution to determine what services IRS measures and the contacts that are counted as successful. We reviewed the fiscal year 1997 Performance Plan to determine whether IRS had comparable program-level indicators for the different sources of assistance, including customer service centers, walk-in sites, the Problem Resolution Program, the Education and Outreach Program, and the Internet web site. We selected these five units or programs because they are primary sources of assistance for taxpayers who need help from IRS. Additionally, we examined IRS’ definition of “customers successfully served” to determine whether IRS considered the quality of the service, such as how many times the taxpayers had to call before being assisted. We also interviewed IRS’ National Office, Atlanta Service Center, Southeast Region, Georgia District, and Nashville District officials who were responsible for developing, implementing, or using the customer service performance measures to determine the status of IRS’ system of performance measures and to obtain an understanding of the newly developed customer service measures. We did our work from September 1996 through December 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue or his designated representative. Responsible IRS officials, including the Chief, Management and Administration; the National Director, Strategic Planning Division; and staff of the Executive Officer for Customer Service provided oral comments and factual clarifications in a January 21, 1998, meeting. 
We have incorporated those comments in the report where appropriate. The Commissioner of IRS provided us written comments on January 23, 1998, which are discussed near the end of this report and reproduced in appendix III. IRS’ system of performance measures has three tiers: mission, strategic, and program. IRS has 1 mission effectiveness measure, 3 strategic objectives with 9 measures, and 111 program measures, as depicted in figure 1. See appendix I for definitions of mission-level, strategic-level, and selected customer service program-level measures. IRS’ mission-level effectiveness indicator (MEI) is intended to measure the agency’s performance in accomplishing its primary mission of collecting the proper amount of tax revenue at the least cost. The MEI compares total revenue collected during a fiscal year, less the cost of collecting the revenue (the sum of IRS’ budget and estimated taxpayers’ costs), to the revenue that should have been collected if all taxpayers had paid their full liability. With the MEI, IRS has a mission-level performance indicator that includes the taxpayer compliance rate, the cost or burden to taxpayers of complying with the tax laws, and the cost of operating IRS. The second tier of measures includes nine performance indicators that are intended to gauge IRS’ progress in achieving its three strategic objectives—improve customer service, increase compliance, and increase productivity. These three objectives link directly to the MEI, because improving customer service reduces taxpayer burden, increasing compliance increases the compliance rate, and increasing productivity reduces IRS cost. To improve customer service, IRS seeks to better serve the public, reduce taxpayer burden, and increase public confidence in the tax administration system. IRS seeks to improve taxpayer access, resolve as many inquiries as possible on the first contact, and increase customer satisfaction. 
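The MEI arithmetic described above is a single ratio: revenue collected, less the cost of collecting it (IRS’ budget plus estimated taxpayer costs), divided by the revenue owed under full compliance. The following Python sketch is illustrative only; the function name and sample figures are hypothetical, not IRS data.

```python
def mission_effectiveness_indicator(total_revenue, irs_budget,
                                    taxpayer_burden_cost, total_tax_liability):
    """Sketch of the MEI as the report describes it: net revenue
    (collections minus the combined IRS and taxpayer cost of collecting
    them) compared with the revenue that full compliance would yield."""
    net_revenue = total_revenue - (irs_budget + taxpayer_burden_cost)
    return net_revenue / total_tax_liability

# Hypothetical figures (in billions of dollars), for illustration only
mei = mission_effectiveness_indicator(total_revenue=1500, irs_budget=8,
                                      taxpayer_burden_cost=100,
                                      total_tax_liability=1700)
```

Because taxpayer burden enters the numerator as a cost, any error in the burden estimate flows directly into the MEI, which is why the report treats the burden measure’s flaws as a limitation of the mission-level indicator as well.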
IRS states that improving customer service supports its mission to collect the proper amount of tax at the least cost to taxpayers and IRS. IRS uses five indicators to gauge its progress in improving customer service: (1) taxpayer burden cost for IRS to collect $100, (2) initial contact resolution rate, (3) toll-free telephone level of access, (4) tax law accuracy rate for taxpayer inquiries, and (5) customer satisfaction rates (being developed at the time of our review). According to IRS, the taxpayer burden measure is the principal measure of its efforts to improve customer service. To increase compliance, IRS seeks to encourage and assist taxpayers to file timely and accurate returns and to pay their taxes on time. If taxpayers do not comply, IRS intends to take appropriate action to force the taxpayers to comply. Also, to help improve customer satisfaction, IRS intends to treat taxpayers with courtesy, fairness, and professionalism. IRS uses two indicators to gauge its progress in increasing compliance: (1) total collection percentage and (2) total net revenue collected. According to IRS, the principal measure of taxpayer compliance is the total collection percentage, or the comparison of the revenue IRS collects with the total tax liability. To increase productivity, IRS seeks to continually improve operations and the quality of products and services it provides to customers through reengineering and a highly skilled work force. IRS states that accomplishing this objective will increase compliance, improve customer service, and reduce the cost of tax administration. To gauge its progress toward accomplishing this objective, IRS uses two indicators: (1) budget cost to collect $100 and (2) customers successfully served per dollars expended. According to IRS, its principal productivity measure is the amount it spends to collect $100 as measured by comparing IRS’ budget to the revenues it collects. 
The third tier of measures—111 in all—is intended to gauge how well specific IRS programs are performing. IRS’ fiscal year 1997 Annual Performance Plan had 16 submission processing measures, 30 customer service measures, and 38 compliance measures. The plan also had 8 Service-wide measures for which all IRS executives and managers shared responsibility and 19 other measures specific to such areas as resource management and business operations. Appendix II provides a complete list of IRS’ 30 program-level customer service performance measures. IRS is striving to develop and implement a results-oriented performance measurement system to meet the requirements of the Results Act. However, IRS faces some difficult challenges in measuring the results of its efforts to reduce taxpayer burden through improved customer service. The key challenges we identified are (1) developing a reliable measure of taxpayer burden, including the portion that IRS can influence; (2) developing measures that can be used to compare the effectiveness of the various customer service programs; and (3) refining or developing new measures that gauge the quality of the services provided. As IRS refines its customer service measures, it must consider the costs of implementing the measures, including the costs of collecting and analyzing data over time. IRS’ taxpayer burden indicator is intended to measure taxpayers’ cost for IRS to collect $100. IRS measures taxpayer burden by using a model that estimates the time taxpayers spend on each tax form using variables such as the number of lines on a tax form. The burden estimate excludes the time and costs taxpayers face after they file their tax returns, such as responding to IRS notices and audits. Additionally, it is not linked to important IRS services to assist taxpayers in meeting their tax obligations. As a result, IRS’ burden estimates may not reflect reductions in taxpayer burden that are attributable to these services. 
The flaws in the burden measure also limit the usefulness of IRS’ mission effectiveness indicator, because burden is a key component of this indicator. IRS recognizes the limitations of its burden measure and is looking for alternatives. IRS calculates its burden indicator by using a model developed by Arthur D. Little, Inc. The model estimates the time a taxpayer spends on each tax form using variables, such as the number of lines on the form, number of words and pages in the related instruction booklet, and the number of references to the Internal Revenue Code. IRS then converts this total time estimate to dollar costs by multiplying the total time by IRS’ estimate of the value of a taxpayer’s hour. IRS’ burden model excludes the time and costs taxpayers incur after tax forms are filed, such as the time taxpayers spend inquiring about the status of a tax refund or responding to notices, examinations, or other IRS-initiated compliance activities. In a recent draft issue paper, IRS identified several other shortcomings in the model, including weaknesses in the underlying assumptions of the model. For example, the model uses the number of lines on a form to estimate the form’s burden, even though additional lines may make the form simpler or easier to understand. Also, the draft issue paper said that the 1983 data underlying the model are outdated and cited methodological errors in the development of the model. The limitations of the taxpayer burden measure affect IRS’ mission effectiveness indicator because taxpayer burden is one of the four elements of this indicator. The indicator compares total revenue collected during a fiscal year, less the cost of collecting the revenue (the sum of IRS’ budget and estimated taxpayers’ costs) with the revenue that should have been collected if all taxpayers had paid their full liability. The usefulness of IRS’ overall measure is questionable considering the limitations of the taxpayer burden measure. 
IRS recognizes the limitations of the current methodology for measuring burden and in 1995 sought help in developing an improved methodology for measuring all facets of taxpayer burden. Specifically, IRS issued a request for proposals seeking contractors to develop an approach for measuring taxpayer burden, including the burden after forms are filed. However, according to IRS, no contractors were interested in doing the work. The lack of response to IRS’ request may reflect the difficulty of measuring overall compliance burden. In our December 9, 1994, testimony before the Subcommittee on Oversight, House Committee on Ways and Means, we discussed the difficulties of measuring taxpayer burden and reported that a reliable estimate of the overall burden taxpayers incur to comply with the tax laws was not available. As a part of our study, we spoke with several business and tax professionals, who told us that the complexity of the Internal Revenue Code, compounded by the frequent changes made to the Code, is part of what makes federal tax compliance so burdensome. Recently, IRS initiated another effort to obtain a contractor to develop an improved burden measurement model and is now in the initial stages of determining contractor interest. In the short term, IRS plans to expand its current measure of taxpayer burden to include contact and enforcement burden, such as the burden taxpayers incur when responding to IRS notices, telephone calls, and audits. Despite recognizing the shortcomings in the current taxpayer burden measure, IRS has set goals for reducing burden based on the measure. It then rolls these goals up into its mission effectiveness indicator. To show progress through this indicator, IRS must reduce the number of lines on tax forms or worksheets, reduce the number of words and pages in instructions, or take actions that affect the variables in the Little model. 
However, the model does not distinguish between lines on forms that add to burden and lines that reduce the burden by making the calculation of tax liability easier. Unless additional analysis is done to assess how eliminating particular lines on forms affects burden, IRS could take actions to meet its goals that actually increase taxpayer burden. Furthermore, most IRS customer service programs have no effect on IRS’ measure of taxpayer burden. Devising a comprehensive measure to gauge the costs taxpayers incur to meet their federal tax obligations is a difficult task and offers a significant challenge for IRS. First, IRS would need to devise a means to capture the costs taxpayers incur after they file their returns. This may be difficult to do, because the costs could vary substantially depending on the circumstances of the different taxpayers. For example, providing information to support a tax return may not cost very much when compared to the cost of preparing for and responding to an audit. Second, because of the limitations of the Little model, IRS must decide whether to revise the model or to devise another means to estimate the costs taxpayers incur to prepare and file their tax returns. Third, IRS must measure the elements of burden it can influence as opposed to the burden caused by such things as changes in the tax code. A reliable taxpayer burden measure would allow IRS to make decisions on how to allocate resources to best reduce the burdens taxpayers face to meet their tax obligations. Finally, as IRS refines its taxpayer burden measure, it will be faced with devising an efficient means for collecting and analyzing the data to measure burden over time. Otherwise, the cost of measuring burden could exceed the benefits. Among other things, the Results Act requires agencies to develop and implement measures that are useful in improving program performance or in supporting policy decisionmaking. 
One way IRS can do this is to develop measures that can be used to compare the effectiveness of the different customer service programs. Our analysis of IRS’ fiscal year 1997 program-level measures for customer service points out the need for such measures, but the history of the initial contact resolution measure demonstrates the difficulty IRS faces in implementing such measures. Although IRS has three new strategic-level customer service measures for fiscal year 1998, similar to the initial contact resolution measure, two are limited to measuring telephone assistance. When taxpayers need assistance from IRS, they can, among other things, call a customer service center, visit a walk-in site, call or visit a problem resolution office, call or visit an outreach facility, or access IRS’ Internet web site. IRS’ 1997 Annual Performance Plan had 27 program-level indicators for its customer service centers and 3 for its Problem Resolution Program. However, the plan had no program-level indicators to measure the performance of the walk-in sites, education and outreach programs, and the Internet web site, even though these three sources of assistance provide a range of services to help taxpayers file their returns and otherwise comply with the tax laws and reporting requirements. One of IRS’ fiscal year 1997 strategic-level indicators for measuring its progress in improving customer service was the initial contact resolution rate. This measure is intended to gauge IRS’ progress in satisfactorily resolving all issues resulting from a taxpayer’s first inquiry to IRS—formerly known as the “one-stop service” concept. Providing one-stop service would reduce taxpayer burden and the demand for IRS services. However, since the August 1991 implementation of its one-stop service goal, IRS has often redefined the goal and the types of contacts that are counted as successful and plans to change the goal again. 
Originally, IRS’ measurement focused on account-related taxpayer inquiries at district toll-free telephone sites. In our August 1994 report, we concluded that IRS was overstating its successes for one-stop service because it was counting calls that did not fully resolve the taxpayers’ questions. We recommended that IRS develop better measures to exclude those instances where taxpayers would likely need to contact IRS again about the same matter. We also recommended that IRS measure all types of taxpayer inquiries, including all telephone contacts, service center correspondence, and walk-in inquiries. In March 1995, IRS changed the name of the measure to “initial contact resolution” and incorporated our recommendations to include correspondence and walk-in inquiries. Officials told us that IRS was establishing a new definition for fiscal year 1998 that would be limited to telephone operations, which was recommended in a recent internal audit report. The internal audit report did not address the need for measuring other types of IRS assistance, such as education and outreach and walk-in. Essentially, the report concluded that IRS’ initial contact resolution measure should be limited to telephone operations, because the inclusion of correspondence would add responses to notices that had, in the past, taken up to 60 days to resolve. An IRS official told us that the initial contact resolution measure would not include walk-ins because (1) IRS does not have a system in place to measure the rate; (2) it is very difficult to monitor walk-in contacts in a valid way without standing over the individual customer service representative; and (3) the volume is relatively small compared to telephone contacts and paper correspondence and, as a result, would not affect the measure very much. Because IRS’ customer service programs vary, without comparable measures, IRS is unable to compare the performance and effectiveness of the different customer service programs. 
Comparable measures for the customer service programs would allow IRS to monitor the performance and compare the effectiveness of the different programs. Such comparisons would assist IRS in making decisions on how to allocate resources among the different programs to maximize results. However, developing comparable measures of effectiveness will be difficult, primarily because of the range of services and options taxpayers have when they need assistance from IRS. Also, IRS would need to consider the costs of collecting and analyzing the data to measure performance of the different programs. IRS added three new strategic-level measures in its September 30, 1997, Strategic Plan: (1) toll-free telephone level of access, which is intended to compare the number of calls attempted to the number of calls answered; (2) tax law accuracy rate for taxpayer inquiries, which is intended to measure the accuracy of tax law information provided to taxpayers through the toll-free telephone assistance program; and (3) customer satisfaction rates. Similar to the initial contact resolution measure, the first two measures are also limited to the telephone program, even though taxpayers have other sources, such as walk-in sites and the Internet, to obtain answers to their tax law questions. At the time of our review, IRS was in the process of determining how to measure customer satisfaction. One of IRS’ new strategic-level productivity measures for its 24 customer service centers for fiscal year 1998 is “customers successfully served per dollars expended.” Our analysis of this measure points out the need to better measure the quality of services provided. 
According to IRS’ definition, successfully served means a taxpayer received “an accurate response to a call or resolution of a case.” This definition does not consider other elements that would affect what a taxpayer may consider as successful service, such as the number of times the taxpayer called before being assisted, how long the taxpayer had to wait before being served, and the courtesy and professionalism of the assistor. As a result, the taxpayer, although served, may not believe he or she was “successfully” served. IRS’ strategic-level customer service measures have similar limitations. For example, the initial contact resolution measure is intended to gauge IRS’ performance in resolving issues resulting from a taxpayer’s first inquiry to IRS. The tax law accuracy rate measure gauges the extent to which taxpayers are provided correct answers. IRS does not measure such things as how long it took to resolve the issues, how courteous and professional the assistors were when interacting with the taxpayers, or whether the need for the contact could have been prevented. Revising measures to better gauge the quality of assistance is a major challenge for IRS. For example, developing measures of timeliness will be very difficult because of the different programs and the range of services they provide. Also, IRS would have to devise a means to capture such data. As with other measures, IRS may be faced with making trade-offs between how to refine the measures and the cost of collecting the needed data. Although statutory requirements are to be the starting point for agency mission statements, Congress, the executive branch, and other interested parties may all disagree about a given agency’s mission and goals. The Results Act seeks to address such situations by requiring federal agencies to consult with Congress and solicit the views of other stakeholders in developing their strategic plans. 
Stakeholder involvement is important to help agencies ensure that their efforts and resources are targeted at the highest priorities. Obtaining stakeholder involvement is especially important for IRS as it seeks to balance its efforts and resources between assisting taxpayers and enforcing compliance with the nation’s tax laws. Stakeholders could assist IRS in devising performance measures that would enhance IRS’ ability to make more informed decisions about how to allocate its resources between the competing demands of assistance and enforcement. IRS is striving to develop and implement a results-oriented performance measurement system to meet the requirements of the Results Act. However, IRS faces some difficult challenges as it develops and implements its efforts to reduce taxpayer burden through improved customer service. IRS will be faced with devising reliable measures that are useful in improving agency and program performance, improving accountability, or supporting policy decisionmaking. At the same time, IRS will be faced with making decisions on how to minimize the costs of collecting data and measuring results over time. IRS’ taxpayer burden measure is not a useful guide to IRS performance because it is based on flawed methodology that does not link to the burdens IRS influences and the various services it provides. Additionally, it does not measure burdens taxpayers face after they file their tax returns. As a result, most of the programs that IRS characterizes as customer service have no effect on IRS’ measure of taxpayer burden. IRS does not have a comprehensive set of customer service indicators that gauges the full range of taxpayer services. As a result, IRS is unable to compare the performance of the different customer service programs and make funding decisions based on the programs’ costs and benefits—a key goal of the Results Act. 
Developing comparable measures for the different programs will be difficult, primarily because of the range of assistance the different programs provide. Similarly, IRS’ customer service measures do not adequately measure the quality of the services taxpayers receive from IRS. Although some of the measures gauge the extent to which taxpayers’ issues are resolved or the accuracy of the information IRS provides, they do not measure such things as how long it takes IRS to resolve the issues or how courteous and professional the assistors are when interacting with the taxpayers. Revising the measures to better gauge the quality of assistance is a major challenge for IRS, primarily because of the many different programs and the range of services they provide. Also, IRS would have to devise a means to capture such data. Devising ways to measure taxpayer burden and overcoming the other limitations we identified offer significant challenges for IRS as it strives to meet the requirements of the Results Act. Not only will IRS be faced with devising consistent, results-oriented measures for a range of taxpayer services, it will also be faced with making decisions on how to minimize the costs of collecting data and measuring results over time. In doing so, IRS is also faced with balancing competing priorities. To balance these competing priorities, it is essential that IRS continue to involve those who are served by IRS—the taxpayers—as well as other stakeholders, such as Congress and the Office of Management and Budget. As IRS refines its customer service performance measures, we believe it is essential that IRS make the measures useful for managing the different customer service programs, allocating resources, improving accountability, and supporting policy decisions. 
Accordingly, as IRS refines its customer service measures, we recommend that the Commissioner of Internal Revenue direct the appropriate officials to work to develop performance indicators that cover the full range of IRS’ customer service programs. We requested comments on a draft of this report from the Commissioner of Internal Revenue or his designated representative. In a January 21, 1998, meeting, responsible IRS officials, including the Chief, Management and Administration; the National Director, Strategic Planning Division; and staff of the Executive Officer for Customer Service provided oral comments and some factual clarifications, which we incorporated in the report where appropriate. The Commissioner of IRS provided us written comments on January 23, 1998 (see app. III). He concurred with the report’s findings and recommendation. He said that IRS recognizes the critical importance of measuring customer service and is working to improve its measures, including consulting many stakeholders. He also said that IRS is working with a contractor to develop customer satisfaction surveys for all business lines that interact with the public. On January 28, 1998, after our receipt of IRS’ comments on the draft of this report, the IRS Commissioner announced a conceptual framework for a proposal to reorganize IRS to better align its activities into organizational elements serving different types of taxpayers (e.g., individuals, large corporations). Although details of this proposed reorganization are not available, and any IRS reorganization may be affected by other proposals for IRS restructuring under consideration by Congress, we note that the customer service measures discussed in this report and any IRS plans to improve them may be affected by these possible organizational changes. 
We are sending copies of this report to the Subcommittee’s Ranking Minority Member, the Chairmen and Ranking Minority Members of the House Committee on Ways and Means and the Senate Committee on Finance, various other congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix IV. Please contact me on (202) 512-9110 if you have any questions. These definitions are as stated in the Internal Revenue Service’s (IRS) September 30, 1997, Strategic Plan for fiscal years 1997 through 2002, except for minor changes we made for clarity. We did not validate these definitions. Mission Effectiveness Indicator: This compares the revenue IRS expects to collect during a fiscal year, less the cost of collecting that revenue, with the amount of revenue that IRS would collect if all tax obligations were honored. The four components of this measure are budget, total revenue, burden, and total tax liability. Budget: This is the amount of money appropriated by Congress or requested by IRS through Treasury and the Office of Management and Budget (OMB). Total revenue: This is all revenue collected by IRS, including revenue resulting from enforcement activities, but excluding refunds. Burden: This is a “monetized” estimate of the number of burden hours placed on taxpayers to meet their tax obligations. The calculation is based on a methodology developed by Arthur D. Little, Inc. Total tax liability: This is an estimate of the amount of individual income, corporate income, and employment taxes that should have been paid in a given year, if all taxes that were legally owed had been paid. 
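As described, the mission effectiveness indicator nets the cost of collection (budget) and monetized taxpayer burden out of total revenue and compares the result with the estimated true tax liability. A minimal sketch of that arithmetic follows; the strategic plan does not spell out the exact formula, so the way the components are combined and all dollar figures below are illustrative assumptions, not IRS data.

```python
def mission_effectiveness(total_revenue, budget, burden, total_tax_liability):
    """Revenue net of collection cost (budget) and monetized taxpayer burden,
    as a share of the estimated true tax liability.
    Note: an illustrative reading of the indicator, not IRS's exact formula."""
    return (total_revenue - budget - burden) / total_tax_liability

# Hypothetical figures, in billions of dollars
print(round(mission_effectiveness(1500, 8, 75, 1700), 3))  # 0.834
```

Under these hypothetical inputs, the indicator would say IRS effectively realizes about 83 percent of the estimated true tax liability after netting out its budget and taxpayer burden.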
Improve Customer Service Objective: The purpose of this objective is to better serve the public, reduce taxpayer burden, and increase public confidence in the tax administration system. IRS seeks to improve taxpayer access, resolve as many inquiries as possible on the first contact, and increase customer satisfaction. Taxpayer burden cost (in dollars) for IRS to collect $100: This ratio measures the private sector costs compared to the cost for IRS to collect $100 in net tax revenue. Net tax revenue includes all revenue collected (income, employment, estate and gift, and excise taxes) by IRS in a fiscal year, less refunds. Private sector costs cover the paperwork burden imposed on the public as a result of the federal tax reporting system administered by IRS. Private sector costs of the paperwork burden are based on the estimated time individual and business taxpayers spend keeping tax records, learning about tax laws, preparing tax forms, and sending tax forms to IRS. Taxpayer paperwork burden is converted from time to dollars by multiplying total time by the estimated value of a taxpayer’s hour. Initial contact resolution rate: This measures the successful resolution of all issues resulting from the taxpayer’s first inquiry, telephone only. Toll-free telephone level of access: This is the percentage of calls answered. The percentage is computed by comparing the number of calls attempted (demand) to the number answered in all components of the Customer Service function (Automated Collection System, Customer Service Toll-free, and the Centralized Inventory and Distribution System). Tax law accuracy rate for taxpayer inquiries: This measures the rate at which IRS’ toll-free telephone assistance program provides taxpayers accurate tax information. Customer satisfaction rates: This measure was under development at the time of our review. 
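Two of the measures defined above reduce to simple ratios. A sketch, with hypothetical inputs (neither the dollar amounts nor the call volumes are actual IRS figures):

```python
def burden_cost_per_100(private_sector_cost, net_revenue):
    """Private-sector paperwork cost incurred for every $100 of net tax
    revenue collected (both amounts in the same units)."""
    return 100 * private_sector_cost / net_revenue

def level_of_access(calls_answered, calls_attempted):
    """Percentage of attempted calls that were answered."""
    return 100 * calls_answered / calls_attempted

# Hypothetical inputs: $75 billion of taxpayer burden against $1,500 billion
# of net revenue, and 80 million calls answered of 110 million attempted.
print(burden_cost_per_100(75, 1500))    # 5.0
print(round(level_of_access(80, 110)))  # 73
```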
Increase Compliance Objective: The purpose of this objective is to encourage and assist taxpayers to voluntarily file timely and accurate returns and to pay on time and, if taxpayers do not comply, to take appropriate compliance actions. Total collection percentage: This is the ratio of total collections to total estimated true tax liability. Total net revenue collected: This is all revenue collected by IRS, including revenue resulting from enforcement activities, but excluding refunds. Increase Productivity Objective: The purpose of this objective is to continually improve operations and the quality of products and services provided to taxpayers, using systems management tools and a highly skilled work force. Budget cost to collect $100 in revenue: This ratio measures the IRS budget cost of collecting $100 in net tax revenue. Net tax revenue includes all revenue collected (income, employment, estate and gift, and excise taxes) by IRS in a fiscal year, less refunds. Customers successfully served per dollars expended: This measure calculates the average cost for IRS’ 24 customer service centers to accurately respond to a taxpayer’s inquiry. Number of calls answered: This is the number of calls accepted by the Automatic Call Distributor system, including calls where the caller chooses Tele-Tax (an interactive, self-directed system) during and after business hours. Number of assistor calls answered: This is the number of calls accepted by the Automatic Call Distributor system and answered by an assistor. Percentage of scheduled calls answered: This is the number of calls answered as a percentage of the number of calls expected to be answered by the call sites, considering the level of staffing. Level of access: This is the number of taxpayers who receive telephone assistance as a percentage of the total number of taxpayers seeking assistance. 
Number of calls answered per full-time equivalent (FTE) employee: This is the number of calls accepted by the Automatic Call Distributor system, minus calls abandoned, divided by the number of FTE employees assigned to answer taxpayers’ calls. These selected customer service program-level definitions are included in IRS’ fiscal year 1997 Annual Performance Plan. A. Carl Harris, Assistant Director Catherine H. Myrick, Evaluator-in-Charge Katherine P. Chenault, Senior Evaluator Ronald W. Jones, Evaluator
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) efforts to implement the Government Performance and Results Act (GPRA), focusing on: (1) IRS's system of performance measures; and (2) the challenges IRS faces in developing and implementing performance measures to gauge its efforts to reduce taxpayer burden through improved customer service. GAO noted that: (1) IRS is striving to develop and implement a results-oriented performance measurement system that will meet the requirements of GPRA; (2) however, IRS faces some difficult challenges in measuring the results of its efforts to reduce taxpayer burden through improved customer service; (3) IRS has a three-tiered system of performance measures; (4) at the highest level, IRS has a mission effectiveness indicator, which is intended to measure the agency's overall performance in collecting the proper amount of tax revenue at the least cost or burden to the government and the taxpayer; (5) the second level of indicators is intended to gauge IRS' progress in meeting its strategic objectives to improve customer service, increase taxpayer compliance, and increase its productivity; (6) to gauge its progress in improving customer service, IRS uses five initial indicators: (a) taxpayer burden cost for IRS to collect $100; (b) initial contact resolution rate for taxpayer inquiries; (c) toll-free telephone level of access; (d) tax law accuracy rate for taxpayer inquiries; and (e) customer satisfaction rates (being developed at the time of GAO's review); (7) the third level of indicators is intended to measure the accomplishments of specific IRS programs or operations, such as IRS' toll-free telephone operations; (8) IRS' 1997 Annual Performance Plan had 30 program-level customer service measures, which measure such things as the number of taxpayer calls answered and the average number of calls answered per full-time employee; (9) although IRS is striving to improve its overall performance 
measurement system, it faces some difficult challenges as it develops and implements performance measures to gauge its efforts to reduce taxpayer burden through improved customer service; (10) the key challenges GAO identified are: (a) developing a reliable measure of taxpayer burden; (b) developing measures that can be used to compare the effectiveness of the various customer service programs; and (c) refining or developing new measures that gauge the quality of the services provided; (11) it is important that IRS obtain stakeholder involvement to balance its efforts between assisting taxpayers and enforcing compliance with the tax laws; (12) IRS recognizes the limitations of its taxpayer burden measure and is looking for alternatives; and (13) at the same time, IRS will be faced with making decisions on how to minimize the costs of collecting data and measuring results over time.
During the tax filing season, IRS processes paper and electronically filed (e-filed) tax returns and validates key pieces of information, such as a taxpayer’s name and social security number. The overwhelming majority of returns are e-filed. Eligible taxpayers may use IRS’s Free File program to prepare and e-file their federal tax returns online for free. In addition to return processing, IRS offers the following services: Telephone service for tax law and account questions: Taxpayers can speak with an IRS assistor to obtain information about their accounts or to ask tax law questions. Taxpayers can also listen to recorded tax information using automated telephone menus. In 2010, we recommended that IRS determine a telephone standard based on the quality of service provided by comparable organizations, what matters most to the customer, and the resources required to achieve this standard based on input from Congress and other stakeholders. IRS has not implemented this recommendation, saying its current process of developing a planned level of telephone service takes into consideration many factors, including its resource availability and assumptions about call demand. We noted, however, that such a standard would allow IRS to communicate to Congress what it believes constitutes good service. Furthermore, since 2010, the IRS Oversight Board—an independent body charged to provide IRS with long-term guidance and direction—has said that an acceptable level of service should be about 80 percent, but IRS has yet to set such a standard. Correspondence: IRS assistors are also responsible for responding to paper correspondence from taxpayers. IRS tries to respond to paper correspondence within 45 days of receipt; otherwise, such correspondence is considered “overage.” Minimizing overage correspondence is important because delayed responses may prompt taxpayers to write again or call. According to IRS, the top three reasons taxpayers write relate to balance due payoffs, penalty abatements, and miscellaneous account inquiries. 
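The 45-day correspondence rule above amounts to a simple date comparison. A minimal sketch (the 45-day threshold comes from the report; the dates and the helper name `is_overage` are illustrative):

```python
from datetime import date

OVERAGE_DAYS = 45  # IRS's response window for paper correspondence

def is_overage(received: date, as_of: date) -> bool:
    """True when correspondence has gone unanswered for more than the
    45-day window and is therefore counted as 'overage'."""
    return (as_of - received).days > OVERAGE_DAYS

# Hypothetical receipt dates, not actual IRS correspondence data
print(is_overage(date(2014, 1, 2), date(2014, 3, 1)))   # True (58 days old)
print(is_overage(date(2014, 1, 2), date(2014, 2, 10)))  # False (39 days old)
```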
Online services: IRS’s website is a low-cost method for providing taxpayers with basic interactive tools to, for example, check refund status, view and print transcripts, make payments, and apply for installment agreements. In response to recommendations in our April 2013 report, IRS said that a long-term online strategy for improving web services would be completed in February 2015. Face-to-face assistance: Taxpayers can obtain face-to-face assistance at IRS’s 390 Taxpayer Assistance Centers (TACs), also known as walk-in sites, or at more than 13,000 sites staffed by volunteer partners. At TACs, IRS staff answer basic tax law questions, review and adjust taxpayer accounts, take payments, authenticate Individual Taxpayer Identification Number (ITIN) applicants, and assist identity theft victims. At the sites staffed by volunteers, taxpayers can receive return preparation assistance as well as financial literacy information. In 2012, we reported that despite regularly realizing efficiency gains, IRS was struggling to provide quality services to taxpayers. We showed that increases in the demand for services had offset the efficiency gains and that, unless IRS made tough choices about what services to provide, performance would likely continue to suffer. For fiscal year 2014, IRS reduced or eliminated certain telephone and walk-in services. IRS officials reported they chose these reductions and eliminations because taxpayers had other options for these services. Specifically, IRS took the following actions: 1. limited telephone assistance to only basic tax law questions during the filing season and reassigned assistors to work account-related inquiries; 2. launched the “Get Transcript” tool, which allows taxpayers to obtain a viewable and printable transcript on IRS.gov, and redirected taxpayers to automated tools for additional guidance; 3. 
redirected refund-related inquiries to automated services and did not answer refund inquiries until 21 days after a tax return was filed electronically or 6 weeks after a return was filed by paper (unless the automated service directed the taxpayer to contact IRS); 4. limited access to the Practitioner Priority Service line to only those practitioners working tax account issues; 5. limited live assistance and redirected requests for domestic employer identification numbers to IRS’s online tool; and 6. eliminated free return preparation and reduced other services at IRS’s walk-in sites. To address challenges including the requirements of PPACA and other responsibilities, IRS has recently established an agency-wide risk management program office. While IRS previously had a process to manage risk as part of the annual filing season, the agency is now standing up a process to address enterprise risk. Risk management is a tool for assessing risks, evaluating alternative management approaches, deciding which approaches to take, and then implementing and monitoring the management steps. The goal is to develop a mechanism that informs agency officials and decision makers of potential risks and evaluates alternative countermeasures to reduce risk, along with their associated costs. IRS’s ability to identify problems and address them with countermeasures will be crucial to having a successful filing season in 2015. IRS delayed the beginning of the 2014 filing season because the government shutdown compressed the agency’s preparation time. Despite this delay, IRS officials and stakeholders—such as large tax preparation firms—reported relatively smooth processing, in part because there were fewer tax law changes, which resulted in fewer system and form updates compared to previous years. 
As of September 26, 2014, IRS had achieved an 85 percent e-file rate for individual returns and processed 7 percent fewer paper returns compared to last year (see figure 1). Compared to 2009, paper returns have fallen by 51 percent, from 45 million to 22 million. See appendix I for additional data on return and refund processing. As we have previously reported, continued increases in electronic filing are important because they allow taxpayers to receive refunds faster, are less prone to transcription and other errors, and reduce IRS’s costs. Between fiscal years 2009 and 2014, as seen in figure 1, the number of FTEs devoted to processing decreased by approximately 24 percent—from 11,360 in 2009 to 8,626 in 2014. Since 2003, IRS has reduced the number of paper processing sites from eight to three: Fresno, California; Kansas City, Missouri; and Austin, Texas. According to IRS officials, this consolidation has been a key element in the agency’s ongoing program to streamline operations, improve customer satisfaction, and achieve savings through reductions in rent and labor costs. Between fiscal years 2009 and 2014, as shown in figure 2, call volume to IRS’s taxpayer service lines varied and was lowest in 2014. IRS attributed 2014’s overall decline in call volume in part to smooth tax return and refund processing, which resulted in fewer phone calls about return errors and delayed refunds. IRS also attributed the decline to its efforts to limit or eliminate assistor-based services and direct taxpayers to self-service options. See appendix III for call volume and telephone service performance and goals since fiscal year 2009. IRS significantly reduced FTEs devoted to answering telephones—from about 9,300 to about 6,900—a 26 percent decrease. IRS answered about 41 percent more calls using automated assistance. Calls answered by IRS assistors fell to their lowest level in 5 years. 
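The percentage declines cited above can be checked with simple arithmetic on the report’s figures:

```python
def pct_decline(old, new):
    """Percentage decline from old to new, rounded to a whole percent."""
    return round(100 * (old - new) / old)

# Figures cited above: paper returns fell from 45 million (2009) to
# 22 million (2014); processing FTEs fell from 11,360 to 8,626.
print(pct_decline(45, 22))       # 51
print(pct_decline(11360, 8626))  # 24
```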
Answering as many calls as possible through automation is a significant efficiency gain because IRS estimates that it costs 38 cents per call to provide an automated answer, but about $42 per call to use a live assistor—a 27 percent increase in the assistor cost over 2013 (see appendix IV). Its cost per call grew over the past year partly because the number of calls answered declined more than the number of FTEs assigned to answer those calls. According to IRS officials, fewer calls were answered using live assistance because of efforts to reduce assistor-based services and because there were fewer PPACA-related calls than expected. In addition, average handle time increased from 11.8 to 12.5 minutes per call (about 6 percent) from 2013 (see appendix IV). These increases highlight the importance of IRS efforts to drive people to other sources, such as web-based services. More than a third of calls ended with the taxpayer hanging up, receiving a busy signal, or being disconnected before reaching an assistor. Taxpayers who cannot initially reach IRS may need to spend additional time redialing and waiting for assistance. Further, while taxpayers can obtain tax law assistance through alternative sources—such as tax attorneys or tax preparation firms—taxpayers who have account questions that only IRS can answer must either wait to get assistance via telephone or pursue help from IRS through some other means, such as by sending correspondence or visiting TACs. Figure 2 illustrates aspects of IRS telephone service. As telephone staffing fell, IRS’s fiscal year 2014 performance in providing live telephone assistance—referred to as the level of service—remained low compared to recent years, at about 64 percent (see figure 3). This is considerably lower than the 82 percent level of service IRS achieved in 2005. It is also well below the 80 percent level of service considered acceptable by the IRS Oversight Board. 
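The per-call cost gap cited above ($0.38 automated versus about $42 with a live assistor) is what makes shifting calls to automation such a large efficiency gain. A sketch of the potential savings (the shifted call volume is a hypothetical input, not a figure from the report):

```python
# Per-call cost figures from the report: $0.38 automated, about $42 live.
AUTOMATED_COST = 0.38
ASSISTOR_COST = 42.00

def savings_from_shifting(calls_shifted):
    """Estimated savings when calls move from live assistors to automation."""
    return calls_shifted * (ASSISTOR_COST - AUTOMATED_COST)

# Hypothetical volume: shifting 1 million calls to automation
print(round(savings_from_shifting(1_000_000)))  # 41620000
```

At these rates, each call answered by automation rather than an assistor avoids roughly $41.62 in cost.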
Although IRS’s level of service during the filing season was about 70 percent or slightly higher, that rate is still well below past performance. IRS has requested additional funding to deliver a targeted level of service. For instance, in its fiscal year 2015 congressional justification, IRS requested a 10 percent increase in FTEs to deliver a telephone level of service of 71 percent. One of IRS’s expected outcomes of its fiscal year 2014 service changes was to improve wait time for telephone service. However, in fiscal year 2014, taxpayers had to wait more than twice as long to speak with someone as they did in fiscal year 2009, when the average wait time was about 9 minutes (see figure 3). Wait times increased in part because IRS devoted fewer FTEs to answering telephones and because average handle time increased (see appendix IV). This is important because taxpayers must call IRS or visit walk-in sites for certain account-related information that they cannot access online. For a number of years, IRS assistors have answered tax law and account-related inquiries with more than 90 percent accuracy, in part because IRS uses interactive tools that prompt and direct assistors to provide more accurate and consistent responses to taxpayers (see appendix III). This trend continued in fiscal year 2014, with a 95 percent accuracy rate for tax law inquiries and a 96 percent accuracy rate for account-related inquiries (with 90 percent confidence intervals). IRS maintained this high rate of accuracy even though the government shutdown delayed hiring and caused IRS to increase its reliance on just-in-time training. In recent years, the types of telephone calls IRS receives and answers have changed. Specifically, the number of tax law inquiries has somewhat decreased, and a slightly greater portion of calls are account related. This is due in part to efforts such as limiting the scope of tax law inquiries answered in 2014. 
The shift in the types of calls answered by IRS assistors may explain, at least in part, the increase in telephone wait times shown in figure 3. Taxpayers have alternative sources of information for tax law inquiries, such as tax software or a paid tax preparer. Taxpayers with questions about their accounts, however, may have no choice but to speak to an IRS assistor. Comparing performance data on calls answered by IRS assistors to the best in the business can help IRS understand taxpayer needs and improve service. Both Congress and the executive branch have taken action to improve customer service. The GPRA Modernization Act of 2010 (GPRAMA) requires agencies to, among other things, establish a balanced set of performance indicators to be used in measuring or assessing progress toward each performance goal, including, as appropriate, customer service. In addition, Executive Order 12862, Setting Customer Service Standards, requires that all executive departments and agencies that “provide significant services directly to the public shall provide those services in a manner that seeks to meet the customer service standard established,” which is “equal to the best in business.” Executive Order 13571, Streamlining Service Delivery and Improving Customer Service, was later issued to strengthen customer service and required agencies to develop and publish a customer service plan, in consultation with the Office of Management and Budget (OMB). Finally, OMB has issued memorandums and guidance identifying a number of actions agencies should take to improve customer service, including setting, communicating, and using customer service standards. Most recently, in July 2014, to help agency leadership focus on this issue, OMB issued guidance directing agencies to include additional customer service information with their fiscal year 2016 budget submissions. 
IRS has taken some steps toward conforming to federal customer service standards by, for example, having a suite of performance measures for its telephone and other key operations. It has also worked with its Wage and Investment (W&I) division’s research analysis group on a variety of issues, including call demand forecasting. However, we found that IRS has not systematically benchmarked its telephone (customer) service to the best in business. Specifically, IRS conducted one study that focused on its level of service, benchmarking its measures against seven private and public sector organizations and enabling it to identify options for modifying the level of service measure. However, it has not regularly compared its suite of performance measures to those used by comparable organizations. Further, it has not benchmarked its actual performance against goals achieved by other organizations with large-scale call center operations to determine whether there are opportunities to improve telephone service provided by live assistors. IRS officials cite budget constraints and difficulty in identifying organizations (other than the Social Security Administration) that are comparable in size, complexity, and uniqueness as reasons they have not systematically compared IRS’s telephone service performance against the best in business. By not making such comparisons, IRS is missing an opportunity to identify and address gaps between actual and desired service and to inform Congress about resources needed to improve the level of service provided to taxpayers. We have previously reported on IRS’s budgetary constraints. While budget cuts have resulted in fewer resources available to IRS, a better understanding of the nature and size of service gaps could help it provide the best service possible with declining resources. 
In 2010, we concluded that providing timely responses to paper correspondence is a critical part of taxpayer service because, if IRS’s responses take too long, taxpayers may write again or call IRS for additional assistance. We recommended that IRS establish a performance measure for providing timely correspondence service to taxpayers. IRS agreed and began using more detailed performance measures, including an overage timeliness measure for its correspondence. These measures indicate that the time it takes IRS to respond to correspondence has continually increased since fiscal year 2009. Taxpayers sent IRS somewhat less correspondence between fiscal years 2013 and 2014 (21 million and 20 million pieces, respectively), yet total overage stayed close to 50 percent during the same period. While total overage slightly increased, IRS devoted about 2 percent more FTEs to responding to correspondence (see figure 4). As noted earlier, IRS assistors are responsible for both telephone and correspondence duties. Consequently, IRS’s performance in responding to correspondence depends on the volume and length of telephone calls answered by assistors and the volume of work shifted to automated services. According to IRS officials, shifting calls to automated lines enabled IRS to better focus assistors’ efforts on taxpayer services that require live assistance. Because assistors also respond to correspondence, shifting calls also enabled them to devote more time to correspondence. IRS continued to make progress in directing more taxpayers to online resources and away from telephone and face-to-face services. Use of IRS’s website reached approximately 440 million visits in fiscal year 2014, 4 percent fewer than in fiscal year 2013. IRS attributes this decrease to improved website design, which allowed visitors to accomplish their goals in fewer site visits. 
Use of IRS’s website was slightly lower in fiscal year 2014 than the previous year, partly because fewer forms and publications were downloaded as a result of fewer tax law changes. For the 2014 filing season, IRS launched two new self-service web applications, Get Transcript and Direct Pay: Get Transcript allows taxpayers to request and print tax transcripts online immediately. The taxpayer must first pass IRS’s authentication process. The tool will not work for taxpayers who are filing for the first time, are victims of identity theft, or cannot remember the answers to IRS’s authentication questions, such as the street address from the last tax return filed. Taxpayers may instead request that the transcript be mailed, but must wait 5 to 10 days to receive it. Use of Get Transcript exceeded IRS’s estimates of about 9 million requests. In fiscal year 2014, taxpayers used the application to request or view 19 million transcripts. This resulted in IRS receiving 43 percent fewer requests through other channels. Direct Pay allows taxpayers to electronically pay their tax bills or make quarterly estimated tax payments directly from checking or savings accounts without any fees or preregistration. IRS reported it had processed more than 1 million payments totaling more than $1.7 billion through Direct Pay (as of September 10, 2014). IRS expected Direct Pay to help significantly reduce the millions of paper checks received each year. IRS reported that it has made progress in addressing our previous recommendations on improving its online service strategy. Specifically, IRS reported that it was in the process of developing a long-term strategy for improving web services for taxpayers, and officials expected it to be released in early 2015. A long-term comprehensive strategy for online services will help ensure IRS is maximizing the benefit to taxpayers from this investment and reducing costs in other areas, such as its telephone operations. 
See appendix V for additional information on use of IRS’s website. Face-to-face assistance at IRS’s Taxpayer Assistance Centers (TACs) and volunteer sites remains an important component of IRS’s efforts to serve taxpayers, particularly those with low incomes. As part of its service changes for 2014, IRS eliminated return preparation at TACs and redirected taxpayers to volunteer sites and Free File. In fiscal year 2014, taxpayers visited TACs 5.4 million times, a decline of about 17 percent compared to the previous year. In almost half of those visits, taxpayers received assistance with account-related inquiries. Meanwhile, IRS assigned fewer field assistance staff to TACs in fiscal years 2010 through 2013 (see figure 5). At the same time that IRS eliminated return preparation at TACs, taxpayers increased their use of volunteer sites and Free File. IRS’s 12,319 volunteer partner sites prepared a little more than 3.6 million tax returns in fiscal year 2014—a 7 percent increase from the previous year. Use of Free File also increased, although its use prior to 2014 had been decreasing (see table 1). See appendix VI for additional information on taxpayer use of TAC and volunteer site services. Our guidance on designing evaluations states that, to appropriately assess program effectiveness, outcome measures must represent the nature of the expected program benefit, cover key aspects of desired performance, and not be unduly influenced by factors outside the program’s control. In addition, to know whether a program directly resulted in the desired effect, the data collection and analysis plan must establish a link between the program and the expected result. IRS identified outcomes in the form of FTE savings and other service improvements, such as improving the telephone level of service. However, most of those outcomes did not specify measurable goals. For example, IRS planned to improve wait time but did not state a numeric goal for reducing it. 
As a result, IRS’s outcomes were not a clear representation of the nature of the expected program benefits. While IRS collected some data that it could use to evaluate effectiveness, it did not develop plans to analyze the data or track them in a way that would allow officials to draw causal connections and develop valid conclusions about the effectiveness of its 2014 service changes (see appendix VII for an assessment against our criteria and appendix VIII for our analysis of IRS’s service changes). Without measurable goals and other analyses, IRS could not identify whether specific service improvements resulted from its service changes or from other external influences on taxpayer behavior. For example, as discussed, wait time actually increased in 2014. While IRS sought to improve wait time, without setting a numeric goal that can be measured, it does not know the extent to which it fell short. Moreover, IRS does not know whether the increase in wait time resulted from external factors, or whether wait time would have increased even more without its service changes. Without such information, it will be difficult for Congress, IRS management, and others to understand the benefits and potential budget trade-offs associated with IRS’s service changes. This is important because IRS has identified additional service changes for 2015 and beyond. In addition to maintaining the 2014 service changes, as of early September IRS had proposed the following for 2015: 1. Redesign notices to clearly state why the notice was issued; whether a response is required; what action, if any, is required; and inform taxpayers about online resources and self-service tools as alternatives to calling or writing IRS. 2. Expand use of the Oral Statement Authority tool to reduce the amount of written correspondence needed to resolve penalty relief requests. 3. 
Direct taxpayers who meet the Online Payment Agreement qualifications to use a tool online (and at kiosks where available) to apply and set up installment payment agreements instead of calling or visiting IRS. 4. Reduce the volume of IRS products at TACs and community outlets, including forms, instructions, and publications that are available online at IRS.gov, and encourage taxpayers to use available online sources. 5. More heavily promote electronic payment options, such as IRS Direct Pay, as an alternative to cash, check, or money order payments made at a TAC site or by mail. These service changes are examples of IRS’s efforts to promote more technology-based services to serve the maximum number of taxpayers possible more effectively and efficiently. We have previously reported that, given the volume of taxpayers calling IRS and sending correspondence, shifting taxpayers to self-service tools, such as interactive automated telephone lines or IRS’s website, is key to improving taxpayer services. If more taxpayers use self-service options, then fewer will need to speak to an IRS assistor and IRS’s costs will fall. According to the Commissioner of Internal Revenue, implementation of new tax laws such as PPACA combined with a tight budget and the possibility of Congress passing a late package of tax extenders threatens to make 2015 “the most complicated filing season before us in a long time, if ever.” At the enterprise level, in February 2014, IRS undertook a new approach to risk management in response to management failures related to applications for tax-exempt status. In that effort, IRS established an Enterprise Risk Management (ERM) process that formalizes risk management across the organization. As part of the ERM process, IRS created several risk registers, including an enterprise risk register that consists of 15 broad emerging risk categories. The top two risks IRS identified were (1) staffing and training and (2) budget sufficiency.
IRS ranked the emerging risk categories from highest to lowest likelihood and impact. Also as part of ERM, IRS developed two other enterprise-wide risk registers, one covering PPACA and the other FATCA. In addition, IRS developed a risk register for its W&I division, which is responsible for the filing season. The PPACA, FATCA, and W&I risk registers identify risks and list management activities that might affect the filing season. IRS plans to expand risk management procedures at the process level, such as filing season operations, in the future. IRS’s ERM guidance notes that divisions and offices should determine appropriate risk management activities. Separate from the ERM process, IRS has a long-standing process to ensure filing season readiness. This process strongly mirrors our framework for risk management—including identifying risks, developing management actions, getting management concurrence, and implementing and monitoring those actions. The filing season readiness (FSR) Action Plan contains specific steps to undertake in response to critical tasks, given the budgetary constraints under which the filing season operates. While there is no universally agreed-upon set of requirements or processes for a risk management framework, we have previously developed one that can inform agency officials and decision makers of the basic components of a risk management system or serve as a stand-alone guide. Consistent with our framework, risk management activities should evaluate alternative countermeasures to reduce risk, along with their associated costs. Evaluating alternative countermeasures should include identifying specific countermeasures to reduce risk. Table 2 shows the results of our analysis of IRS enterprise risk management procedures and filing season procedures. As table 2 shows, IRS has made good progress in setting up its risk management process.
In some cases we found it was too early to assess whether the ERM process meets the criteria. For example, the status of monitoring efforts is unknown because the risk management effort is still in the early stages. However, in the alternatives evaluation stage, we found IRS has an opportunity to strengthen the ERM process. The criteria of this stage are specific countermeasures to reduce risk, use of external sources to improve decision making, and cost-benefit analysis of countermeasures. According to IRS’s own guidance, management plans should be detailed and contain the following information: all areas that could be impacted if an adverse event occurred; all activities required to effectively reduce likelihood, impact, or both; critical or due dates, external dependencies, and activity ownership; and residual risk (risk left after management activities) and actions taken to address residual risks. IRS has not developed specific countermeasures as part of its risk management activities. Instead, in many cases, the management activities rely on decision making once the adverse event occurs rather than providing an explicit course of action (see appendix IX for examples). IRS’s proposed activities lack specific countermeasures to address risks and instead emphasize opportunities to assess the adverse event as it unfolds. For example, IRS has identified delays in creating PPACA forms as a high-impact risk that may occur for the 2015 filing season, but has not identified specific countermeasures for addressing this likely event. Developing specific countermeasures would allow IRS to better address likely risks by either reducing the probability an event may occur or by managing the effects of an adverse event. Officials in W&I said they have not yet fully developed alternatives because they have focused on becoming familiar with risk management processes.
An official in IRS’s ERM office stressed the importance of prioritizing risks so as to most efficiently direct the use of resources to develop management activities. In April, IRS undertook what officials described as a “temperature check” aimed at assessing current risks and identifying additional risks to provide a high-level overview of risks within the business units, aggregated at the enterprise level. These risks were listed in various risk registers and include management activities as well as a link to the emerging risk categories. IRS officials acknowledged that the risk management strategies need further refinement. We have reported on IRS budget cuts and the uncertainty the cuts cause, but simply ranking perennial concerns such as staffing and budget may delay the development of management activities for other impending risks and lose sight of why risk management is important. The reason for risk management is to provide options to tackle risks in a budget-constrained environment. As mentioned earlier, many of the management activities listed in the risk registers envision meetings to discuss a response to an unfolding risk situation. These types of meetings should be held ahead of time, and a range of scenarios should be considered with appropriate responses. Then, when a risk situation unfolds, specific countermeasures or a roadmap would already be in place to guide the response. Without specific countermeasures, including a cost-benefit analysis of options, IRS is unable to provide clear guidance for implementing or prioritizing countermeasures, which could hamper its ability to respond to adverse events that could affect filing season operations. Since 2010, we have issued seven reports on various aspects of IRS’s filing season activities. These reports included 20 recommendations for IRS to improve filing season operations, become more efficient, and provide taxpayers with better customer service.
IRS has implemented six of the recommendations. Both taxpayers and IRS have received benefits as a result of IRS’s implementation of our recommendations. For example, IRS implemented our 2011 recommendation to offer an automated telephone line that gives taxpayers the status of their amended tax return. This telephone line provided taxpayers with faster service because they did not have to wait for a live person to assist them. The phone line also enabled IRS to reduce costs and make better use of its available resources because it lowered taxpayer demand to talk to a live assistor. Table 6 in appendix X summarizes the filing season recommendations implemented by IRS. Tables 7 and 8 in appendices XI and XII summarize the 14 recommendations IRS has yet to implement. IRS officials told us they fully agreed with seven of the recommendations, agreed in part with one, disagreed with two, and neither agreed nor disagreed with four. IRS is taking or plans to take certain actions to implement the recommendations that it agreed with or did not disagree with. Six of the 14 recommendations relate to improving web services. Specifically, in April 2013, we recommended that IRS develop a long-term strategy to improve web services provided to taxpayers (see appendix XII). Since 2008, we have raised five matters for Congress to consider that would change IRS’s ability to use its math error authority (MEA) to quickly correct errors without the need for an audit. Without a specific grant of MEA, IRS must use audit procedures to correct errors before it can issue a statutory notice of deficiency. Congress took our 2009 suggestion and provided IRS with MEA so it could automatically verify taxpayers’ compliance with payback provisions for the 2008 First-Time Homebuyer Credit. With this authority, IRS was able to adjust taxpayers’ refunds if they had not complied with the payback provision.
As a result, from fiscal years 2010 through 2013, IRS prevented over $500 million in improper refunds from being sent to taxpayers. Congress has not yet acted on the other four matters we have asked Congress to consider related to MEA. All five matters are summarized in table 9 in appendix XIII. IRS continues to struggle with providing services to taxpayers. It is caught between declining resources on one hand and increasing statutory responsibilities and growing demand from taxpayers on the other. As we have discussed for several years, IRS needs to do two things—first, it needs to ensure that available resources are utilized as effectively as possible by identifying opportunities to improve services, and second, it needs to make tough choices about which services to continue providing. One way IRS could more effectively use available resources is by benchmarking telephone service to the best in business. The importance of providing high quality customer service is driven by the requirements of GPRA, GPRAMA, executive orders, and OMB guidance and memorandums, which emphasize the relationship of customer service to agency performance and outcomes. While IRS has taken some steps to improve telephone service, it has not systematically and periodically compared its service to the best in business. As a result, IRS is not benefiting as much as it could from the roughly 7,000 FTEs it is devoting to telephone service. Further, because IRS uses the same staff for telephone and correspondence service, inefficient telephone operations also limit IRS’s ability to work paper correspondence as effectively as possible. In one effort to more effectively use resources in 2014, IRS made decisions to reduce or cut services. While its stated goal was to improve service, IRS did not identify the improvements it hoped to achieve. As a result, IRS is unable to determine the effectiveness of the changes or make informed decisions about additional service changes in 2015 and beyond.
Another way that IRS is trying to better manage its constrained resources is through its new process for identifying and managing enterprise-wide risks. If well executed, IRS’s efforts should help the agency allocate resources and take actions under conditions of uncertainty when implementing PPACA. IRS has made good progress in setting up its risk management process. While it is too early to assess some aspects of the risk management framework, there is one area where IRS could take action to better prepare itself for risks. The risk management strategies IRS identified lack specifics, which could hamper IRS’s ability to respond in the event a risk occurs. More specific countermeasures would better position IRS to either reduce the probability of an adverse event or manage the consequences if an adverse event occurs. The Commissioner of Internal Revenue should direct the appropriate officials to take the following three actions: (1) systematically and periodically compare IRS’s telephone service to the best in business to identify gaps between actual and desired performance; (2) develop outcomes that are measurable and plans to analyze service changes that allow valid conclusions to be drawn, so that information can be conveyed to Congress, IRS management, and others about the effectiveness of IRS’s service changes and the impact on taxpayers; and (3) include specific countermeasures or options in risk management plans that could guide a response when an adverse event occurs. We provided a draft of this report to the Commissioner of Internal Revenue. IRS provided written comments on a draft of the report, which are reprinted in appendix XIV. IRS also suggested technical changes to the report, which we incorporated where appropriate.
IRS disagreed with the recommendation to systematically and periodically compare its telephone service to the best in business, stating that the differences between the purposes of IRS’s telephone operations and public sector contact centers are too significant to yield useful results when broadly compared to each other. We disagree that IRS’s telephone operations cannot be compared to others. IRS notes, and we report, that the agency has conducted targeted benchmarking against the best in the business, which it believes was helpful in identifying gaps and potentially improving performance. Specifically, IRS benchmarked one measure of telephone service—level of service—to both private and public sector organizations, which allowed it to identify options for modifying that measure. However, IRS uses more than one measure to more fully evaluate its telephone performance. Benchmarking all of those measures alongside each other (and potentially others it does not currently use) to the best in the business could help improve taxpayer service. Further, as we note in our report, the criteria allow for comparison to private organizations providing analogous services, not just those that are exactly comparable. Comparisons of telephone service to the best in business can help inform Congress about resources needed to improve the level of service provided to taxpayers. Accordingly, we believe this recommendation remains valid and should be implemented. IRS agreed with our recommendation to develop outcomes that are measurable and analyze service changes to allow for valid conclusions to be drawn. To accomplish this, IRS said that it will develop projections to assess the effect of service changes during the 2015 filing season, allowing it to determine the amount of resources that are reallocated as a result of the service changes and to assess the taxpayer experience and address opportunities for improvement.
IRS reported it anticipates completing this analysis by the end of fiscal year 2015. IRS did not state whether it agreed or disagreed with our recommendation to include specific countermeasures in its risk management plans. Our report acknowledges the progress that IRS has made implementing its ERM program. For the risk registers we reviewed, IRS had assessed the probability and impact of the risks listed, meaning that it had prioritized its risks; however, it had not developed specific countermeasures. Without countermeasures specifically articulated in risk management plans, risk assessment becomes a fruitless exercise, and IRS’s ability to respond quickly and appropriately to an adverse event may be hampered. Therefore, we believe IRS should implement our recommendation and include specific countermeasures in its risk management plans. We plan to send copies of this report to the appropriate congressional committees. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XV. Continued increases in e-filing are important because processing costs are lower for e-file returns and refunds are issued faster. For the 2014 filing season (through early March), IRS reported that, on average, it processed direct deposit refunds for e-filed returns within 9 days. The numbers in the table are the total automated, assistor answered, abandoned, busy, and disconnected calls, and do not reflect the total number of attempted calls to IRS.
IRS calculation based on representative samples of phone calls from January 1 through June 30. The percentage of calls in which assistors provided accurate answers for the call type and took the appropriate actions, with a 90 percent confidence interval. Self-service tools – assistance with tax filing and payments: taxpayer forms, publications, and instructions downloads (in millions); Interactive Tax Assistance tools (completions in thousands); Direct Pay transactions (in millions); volunteer site locator (in thousands). Self-service tools – other requests for information: Where’s My Refund? (completions in millions); Electronic Filing Personal Identification Number requests (completions in millions); Where’s My Amended Return? (in millions). To identify criteria appropriate for assessing the effectiveness of IRS’s six service changes, we reviewed our guidance on designing evaluations and assessing program effectiveness and used these criteria to assess IRS’s efforts to evaluate the effectiveness of changes to taxpayer services. Our analysis shows that IRS met some of the criteria, but did not meet the criteria for selecting outcome measures and developing an analysis plan. Most of the expected outcomes that IRS identified were not specific or measurable goals. IRS had some data that it could have used to assess the effectiveness of its 2014 service changes. We compiled this information to compare the reported results of the service changes against IRS’s expected outcomes. Patient Protection and Affordable Care Act (PPACA) risk register. IRS risk: Wage and Investment (W&I) resources diverted from working paper inventories due to PPACA phone traffic. IRS risk description: If a PPACA-related increase in phone traffic occurs, then W&I will have to divert resources from working paper inventories to manning the telephone lines, resulting in reduced levels of customer service, decreased customer and employee satisfaction, and increased costs.
Management activities and GAO analysis: research organizations to determine the optimal balance of resources to provide customer service. 4. Adjust the Compliance work plan to move some audit starts later in the year to reduce the impact of mail (GAO analysis: not specific). 5. New PPACA systems are being engineered to allow opportunities to change thresholds/work level receipts (GAO analysis: not specific). 6. W&I Service Delivery Approach to evaluate services to ensure delivery of core service and to drive service to the most efficient delivery method; evaluate trade-off analysis and what to “stop doing.” The Filing Season 2015 Approach is under development. IRS risk description: If late or retroactive PPACA legislation occurs and shortens the planning process and/or results in rework, or if additional Presidential Executive Orders are issued to delay or alter originally enacted legislation, then late changes to requirements and a delay to the filing season could result in decreased public confidence, decreased customer service and customer satisfaction, diminished chances for successful implementation, and reputational risk. Management activities and GAO analysis: an Analysis Tracking and Implementation Services representative who attends regular meetings to determine the impact of new or expiring legislation; PPACA IRS Counsel resources to support decision making and staff to support future discussion. 4. A dedicated PPACA staff identified at the PPACA Program Management Office at Headquarters, the Office of Program Coordination and Integration (OPCI) in W&I, the Electronic Invoice Presentation and Payment team, and the PPACA Joint Implementation Teams to support organizational readiness activities (GAO analysis: not specific). IRS risk description: If development of PPACA forms (other than PPACA Information Return products) does not comply with the existing timeframe of the Filing Season Readiness Approach, then Unified Work Requests, training development and delivery, and IRM updates may all be delayed; forms readiness drives all.
Management activities and GAO analysis: PPACA IRS Counsel resources to support decision making and staff to support future discussion. 3. Significant external partner relationships to socialize upcoming changes and a robust communication strategy (GAO analysis: not specific). 4. Repeatable, highly effective FSR planning process for a holistic look at upcoming events and legislation and the impact to people, process, and technology. Foreign Account Tax Compliance Act (FATCA) risk register. IRS risk description: Timing for Form 8966 submitter requirements due to the International Compliance Management Model (ICMM) schedule could prevent appropriate execution of the electronic filing Help Desk mission to successfully answer calls and inquiries. Development of the outputs from ICMM has not begun. Electronic Products and Services Support training is dependent upon the finalization of submitter responses (if there is no electronic filing of the Form 8966, then there will be no work for the Help Desk to do). As of right now, IRS is developing ICMM as the technology to enable electronic filing. The impact to electronic filing is a lack of sufficient lead time to develop and deliver training by the deployment date; IRS could have already hired and trained. Management activities and GAO analysis: 1. New hires typically come on board in October; recalls are brought back in September/October, and training is typically completed in 2 weeks. Training cannot be completed without knowing the ICMM process, and FATCA training would have to be delivered separately in a compressed time frame. Make training room reservations now (GAO analysis: not specific). 2. Train assistors to do other work if Form 8966 does not materialize (GAO analysis: not specific). 3. Reallocate inventory and revise the resource work plan (GAO analysis: not specific). IRS risk description: The ICMM program will be contracted out with a contract award date of August 8, 2014. To meet the January 1 “go-live” date, there is a significant workload to accomplish in a very short period of time once the vendor is on board.
If the ICMM system is not running by January 1, 2015, then Form 8966 will need to come through on paper, which poses a risk to Submission Processing. The impact of this would be no electronic filing of Form 8966; the form will be on paper instead, which will cause a resource constraint. Management activities and GAO analysis: 1. Dialogue on testing and integration with W&I systems and processes and the timeframe of the testing (GAO analysis: not specific). 2. The contingency plan for Submission Processing to incorporate paper processing of Form 8966 into existing work is the following: when faced with over-receipts, as a standard practice (1) receipts are monitored to determine if/when extra manpower is needed, and (2) people are recruited from other areas that may not be facing high volumes in their work at the time. 3. Obtain research on what the paper volumes would be; the projected volume is 100,000 (GAO analysis: not specific). IRS risk description: If significant changes to, and/or expansion of, the breadth and complexity of IRS’s mission continue, then IRS will be unable to either execute its core mission or implement the required changes in a complete, accurate, timely, and/or efficient manner. Management activities and GAO analysis: an Executive Steering Committee is in place to comprehensively evaluate upcoming events and legislation that impact the filing season, looking holistically at people, process, and technology for filing season 2015. 3. Significant external partner relationships to socialize upcoming changes and a robust communication strategy (GAO analysis: not specific). 4. Established quantitative and qualitative performance measures that support and reinforce the achievement of the IRS mission and overall strategic goals. 5. FATCA implementation oversight provided by OPCI (GAO analysis: not specific). IRS risk description: If service delivery options are not evaluated and adjusted to maximize return on diminishing resources, then mission creep and scope expansion may impact IRS’s ability to achieve its core mission.
Management activities and GAO analysis: 1. W&I Service Delivery Approach to evaluate services to ensure delivery of core service and to drive service to the most efficient delivery method, including evaluation of trade-off analysis and what to “stop doing.” 2. Significant external partner relationships to socialize upcoming changes and a robust communication strategy (GAO analysis: not specific). 3. Established quantitative and qualitative performance measures that support and reinforce the achievement of the IRS mission and overall strategic goals. 4. Service On Demand project to evaluate service delivery channels moving forward. Since 2010, we have issued seven reports on various aspects of IRS’s filing season operations. These reports included 20 recommendations for IRS to improve filing season operations, become more efficient, and provide taxpayers with better customer service. Listed below are the six recommendations that IRS has implemented. Since 2010, we have issued seven reports on various aspects of IRS’s filing season operations. These reports included 20 recommendations for IRS to improve filing season operations, become more efficient, and provide taxpayers with better customer service. IRS has not yet implemented 14 of the recommendations. Eight of these recommendations are described in table 7 below, and we discuss the other six recommendations to improve IRS’s web-related services in appendix XII. As shown in table 7, IRS agreed with three of these recommendations, disagreed with two, and has taken certain actions in regard to the three with which it neither agreed nor disagreed. Six of the 14 open recommendations relate to improving web services. Specifically, we recommended that IRS develop a long-term strategy to improve web services provided to taxpayers. We include these separately because we were subsequently asked to review IRS efforts to offer more interactive web services.
IRS agreed with four of these recommendations, partially agreed with developing business cases because it believes other criteria should be considered, and did not take a position on establishing numerical measures. These six unimplemented recommendations are listed in table 8. For almost a century, Congress has been expanding IRS’s math error authority (MEA) on a case-by-case basis. Currently, there are 13 situations where IRS can use MEA to make corrections to tax returns. Using MEA can save time and money for taxpayers and can reduce the need for audits to correct taxpayer errors. Since 2008, we have raised five matters for Congress to consider providing IRS with additional MEA. Congress has enacted one and has not yet acted on the other four. In addition to the individual named above, Joanna Stamatiades, Assistant Director; LaKeshia Allen-Horner; Jehan Chase; Robert Gebhart; George Guttman; Kirsten Lauber; Natalie Maddox; Mark Ryan; Erin Saunders-Rath; Angela Smith; and Elwood White made key contributions to this report.
During the filing season, IRS processes tax returns, issues refunds, and provides telephone, correspondence, online, and face-to-face service. GAO has reported that in recent years IRS has absorbed significant budget cuts and struggled to provide quality service. In response, IRS has taken steps, including eliminating some services and implementing a new risk management process. GAO assessed IRS's (1) 2014 filing season performance, including how it compares itself to best practices; (2) efforts to evaluate the effectiveness of 2014 service changes; and (3) actions to manage risk for filing season operations, among other objectives. GAO analyzed IRS documents and data, visited IRS facilities, and interviewed IRS officials and external stakeholders. The Internal Revenue Service's (IRS) processing of tax returns was timely, even though the filing season was delayed due to the 2013 government shutdown. Continued growth in e-filing allows IRS to reduce costs and issue refunds faster. Although IRS received fewer calls in 2014, the percentage of callers seeking help who received it remained low and wait times remained high compared to prior years. One way to improve taxpayer telephone service is to compare it to the best in business, as required by Congress and executive orders. However, IRS has not systematically made such a comparison for its telephone service because of budget constraints and difficulty in identifying comparable organizations, according to IRS officials. By not comparing itself to other call center operations, IRS is missing an opportunity to identify and address gaps between actual and desired service, and inform Congress about resources needed to close the gap. More efficient telephone service could help improve correspondence service because the same staff provides those services. IRS did not set numerical goals—such as a reduction in wait time—or develop a plan to assess the effects of its 2014 service changes. 
Such information would help Congress, IRS managers, and others understand the benefits and potential budget tradeoffs associated with IRS service changes. This is important because IRS has identified additional service changes for 2015 and beyond. IRS used its new enterprise-wide risk management approach to identify risks such as staffing and training. IRS has made good progress in setting up its risk management process. However, while risks were identified and countermeasures discussed, such as contingency plans and workload adjustments, most countermeasures were not specific. Without specific countermeasures identified in advance, IRS's ability to respond to adverse events may be hampered. GAO recommends IRS systematically compare telephone service to the best in business, develop measures and a plan to analyze service changes, and include specific countermeasures in risk management plans. IRS disagreed with comparing its telephone service to the best in business, stating that it (1) is not comparable to other organizations and (2) has done targeted comparisons. GAO disagrees. In GAO's view, the recommendation remains valid, and benchmarking all aspects of service to the best in business could help IRS improve its service. IRS agreed to develop measures and a plan to analyze service changes. It neither agreed nor disagreed with including specific countermeasures in its risk management plans.
Students with limited English proficiency are a diverse and complex group. They speak many languages, have a tremendous range of educational needs, and include both refugees with little formal schooling and students who are literate in their native languages. Accurately assessing the academic knowledge of these students in English is challenging. If a student responds incorrectly to a test item, it may not be clear whether the student did not know the answer or misunderstood the question because of language barriers. Title I of NCLBA requires states to administer tests in language arts and mathematics to all students in certain grades and to use these tests as the primary means of determining the annual performance of states, districts, and schools. These assessments must be aligned with the state’s academic standards—that is, they must measure how well a student has demonstrated his or her knowledge of the academic content represented in these standards. States are to show that increasing percentages of students are reaching the proficient level on these state tests over time. NCLBA also requires that students with limited English proficiency receive reasonable accommodations and be assessed, to the extent practicable, in the language and form most likely to yield accurate data on their academic knowledge. In addition, for language arts, students with limited English proficiency who have been in U.S. schools for 3 years or more must generally be assessed in English. Finally, NCLBA also created a new requirement for states to annually assess the English language proficiency of students identified as having limited English proficiency. Accurately assessing the academic knowledge of students with limited English proficiency has become more critical because NCLBA designated specific groups of students for particular focus.
These four groups are students who (1) are economically disadvantaged, (2) represent major racial and ethnic groups, (3) have disabilities, and (4) are limited in English proficiency. These groups are not mutually exclusive, so that the results for a student who is economically disadvantaged, Hispanic, and has limited English proficiency could be counted in three groups. States and school districts are required to measure the progress of all students in meeting academic proficiency goals, as well as to measure separately the progress of these designated groups. To make adequate yearly progress, each district and school must generally show that each of these groups met the state proficiency goal and that at least 95 percent of students in each group participated in these assessments. Students with limited English proficiency are a unique group under NCLBA because once they attain English proficiency they are no longer counted as part of this group, although Education has given states some flexibility in this area. Recognizing that language barriers can hinder the assessment of students who have been in the country for a short time, Education has provided some testing flexibility. Specifically, Education does not require students with limited English proficiency to participate in a state’s language arts assessment during their first year in U.S. schools. In addition, while these students must take a state’s mathematics assessment during their first year, a state may exclude their scores in determining whether it met its progress goals. Title III of NCLBA focuses specifically on students with limited English proficiency, with the purpose of ensuring that these students attain English proficiency and meet the same academic standards as other students. 
This title holds states and districts accountable for student progress in attaining English proficiency by requiring states to establish goals to demonstrate annual increases in both the number of students attaining English proficiency and the number making progress in learning English. States must establish English language proficiency standards that are aligned with a state’s academic standards in order to ensure that students are acquiring the academic language they need to successfully participate in the classroom. Education also requires that a state’s English language proficiency assessment be aligned to its English language proficiency standards. While NCLBA requires academic assessments only for students in certain grades, it requires annual English language proficiency assessments for all students with limited English proficiency, from kindergarten to grade 12. In nearly two-thirds of the 48 states for which we obtained data, students with limited English proficiency did not meet state proficiency goals in the 2003-2004 school year. Students with limited English proficiency met goals in language arts and mathematics in 17 states. In 31 states, these students missed the goals either for language arts or for both language arts and mathematics (see fig. 1). In 21 states, the percentage of proficient students in this group was below both the mathematics and the language arts proficiency goals. We found that the percentage of elementary school students with limited English proficiency achieving proficient scores on the state’s mathematics assessment was lower than that for the total student population in 48 of 49 states that reported to Education in school year 2003-2004. We also found that, in general, a lower percentage of students with limited English proficiency achieved proficient test scores than other selected student groups. 
All of the 49 states reported that these students achieved lower rates of proficiency than white students. The performance of limited English proficient students relative to the other student groups varied. In 37 states, for example, economically disadvantaged students outperformed students with limited English proficiency, while students with disabilities outperformed these students in 14 states. Officials in the 5 states we studied reported that they have taken steps to address challenges associated with academic assessments of students with limited English proficiency. However, Education’s peer reviews of 38 states found a number of concerns in assessing these students. Our group of experts indicated that states are generally not taking the appropriate set of comprehensive steps to create valid and reliable assessments for students with limited English proficiency. To increase validity and reliability, most states offered accommodations to students, such as providing extra time to complete the test and offering native language assessments. However, offering accommodations may or may not improve the validity of test results, as research in this area is lacking. Among the 5 states we studied, officials in 4 reported following generally accepted test development procedures, while a Nebraska official reported that the state expects districts to follow such procedures. Officials in California, New York, North Carolina, and Texas told us that they try to implement the principles of universal design, which support making assessments accessible to the widest possible range of students. This is done by ensuring that instructions, forms, and questions are clear and not more linguistically complex than necessary. In addition, officials in some states reported assembling committees to review test items for bias. 
For example, when developing mathematics items, these states try to make language as clear as possible to ensure that the item is measuring primarily mathematical concepts and to minimize the extent to which it is measuring language proficiency. A mathematics word problem involving subtraction, for example, might refer to fish rather than barracuda. Officials in 3 of our study states told us they also used a statistical approach to evaluate test items for bias related to students with limited English proficiency. Education’s completed NCLBA peer reviews of 38 states found that 25 did not provide sufficient evidence on the validity or reliability of results for students with limited English proficiency. For example, in Idaho, peer reviewers commented that the state did not report reliability data for students with limited English proficiency. As of March 2007, 18 states had had their assessment systems fully approved by Education. Our group of experts indicated that states are generally not taking the appropriate set of comprehensive steps to create valid and reliable assessments for these students and identified essential steps that should be taken. These experts noted that no state has implemented an assessment program for students with limited English proficiency that is consistent with technical standards. They noted that students with limited English proficiency are not defined consistently within and across states, which is a crucial first step to ensuring reliability. If the language proficiency levels of these students are classified inconsistently, an assessment may produce results that appear inconsistent because of the variable classifications rather than actual differences in skills. Further, it appears that many states do not conduct separate analyses for different groups of limited English proficient students. 
Our group of experts indicated that the reliability of a test may be different for heterogeneous groups of students, such as students who are literate in their native language and those who are not. Further, these experts noted that states are not always explicit about whether an assessment is attempting to measure a skill alone (such as mathematics) or that skill as expressed in English. According to the group, a fundamental issue affecting the validity of a test is the definition of what is being measured. The expert group emphasized that determining the validity and reliability of academic assessments for students with limited English proficiency is complicated and requires a comprehensive collection of evidence rather than a single analysis. In addition, the appropriate combination of analyses will vary from state to state, depending on the characteristics of the student population and the type of assessment. The group indicated that states are not universally using all the appropriate analyses to evaluate the validity and reliability of test results for students with limited English proficiency. These experts indicated that some states may need assistance to conduct appropriate analyses. Finally, they indicated that reducing language complexity is essential to developing valid assessments for these students, but expressed concern that some states and test developers do not have a strong understanding of universal design principles or how to use them to develop assessments that eliminate language barriers to measuring specific skills. The majority of states offered some accommodations to try to increase the validity and reliability of assessment results for students with limited English proficiency. These accommodations are intended to permit students to demonstrate their academic knowledge, despite limited language ability. Our review of state Web sites found documentation on accommodations for 42 states. 
The number of accommodations offered varied considerably among states. The most common accommodations were allowing the use of a bilingual dictionary and reading test items aloud in English (see table 1). Some states also administered assessments to small groups of students or individuals, while others gave students extra time to complete a test. According to our expert group and our review of literature, research is lacking on what specific accommodations are appropriate for students with limited English proficiency, as well as their effectiveness in improving the validity of assessment results. A 2004 review of state policies found that few studies focus on accommodations intended to address the linguistic needs of students with limited English proficiency or on how accommodations affect the performance of students with limited English proficiency. In contrast, significantly more research has been conducted on accommodations for students with disabilities, much of it funded by Education. Because of this research disparity, our group of experts reported that some states offer accommodations to students with limited English proficiency based on those they offer to students with disabilities, without determining their appropriateness for individual students. They noted the importance of considering individual student characteristics to ensure that an accommodation appropriately addresses the needs of the student. In our survey, 16 states reported that they offered statewide native language assessments in language arts or mathematics in some grades for certain students with limited English proficiency in the 2004-2005 school year. For example, New York translated its statewide mathematics assessments into Spanish, Chinese, Russian, Korean, and Haitian-Creole. In addition, 3 states were developing or planning to develop a native language assessment. Our group of experts told us that this type of assessment is difficult and costly to develop. 
Development of a valid native language assessment involves more than a simple translation of the original test. In most situations, a process of test development and validation similar to that of the nontranslated test is recommended. In addition, the administration of native language assessments may not be practicable, for example, when only a small percentage of limited English proficient students in the state speak a particular language or when a state’s student population has many languages. Members of our expert group told us that native language assessments are generally an effective accommodation only for students in specific circumstances, such as students who are instructed in their native language or are literate in their native language. Thirteen states offered statewide alternate assessments (such as reviewing a student’s classroom work portfolio) in 2005 for certain students with limited English proficiency, as of March 2006. Our expert group noted that alternate assessments are difficult and expensive to develop, and may not be feasible because of the amount of time required for such an assessment. Members of the group also expressed concern about the extent to which these assessments are objective and comparable and can be aggregated with regular assessments. Many states implemented new English language proficiency assessments for the 2005-2006 school year to meet Education’s requirement for states to administer English language proficiency tests that meet NCLBA requirements by the spring of 2006. These assessments must allow states to track student progress in learning English. Additionally, Education requires that these assessments be aligned to a state’s English language proficiency standards. Education officials said that because many states did not have tests that met NCLBA requirements, the agency funded four state consortia to develop new assessments that were to be aligned with state standards and measure student progress. 
In the 2005-2006 school year, 22 states used assessments or test items developed by one of four state consortia, making this the most common approach taken by states. Eight states worked with test developers to augment off-the-shelf English language proficiency assessments to incorporate state standards. Officials in 14 states indicated that they are administering off-the-shelf assessments. Seven states, including Texas, Minnesota, and Kansas, created their own English language proficiency assessments. Officials in these states said they typically worked with a test developer or research organization to create the assessments. Officials in our study states and test developers we interviewed reported that they commonly apply generally accepted test development procedures to develop their assessments, but some are still in the process of documenting their validity and reliability. A 2005 review of the documentation of 17 English proficiency assessments used by 33 states found that the evidence on validity and reliability was generally insufficient. The study, which was funded by Education, noted that none of the assessments contained “sufficient technical evidence to support the high-stakes accountability information and conclusions of student readiness they are meant to provide.” Education has offered states a variety of technical assistance to help them appropriately assess students with limited English proficiency, such as providing training and expert reviews of their assessment systems. However, Education has issued little written guidance on how states are expected to assess and track the English proficiency of these students, leaving state officials unclear about Education’s expectations. 
While Education has offered states some flexibility in how they incorporate these students into their accountability systems, many of the state and district officials we interviewed indicated that additional flexibility is needed to ensure that academic progress of these students is accurately measured. Education offers support in a variety of ways to help states meet NCLBA’s assessment requirements for students with limited English proficiency. The department’s primary technical assistance efforts have included the following:

Title I peer reviews of states’ academic standards and assessment systems: During these reviews, experts review evidence provided by the state about the validity and reliability of these assessments. Education shares information from the peer review to help states address issues identified during the review.

Title III monitoring visits: Education began conducting site visits to review state compliance with Title III requirements in 2005. As part of these visits, the department reviews the state’s progress in developing English language proficiency assessments that meet NCLBA requirements.

Comprehensive centers: Education has contracted with 16 regional comprehensive centers to build state capacity to help districts that are not meeting their adequate yearly progress goals. At least 3 of these centers plan to assist individual states in developing appropriate goals for student progress in learning English. In 2005, Education also funded an assessment and accountability comprehensive center, which provides technical assistance related to the assessment of students, including those with limited English proficiency.

Ongoing technical assistance for English language proficiency assessments: Education has provided information and ongoing technical assistance to states using a variety of tools and has focused specifically on the development of the English language proficiency standards and assessments required by NCLBA. 
While providing this technical assistance, Education has issued little written guidance on developing English language proficiency assessments that meet NCLBA’s requirements and on tracking the progress of students in acquiring English. Education issued some limited nonregulatory guidance on NCLBA’s basic requirements for English language proficiency standards and assessments in February 2003. However, officials in about one-third of the 33 states we contacted expressed uncertainty about implementing these requirements. They told us that they would like more specific guidance from Education to help them develop tests that meet NCLBA requirements, generally focusing on two issues. First, some officials said they were unsure about how to align English language proficiency standards with content standards for language arts, mathematics, and science, as required by NCLBA. Second, some officials reported that they did not know how to use the different scores from their old and new English language proficiency assessments to track student progress. Without guidance and specific examples on both of these issues, some of these officials were concerned that they will spend time and resources developing an assessment that may not meet Education’s requirements. Education officials told us that they were currently developing additional nonregulatory guidance on these issues, but it had not yet been finalized. Education has offered states several flexibilities in tracking academic progress goals for students with limited English proficiency to support their efforts to develop appropriate accountability systems for these students. For example, students who have been in U.S. schools for less than a year do not have to meet the same testing requirements as other students. Another flexibility recognizes that limited English proficiency is a more transient quality than being of a particular race. 
Students who achieve English proficiency leave the group at the point when they demonstrate their academic knowledge in English, while new students with lower English proficiency are constantly entering the group (see fig. 2). Given the group’s continually changing composition, meeting progress goals may be more difficult than doing so for other student groups, especially in districts serving large numbers of these students. Consequently, Education allowed states to include, for up to 2 years, the scores of students who were formerly classified as limited English proficient when determining whether a state met its progress goals for students with limited English proficiency. Several state and local officials in our study states told us that additional flexibility would be helpful to ensure that the annual progress measures provide meaningful information about the performance of students with limited English proficiency. Officials in 4 of the states we studied suggested that certain students with limited English proficiency should be exempt from testing or have their test results excluded for longer periods than is currently allowed. Several officials voiced concern that some of these students have such poor English skills or so little previous school experience that assessment results do not provide any meaningful information. Instead, some of these officials stated that students with limited English proficiency should not be included in academic assessments until they demonstrate sufficient English proficiency. However, the National Council of La Raza, a Hispanic advocacy organization, has voiced concern that excluding too many students from a state’s annual progress measures will allow some states and districts to overlook the needs of these students. 
With respect to including the scores of students previously classified as limited English proficient for up to 2 years, officials in 2 of our 5 study states, as well as one member of our expert group, thought it would be more appropriate for these students to be counted in the limited English proficient group throughout their school careers—but only for accountability purposes. They pointed out that by keeping students formerly classified as limited English proficient in the group, districts that work well with these students would see increases in the percentage who score at the proficient level in language arts and mathematics. An Education official explained that the agency does not want to label these students as limited English proficient any longer than necessary. Education officials also noted that including all students who were formerly limited English proficient would inflate the achievement measures for this group. District officials in 4 states argued that tracking the progress of individual students in this group is a better measure of how well these students are progressing academically. Officials in one district pointed to a high school with a large percentage of students with limited English proficiency that had made tremendous progress with these students, doubling the percentage of students achieving academic proficiency. The school missed the annual progress target for this group by a few percentage points, but school officials said that the school would be considered successful if it were measured by how much individual students had improved. In response to educators and policymakers who believe such an approach should be used for all students, Education initiated a pilot project in November 2005, allowing a limited number of states to incorporate measures of student progress over time in determining whether districts and schools met their annual progress goals. We made several recommendations to Education in our July 2006 report. 
Specifically, we recommended that Education support additional research on appropriate accommodations for these students and disseminate information on research-based accommodations to states. We also recommended that Education determine what additional technical assistance states need to implement valid and reliable academic assessments for these students and provide such assistance. Further, we recommended that Education publish additional guidance with more specific information on the requirements for assessing English language proficiency and tracking student progress in learning English. Finally, we recommended that Education explore ways to provide states with additional flexibility in terms of holding states accountable for students with limited English proficiency. Education agreed with our first three recommendations and has taken a number of steps to address them. In recognition of the challenges associated with assessing students with limited English proficiency and in response to GAO’s report, Education initiated the LEP (Limited English Proficient) Partnership in July 2006. Under the partnership, Education has pledged to provide technical assistance and support to states in the development of assessment options for states to use in addressing the needs of their diverse student populations. Education’s partners in this effort include the National Council of La Raza, Mexican American Legal Defense and Educational Fund, Council of Chief State School Officers, Comprehensive Center on Assessment and Accountability, and the National Center on English Language Acquisition. All states have been invited to participate in this effort. The partnership held its first meeting in August 2006. In October 2006, officials from all the states came together to discuss areas for which they need additional technical assistance. 
As a result of these meetings, Education is supporting a variety of technical assistance projects, including the development of a framework on English language proficiency standards and assessments, the development of guides for developing native language and simplified assessments, and the development of a handbook on appropriate accommodations for students with limited English proficiency. Education officials told us that they are planning the next partnership meeting for the summer of 2007 and expect to have several of these resources available at that time. Education did not explicitly agree or disagree with our recommendation to explore additional options for state flexibility. Instead, the agency commented that it has explored and already provided various types of flexibility regarding the inclusion of students with limited English proficiency in accountability systems. However, in January 2007, Education issued a blueprint for strengthening NCLBA, which calls for greater use of growth models and the recognition within state accountability systems of schools that make significant progress in moving students toward English proficiency. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have. For further information regarding this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Harriet Ganson, Bryon Gordon, Shannon Groff, Krista Loose, Michelle St. Pierre, Sheranda Campbell, and Nagla’a El Hodiri. No Child Left Behind Act: Education’s Data Improvement Efforts Could Strengthen the Basis for Distributing Title III Funds. GAO-07-140. Washington, D.C.: December 7, 2006. No Child Left Behind Act: Education Actions Needed to Improve Local Implementation and State Evaluation of Supplemental Educational Services. GAO-06-758. Washington, D.C.: August 4, 2006. 
No Child Left Behind Act: Assistance from Education Could Help States Better Measure Progress of Students with Limited English Proficiency. GAO-06-815. Washington, D.C.: July 26, 2006. No Child Left Behind Act: States Face Challenges Measuring Academic Growth That Education’s Initiatives May Help Address. GAO-06-661. Washington, D.C.: July 17, 2006. No Child Left Behind Act: Improved Accessibility to Education’s Information Could Help States Further Implement Teacher Qualification Requirements. GAO-06-25. Washington, D.C.: November 21, 2005. No Child Left Behind Act: Education Could Do More to Help States Better Define Graduation Rates and Improve Knowledge about Intervention Strategies. GAO-05-879. Washington, D.C.: September 20, 2005. No Child Left Behind Act: Most Students with Disabilities Participated in Statewide Assessments, but Inclusion Options Could Be Improved. GAO-05-618. Washington, D.C.: July 20, 2005. Head Start: Further Development Could Allow Results of New Test to Be Used for Decision Making. GAO-05-343. Washington, D.C.: May 17, 2005. No Child Left Behind Act: Education Needs to Provide Additional Technical Assistance and Conduct Implementation Studies for School Choice Provision. GAO-05-7. Washington, D.C.: December 10, 2004. No Child Left Behind Act: Improvements Needed in Education’s Process for Tracking States’ Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.
The No Child Left Behind Act of 2001 (NCLBA) focused attention on the academic achievement of more than 5 million students with limited English proficiency. Obtaining valid test results for these students is challenging, given their language barriers. This testimony describes (1) the extent to which these students are meeting annual academic progress goals, (2) what states have done to ensure the validity of their academic assessments, (3) what states are doing to ensure the validity of their English language proficiency assessments, and (4) how the U.S. Department of Education (Education) is supporting states' efforts to meet NCLBA's assessment requirements for these students. This testimony is based on a July 2006 report (GAO-06-815). To collect the information for this report, we convened a group of experts and studied five states (California, Nebraska, New York, North Carolina, and Texas). We also conducted a state survey and reviewed state and Education documents. In nearly two-thirds of 48 states for which we obtained data, students with limited English proficiency did not meet state proficiency goals for language arts or mathematics in school year 2003-2004. Further, in most states, these students generally did not perform as well as other student groups on state mathematics tests for elementary students. Officials in our five study states reported taking steps to follow generally accepted test development procedures to ensure the validity and reliability of academic tests for these students. However, our group of experts expressed concerns about whether all states are assessing these students in a valid manner, noting that some states lack technical expertise. Further, Education's completed peer reviews of assessments in 38 states found that 25 states did not provide adequate evidence of their validity or reliability. To improve the validity of these test results, most states offer accommodations, such as a bilingual dictionary. 
However, our experts reported that research is lacking on what accommodations are effective in mitigating language barriers. Several states used native language or alternate assessments for students with limited English proficiency, but these tests are costly to develop and are not appropriate for all students. Many states implemented new English language proficiency assessments in 2006 to meet NCLBA requirements, and, as a result, complete information on their validity and reliability is not yet available. In 2006, 22 states used tests developed by one of four state consortia. Officials in our study states reported taking steps to ensure the validity of these tests. However, a 2005 Education-funded review of 17 English language proficiency tests found insufficient documentation of their validity. Education has offered a variety of technical assistance to help states assess students with limited English proficiency. However, Education has issued little written guidance to states on developing English language proficiency tests. Officials in about one-third of the 33 states we contacted told us they wanted more guidance about how to develop tests that meet NCLBA requirements. Education has offered states some flexibility in how they assess students with limited English proficiency, but officials in our study states told us that additional flexibility is needed to ensure that progress measures appropriately track the academic progress of these students. Since our report was published, Education has initiated a partnership with the states and other organizations to support the development of valid assessment options for students with limited English proficiency.
The Housing and Community Development Act of 1974 created the CDBG program to develop viable urban communities by providing decent housing and a suitable living environment and by expanding economic opportunities, principally for low- and moderate-income persons. Program funds can be used for housing, economic development, neighborhood revitalization, and other community development activities. After funds are set aside for special statutory purposes, the annual CDBG appropriation is allocated to entitlement communities and states. Entitlement communities are principal cities of metropolitan statistical areas, other metropolitan cities with populations of at least 50,000, and qualified urban counties with a population of 200,000 or more (excluding the populations of entitlement cities). Entitlement communities may carry out activities directly or may award funds to subrecipients to carry out agreed-upon activities. States distribute CDBG funds to nonentitlement localities not qualified as entitlement communities. In fiscal year 2012, Congress appropriated about $3 billion for the CDBG program, $60 million of which was set aside for Native American tribes. The remainder (about $2.9 billion) was allocated to entitlement communities, states, and insular areas. Grantees can use CDBG funds for 28 eligible activities. For reporting purposes, HUD classifies the activities into eight broad categories—acquisition, administration and planning, economic development, housing, public improvements, public services, repayments of section 108 loans, and “other” (including capacity building for nonprofit organizations and assistance to institutions of higher learning). Some of the activities that can be funded, such as loans for housing rehabilitation, generate program income for grantees that must be used to fund additional activities. There are statutory limitations on the amounts that grantees may use in two specific areas. 
Under provisions in annual appropriations laws, grantees may use no more than 20 percent of their annual grant plus program income on administration. Similarly, grantees may use no more than 15 percent of their annual grant plus program income on public service activities such as job training and crime prevention. Entitlement communities comply with these requirements by limiting the amount of funds they obligate for these activities during the program year, while states limit the amount they spend on these activities over the life of the grant. HUD has provided grantees with a variety of training classes, written guidance, and technical assistance to help them determine which activities are considered administrative and demonstrate compliance with the 20 percent administrative limit. For example, HUD has prepared a training manual that includes guidance on eligible administrative activities and instructions for showing compliance with the administrative limit. HUD has also developed video training modules on components of the CDBG program, including administrative planning and financial management. Within these materials, HUD has also provided links to relevant OMB guidance on determining program administrative costs, cost allocation, and indirect costs as well as links to relevant HUD regulations. The administrative activities subject to the 20 percent limit are separate from “activity delivery costs” that are related to carrying out specific CDBG activities. Activity delivery costs, such as staff and overhead costs linked directly to an eligible CDBG activity (e.g., economic development, housing rehabilitation), are not considered administration and are therefore not subject to the 20 percent limit. For example, if a grantee’s employees underwrite economic development loans that will be made with CDBG funds, the portion of their salaries spent on this function can be treated as costs of carrying out the economic development activity. 
Other costs that are considered activity delivery costs include the costs of printing brochures advertising the availability of housing rehabilitation loan funds and staff costs of housing rehabilitation specialists performing work write-ups and inspecting completed construction work. There is no statutory limit on the percentage of CDBG funds that may be used for eligible activity delivery costs, but such costs must be necessary and reasonable. Every 3 to 5 years, grantees must submit to HUD a strategic plan that addresses the housing, homeless, and community development needs in their jurisdictions. This plan, known as the consolidated plan, covers CDBG and three other formula grants that the grantee may receive—the HOME Investment Partnerships (HOME) Program, the Emergency Solutions Grants (ESG) Program, and the Housing Opportunities for Persons with AIDS (HOPWA) Program. Annually, entitlement communities must submit an action plan that identifies the activities they plan to undertake to meet the objectives in their strategic plans. In their annual action plans, states describe their method for distributing funds. At the end of each program year, grantees must submit to HUD an annual performance report detailing progress they have made in meeting the goals and objectives outlined in their strategic and action plans and their compliance with statutory limits. HUD staff use detailed checklists to review recipients’ strategic and annual action plans and annual performance reports. HUD’s Office of Community Planning and Development (CPD) administers the CDBG program through program offices at HUD headquarters and field offices located throughout the United States. Among other strategies, HUD field staff are to use data and reports generated through IDIS to monitor CDBG funds. Implemented in fiscal year 1996, IDIS is a management information system that consolidates planning and reporting processes across HUD’s four formula grant programs. 
Grantees are to use this system to enter information on their plans, establish projects and activities to draw down funds, and report accomplishments. Although it contains data on reported expenditures, IDIS is a reporting system and not an accounting system. Grantees are expected to use their own accounting systems in addition to IDIS to ensure proper management of funds. Information that grantees enter in IDIS is used to generate financial summary reports, which contain information on the CDBG funds available and expenditures incurred, including the percentage of funds used for low- and moderate-income persons, public services, and administration. As previously noted, grantees may use no more than 20 percent of the CDBG grant and program income received for a range of activities related to program administration. Examples of eligible administrative activities include

- managing, overseeing, and coordinating the CDBG program;
- providing local officials and citizens with information about the CDBG program;
- conducting fair housing activities;
- preparing reports and other HUD-required documents;
- preparing comprehensive plans;
- preparing community development plans;
- developing functional plans for housing, land use, urban environmental design, and economic development; and
- providing policy planning.

IDIS has 10 broad categories, or matrix codes, for recording administrative expenses (see table 1). For example, grantees are to use the general program administration code to report overall program administration, including salaries, wages, and related costs of grantee staff or others engaged in program management, monitoring, and evaluation. IDIS expenditure data show that for each of the last 11 fiscal years, grantees have recorded more than 80 percent of CDBG administrative expenses under the general program administration matrix code, which captures salaries among other things (see fig. 1). 
Our analysis showed that the second most used matrix code (ranging from about 11 to 14 percent from fiscal years 2001 to 2011) was the planning code, which captures program planning activity costs such as the development of grantees’ consolidated plans. The amount charged under the remaining matrix codes ranged from about 5 percent to 9 percent. The matrix code used most often, general program administration, is broad. HUD’s guidance indicates that this category is to include salaries but allows other general expenses to be charged to it. Therefore, using HUD’s matrix code data to determine how much of the expenses were for salaries is not possible. However, officials from all 12 grantees we interviewed told us that they primarily used their CDBG administrative funds to pay the salaries of employees who oversaw and managed the grant. They also noted that they used some funds to pay for supplies, training, travel, and planning costs. As overall CDBG allocations have decreased, the funding available to states, entitlement communities, and insular areas for administrative expenses has also fallen. Specifically, total CDBG funding for these grantees decreased from about $4.4 billion in fiscal year 2001 to about $2.9 billion in fiscal year 2012 (a reduction of about 33 percent). As a result, the amount of funding available to grantees from CDBG grants for administrative costs decreased by 33 percent in nominal dollars from fiscal years 2001 through 2012. Once adjusted for inflation, the funds available for administrative costs decreased by 47 percent. Specifically, the aggregate amount available to CDBG grantees for administrative costs for fiscal year 2012 was $590 million, down from approximately $881 million in fiscal year 2001 nominal dollars (see fig. 2). This amount represents a reduction of about $292 million in nominal dollars, or $532 million in fiscal year 2012 constant dollars. 
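The nominal versus constant-dollar comparison above can be sketched as follows. The inflation factor below is an assumed value chosen only to illustrate the arithmetic, not an official deflator; the dollar amounts are the report's rounded figures.

```python
# Illustrative sketch of the nominal vs. constant-dollar comparison.
# The inflation factor is an assumption for illustration, not an
# official deflator; dollar amounts are the rounded figures above.

admin_2001_nominal = 881        # $ millions available in FY2001
admin_2012 = 590                # $ millions available in FY2012
inflation_2001_to_2012 = 1.274  # assumed cumulative price-level factor

# Reduction measured in nominal dollars.
nominal_cut_pct = (admin_2001_nominal - admin_2012) / admin_2001_nominal * 100

# Restate the FY2001 amount in FY2012 constant dollars, then compare.
admin_2001_constant = admin_2001_nominal * inflation_2001_to_2012
real_cut_pct = (admin_2001_constant - admin_2012) / admin_2001_constant * 100
```

With these inputs, the nominal reduction comes out to about 33 percent and the constant-dollar reduction to about 47 percent, consistent with the figures above.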
The 12 grantees we interviewed reported that they had taken various steps to address this reduction in available funding, ranging from reducing staff to improving record keeping to better enable them to allocate expenses. Reducing staff. Three grantees we interviewed told us they had reduced the number of staff that administered the CDBG program. For example, officials from one grantee said that the organization had placed a moratorium on hiring CDBG staff, and representatives from another said that the organization had reduced its staff by half and hired a consultant to administer the program. Leveraging supplemental funding sources. Officials from two grantees told us they had begun paying the salaries of existing staff with non- CDBG funding. For example, one grantee used its local funds to supplement the salaries of CDBG staff. Limiting the number and types of projects. Three grantees told us that they had limited the number and types of projects they administered to address the reduction in funds. For example, officials from a grantee we contacted told us that they had revisited the consolidated plan to determine which CDBG activities the city could continue to fund and had determined that it could no longer administer its housing rehabilitation program. Similarly, an official from another grantee told us that the city had reduced the number of CDBG subrecipients it administered by half and developed a strategy of selecting less administratively burdensome grants. Some grantees also said that they selected projects based on the priorities and needs of the communities they served and not on the need to reduce administrative costs, but six indicated that the administrative limit did affect the type of projects they chose. For example, one grantee told us that it did not fund economic development projects under CDBG because it did not have the capacity to administer them. 
Similarly, officials from another grantee told us that funding a planning grant could depend on whether the grantee was close to the limit. In addition, an official from another grantee told us that the staff tried to manage many of their projects through larger subrecipients as a way to mitigate administrative expenses. Sharing grant administration. An official from a grantee we interviewed told us that a group of grantees had decided to share CDBG administrative costs. According to the grantee’s website, the group coordinates activities conducted by the six participating entitlement communities, which are located in the same county. Group members have jointly prepared the 5-year consolidated plan, analyzed impediments to fair housing choice, and coordinated and collaborated in the CDBG application process and monitoring practices. According to the official, joining this group has helped participating entitlement communities save millions of dollars and made program administration more efficient. Improving record keeping. Officials from one grantee told us that they had taken steps to improve record keeping so that they could link more administrative costs to specific projects (i.e., claim more activity delivery costs) as a way of reducing administrative costs. Specifically, officials said that historically they had shared the 20 percent of funds available for administration with their subrecipients. However, in 2011 the grantee decided to end this practice because of the reduction in available administrative funding. As part of this decision, the grantee evaluated the costs each subrecipient was charging as administrative and determined that they were more aligned with the definition of an activity delivery cost. The grantee instructed the subrecipients to report these costs as activity delivery costs. 
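The effect of the record-keeping change described above (shifting costs from administration to activity delivery) can be sketched with invented figures:

```python
# Hypothetical sketch of reclassifying costs as activity delivery.
# Costs tied directly to an eligible activity (e.g., a housing
# rehabilitation specialist's inspection time) do not count against
# the 20 percent administrative limit. All figures are invented.

grant_plus_income = 2_500_000
reported_admin = 520_000   # costs initially reported as administration
reclassified = 60_000      # portion later tied to specific activities

pct_before = reported_admin / grant_plus_income * 100
pct_after = (reported_admin - reclassified) / grant_plus_income * 100
```

In this sketch, the grantee moves from 20.8 percent (over the limit) to 18.4 percent once activity delivery costs are reported separately.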
While grantees have taken a number of steps to address the decline in funds available for administering CDBG, the vast majority of the grantees we spoke with told us that a reduction in the statutory limit on using funds for administration would have a significant impact on their ability to administer and oversee the projects they implemented with CDBG funding. Nine of the grantees we contacted said that lowering the limit would require them to reduce the number of CDBG staff or limit the number and type of projects they administered. For example, officials from a grantee said that they would be required to further reduce their CDBG staff and that the reductions would have an impact on their ability to manage and monitor the program. In addition, officials from other grantees said that a reduction might require them to cut back on their planning activities or undertake relatively large construction and infrastructure projects that carried a smaller administrative burden than projects such as mortgage assistance. Limiting the type or number of projects grantees administer could reduce some of the administrative burden of the program, but some grantees and national organizations representing CDBG grantees that we interviewed pointed out that certain fixed costs were associated with administering the CDBG program. These include preparing the required plans and reports and complying with other reporting requirements. Finally, officials from three grantees told us that grantees receiving relatively small CDBG grants might need to evaluate their ability to continue administering the CDBG program if the funds that could be used for administration were further limited. Officials explained that administering a small CDBG grant might not be cost-effective because of the program’s complex reporting requirements. 
Incomplete data, technical limitations of IDIS, and reliance on field office oversight have meant that HUD has not routinely assessed compliance with the limit on the use of funds for administration across the program. A recent congressional request for HUD to provide information on compliance across the program resulted in a labor-intensive process that we determined produced unreliable results. Internal control standards state that information should be recorded and communicated in a form and within a time frame that enables management and others to carry out their internal control and other responsibilities. Specifically, internal control guidance states that operating information should be provided to managers so that they may determine whether their programs comply with applicable laws and regulations. The guidance also states that information should be presented appropriately and available on a timely basis to allow for effective monitoring and prompt action if shortcomings are found. Further, as noted previously, there has been congressional interest in reducing the limit on administration in order to direct limited funds to worthwhile community development activities and to reduce instances of waste, fraud, and abuse among CDBG grantees. We found that HUD’s process for assessing compliance with the administrative limit for CDBG funds did not allow for effective monitoring across the program or for providing data that would inform Congress about the efficient use of these funds. Annually, each grantee generates a financial summary report that contains information on the CDBG funds available and expenditures incurred, including the percentage of funds used for administration. HUD relies on this report, which shows all the information needed to complete the calculation as well as the final percentage of funds the grantee has obligated or spent on administration, to determine if grantees are within the statutory limit. 
As noted previously, entitlement communities are considered to be in compliance if their total obligations for administration during the most recent program year are no more than 20 percent of the grant and program income for that year. Meanwhile, states are in compliance if their total expenditures for administration are no more than 20 percent of each grant and program income. Grantees generate the financial summary report using data from IDIS and make any needed adjustments based on data in their internal accounting systems. The information on expenditures and program income is available in IDIS, because grantees use it to request funds for CDBG activities they administer and report their program income. However, because compliance with the administrative limit for entitlement communities is assessed based on obligations and not just on expenditures, they may need to use additional information from their own internal accounting systems to report unliquidated obligations. For example, one entitlement community we spoke with had to enter unliquidated obligations when a planning project ultimately took less staff time than anticipated and not all of the funds obligated to that activity were disbursed during that program year. According to the grantee, the unused funds eventually were reallocated to other eligible activities. In addition, according to several grantees we spoke with, grantees may need to make other adjustments for a number of reasons. For example, grantees may need to reconcile differences between data in their internal accounting systems and information in IDIS, account for program income or administrative expenditures that were not assigned to the correct program year in IDIS or entered after the program year was complete, or correct other errors in IDIS. Table 2 provides an example of how compliance is determined for entitlement communities. In this hypothetical example, the grantee is in compliance with the 20 percent limit. 
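As a parallel to the table's hypothetical, the entitlement-community calculation can be sketched as follows. All figures here are invented, and the adjustment simply mirrors the kinds of corrections described above.

```python
# Hypothetical sketch of the entitlement-community compliance test:
# total administrative obligations for the program year, divided by
# that year's grant plus program income, must not exceed 20 percent.
# All figures are invented for illustration.

def admin_obligation_pct(disbursed_admin, unliquidated_obligations,
                         adjustments, annual_grant, program_income):
    obligations = disbursed_admin + unliquidated_obligations + adjustments
    return obligations / (annual_grant + program_income) * 100

pct = admin_obligation_pct(
    disbursed_admin=430_000,          # admin funds drawn down in IDIS
    unliquidated_obligations=40_000,  # obligated but not yet disbursed
    adjustments=-20_000,              # e.g., an expense miscoded as admin
    annual_grant=2_000_000,
    program_income=500_000,
)
in_compliance = pct <= 20
```

Here total obligations of $450,000 against a $2.5 million base work out to 18 percent, so this hypothetical grantee is within the limit. For states, the analogous test uses expenditures over the life of each grant rather than a single year's obligations.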
While HUD can use a financial summary report to determine an individual grantee’s compliance with the administrative limit, two factors limit HUD’s ability to use IDIS to determine compliance across the program. First, as noted previously, the financial summary reports used to assess compliance with the limit allow for certain adjustments, but grantees are not required to save these adjustments in IDIS. In July 2006, we reported that adjustments entered in financial summary reports were not saved in IDIS. We recommended that HUD centrally maintain this information, and in 2010 HUD made changes to IDIS to allow grantees to save the adjustments in the system but did not make the “save” function automatic. Instead, grantees must manually select a “save” option in order to save their changes in the system. If they do not choose this option, the information is not saved. In addition, HUD officials told us that grantees were not required to complete their financial summary reports in IDIS. In such cases, the system would not reflect any adjustments grantees made to the amount of funds they used for administration. While HUD officials do receive copies of the financial summary reports from all grantees, they do not have electronic access to the adjustments if grantees did not save the information. Difficulties in determining compliance across the program that were associated with saving adjustments in IDIS are further exacerbated when corrections have to be made to financial summary reports. When reviewing the reports, HUD officials have found that grantees have sometimes made reporting errors. These include miscategorizing expenditures for administration, failing to report program income, and failing to enter unliquidated obligations or other adjustments. HUD allows grantees to revise the information in IDIS or on their financial summary reports if such errors occur. 
However, if a grantee makes any changes to the report and does not save the adjustments in IDIS, the updated report that HUD officials download will not reflect the actual percentage of funds used for administration. Second, HUD does not maintain the necessary information in an easily accessible format and therefore has no simple way to monitor or report on compliance with the administrative limit across the program. Currently, each financial summary report that a grantee submits is saved as a separate document in IDIS. The adjustments made to calculate compliance are not separate data elements in IDIS that can be extracted and analyzed. Rather, the information is contained only in the individual reports. As a result, HUD officials must review each individual financial summary report to determine grantees’ compliance with the limit. To report on compliance with the limit across the program, they must compile all of these reports and manually create a summary. HUD undertook such an exercise in 2011 in response to a congressional request to report on compliance across the program with the statutory limit on funds used for administration. HUD officials told us that staff at HUD’s field offices went through a laborious process of compiling a database of grantees’ financial summary reports for a single program year (2010) and manually entering the percentage that each grantee used for administration. For the purposes of this report, we also requested a summary of grantees’ compliance with the statutory limit and were provided with this same database. In addition to the fact that the exercise covered only one year, we found the results to be unreliable. During our review of the database that HUD prepared for program year 2010 and the financial summary reports that supported it, our analysis revealed a number of data entry errors. Additionally, information was either missing or could not be verified based on the source reports. 
While a number of these errors were resolved during the course of our communications with HUD, we determined the database was unreliable for the purpose of describing grantees’ compliance in program year 2010. Instead, we used the individual financial summary reports to create our own summary showing the percentages of funds used for administration that grantees reported. Our analysis showed that in program year 2010 less than 2 percent of entitlement communities exceeded the limit on administration. Almost 60 percent obligated between 15 percent and 20 percent of their funds for administration (see fig. 3). The financial summary reports HUD provided for states were for program year 2010 or 2011. We could not use these reports to describe state grantees’ compliance with the administrative limit because, as mentioned earlier, each state’s compliance is based on the percentage of each grant spent on administration rather than the percentage of each program year’s obligations, as is the case for entitlement communities. HUD officials said that they determined compliance with the limit when grants were fully spent by using data on expenditures in IDIS to generate a financial summary report. Although they did not provide specifics, HUD officials told us that technical changes would have to be made to IDIS in order to give it the capability to generate reports on compliance with the administrative limit across the program. They also told us that any changes would need to be approved by the IDIS Change Control Board and then assigned to contractors. They added that such changes would not be possible for at least 12 to 18 months because a number of IDIS updates were already planned for 2013. Further, they noted that the changes were unnecessary because the agency’s practice was to rely on field office staff to monitor and assess, at least annually, individual grantees’ compliance with the administrative limit rather than to assess compliance across the program. 
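The kind of programwide summary we constructed from the individual reports amounts to binning each grantee's reported percentage. A minimal sketch, with an invented list of percentages:

```python
# Minimal sketch of summarizing reported administrative percentages
# across grantees. The list below is invented for illustration; the
# actual analysis used each grantee's financial summary report.

reported_pcts = [8.5, 12.0, 14.9, 15.5, 17.0, 19.9, 20.0, 21.3]

below_15 = sum(1 for p in reported_pcts if p < 15)
at_or_near_limit = sum(1 for p in reported_pcts if 15 <= p <= 20)
over_limit = sum(1 for p in reported_pcts if p > 20)

share_over = over_limit / len(reported_pcts) * 100
```

Tallies of this kind are what a standard IDIS report would need to produce for HUD to track, across the program, how many grantees exceed or sit near the limit.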
Without information on annual compliance across the program, however, HUD lacks the ability to monitor grantees programwide. As noted previously, its recent attempt to do so was labor intensive and yielded unreliable results. In addition, because our analysis showed that the majority of entitlement communities in program year 2010 obligated between 15 percent and 20 percent of their funds for administration, HUD’s lack of information on compliance across the program limits its ability to determine how many may be affected by more stringent requirements. A recent proposal by a House appropriations subcommittee to reduce the percentage of CDBG program funds that may be used for administration below the traditional 20 percent has highlighted the need for HUD to analyze and report on grantees’ compliance with the current limit across the program. While grantees are responsible for complying with the limit, internal control guidance states that information needed to assess compliance with laws and regulations should be timely and reported in a manner that allows for effective monitoring. However, HUD faces difficulty in routinely reporting compliance across the program. Because of limitations in IDIS, HUD’s recent attempt to report on grantee compliance across the program for a single year required a labor-intensive process that ultimately produced data that were not reliable. Specifically, grantees are able to save in IDIS certain information needed to determine the amount they used for administration, but they are not required to do so. As a result, these data may not be readily available to HUD officials. Furthermore, system limitations prevent HUD officials from extracting and analyzing data contained within grantees’ financial summary reports that would allow HUD to assess and report on compliance across the program. 
Without this information, HUD cannot provide timely assurance that grantees are adhering to the limit or readily identify the number that are close to it. Additionally, a standard report listing the percentage that each grantee spent on administration would be a useful evaluative tool. For example, it would help determine the potential impact of any change to the statutory limit on administrative funds. In order to demonstrate compliance across the program with the statutory limit on funds that can be used for administration, the Secretary of HUD should direct the Assistant Secretary for Community Planning and Development to develop a process for generating annual reports on compliance across the program, including making any requisite changes to IDIS to better ensure that the agency has complete and analyzable data to support such reporting. We provided a copy of this draft report to HUD for its review and comment. HUD provided written comments on the draft, which are summarized below and appear in their entirety in appendix II. HUD did not specifically state whether it agreed or disagreed with our recommendation but did provide comments on some of our findings and conclusions. First, HUD responded to our conclusion that it lacks information across the program on grantees’ compliance with the 20 percent limit on administration. HUD stated that our conclusions incorrectly implied that the agency was subject to a statutory or regulatory requirement to determine cumulative grantee obligations relative to the limit on a nationwide or programwide basis. HUD noted that determining a nationwide statistic on the percentage of funds obligated for administration would reveal nothing about individual grantee compliance. However, our recommendation would not require HUD to determine cumulative obligations or a nationwide statistic on the percentage of funds obligated. 
Rather, our recommendation would require that HUD generate annually the same type of compliance report it prepared in 2011 in response to a congressional request. That report included the percentage that each CDBG grantee used for administration across the program. Our conclusion speaks not to a statutory or regulatory requirement but to general management of the program. Specifically, internal control guidance states that information needed to assess compliance with laws and regulations should be timely and reported in a manner that allows for effective monitoring and prompt action if shortcomings are found. Such an approach would also be consistent with congressional concerns about the efficient use of CDBG funds. As a result, we clarified that we were recommending that HUD report on compliance across the program. We also added language to further stress the importance of such reporting. Second, HUD said that it was unclear why we concluded that the database the agency compiled to report on compliance with the limit on administration in program year 2010 was unreliable. HUD pointed out that we used the financial summary reports contained within the database to prepare our own analysis. Our draft report did note that we had determined that these financial summary reports were reliable for the purposes of describing grantees’ compliance with the administrative limit. What we found to be unreliable was HUD’s analysis of them. For example, as noted in the scope and methodology appendix, we found a number of data entry errors in the database field that was to indicate the percentage of funds each grantee used for administration. We also were unable to verify some of the percentages that were based on the source reports. 
While a number of these errors were resolved during the course of our communication with HUD, we decided that it would be more reliable to use the financial summary reports to create our own summary analysis of the percentages of funds that grantees used for administration in program year 2010. We made no change in response to this comment. Third, HUD responded to our conclusion that almost 60 percent of entitlement communities obligated an amount that was at or close to the 20 percent limit. HUD commented that this conclusion implied that if HUD had more accurate data, it would find that some grantees were actually exceeding the 20 percent limit. Our draft report made no such linkage; instead, it noted that because our analysis showed that the majority of entitlement communities in program year 2010 were at or near the administrative limit (between 15 and 20 percent), HUD’s lack of information on compliance across the program limits any type of analysis to determine how many may be affected by more stringent requirements. HUD also observed that 15 percent was not “just under” the 20 percent limit. In response to this comment, we revised how we presented these data. Finally, HUD responded to a statement in the draft report concluding that HUD’s process for assessing compliance with the administrative limit did not allow for effective monitoring across the program. HUD provided additional information about the types of reviews that its field office staff conduct annually of state grantees’ compliance with the limit on administration. However, our point was related not to HUD’s assessment of state grantees’ compliance but to our conclusion that HUD did not routinely report on compliance across the program. As our draft report noted, HUD can determine an individual grantee’s compliance with the administrative limit but has to review each individual grantee’s report and manually create a summary of compliance across the program. 
We made no change in response to this comment. HUD also provided a technical comment, which was incorporated into the report as appropriate. We are sending copies of this report to appropriate congressional committees and the Secretary of Housing and Urban Development. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-8678 or by e-mail at shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of this report were to describe (1) the types of activities that are subject to the 20 percent limit on administration and the ways in which grantees have used their administrative funds, (2) trends in funds available to grantees for Community Development Block Grant (CDBG) administration and the impact of these trends on grantee spending, and (3) the Department of Housing and Urban Development’s (HUD) reporting on grantee compliance with the limit. To identify and describe the types of activities subject to the 20 percent limit on funds that can be used for administration, we examined and summarized relevant statutes, HUD regulations, and Office of Management and Budget guidance. We also interviewed HUD officials and a purposive, nonrandom sample of 12 CDBG grantees: the cities of Albany, Georgia; Antioch, California; Bowling Green, Kentucky; Houston, Texas; Newark, New Jersey; Oshkosh, Wisconsin; and Tustin, California; the counties of Northampton in Pennsylvania and Oakland in Michigan; and the states of Alaska, North Carolina, and Ohio. 
We used a database HUD field staff compiled that included the percentage that each grantee used for administration for program year 2010 as a starting point for selecting these grantees and used a two-step purposeful sampling procedure to select grantees with a range of experiences. The entitlement communities list contained 1,134 grantees (including Washington D.C. and territories) and the state list contained 49 states and Puerto Rico. In the first step, we selected 47 grantees. We intentionally chose entitlement communities based on the percentages obligated for administration (less than or equal to 15 percent, 16 percent to 20 percent, and greater than 20 percent) and region of the country (Midwest, Northeast, South, and West). From the entitlement community list, we selected three grantees within each of the 12 subgroups created by the combination of the four regions and three administrative spending levels. Applying these criteria resulted in 35 entitlement communities. From the state list, we randomly selected three grantees from each region for a total of 12 states. In the second step, we selected 12 grantees from the 47 entitlement communities and states initially selected. Specifically, for the 47 entitlement communities and states, we determined their fiscal year 2012 CDBG allocations and categorized them as above or below the median allocation of entitlement communities or states, based on the type of grantee. We then selected 12 grantees based on diversity in allocation amount, percentage used for administration, and region of the country. Because we used a nongeneralizable, purposive sample to select grantees, our findings cannot be used to make inferences about other grantees not in the sample. However, we determined that the selection of these grantees was appropriate for gaining an understanding of grantees’ experiences with the administrative limit and that the selection would generate valid and reliable evidence to support our work. 
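The two-step selection described above can be sketched in code. This is a minimal illustration: the four regions and three administrative-spending bands come from the text, while the grantee records, field names, and cell sizes are invented.

```python
import random

# Invented grantee records standing in for HUD's program year 2010
# database (the real entitlement community list had 1,134 grantees).
grantees = [
    {"name": f"Grantee {i}", "region": region, "admin_pct": pct}
    for i, (region, pct) in enumerate(
        (r, p)
        for r in ("Midwest", "Northeast", "South", "West")
        for p in (12.0, 18.5, 21.0)
        for _ in range(5))
]

def admin_band(pct):
    """Bucket a grantee by the percentage obligated for administration."""
    if pct <= 15:
        return "<=15%"
    if pct <= 20:
        return "16-20%"
    return ">20%"

# Step 1: stratify by region x spending band, then pick three entitlement
# communities from each of the 12 resulting subgroups.
random.seed(0)
strata = {}
for g in grantees:
    strata.setdefault((g["region"], admin_band(g["admin_pct"])), []).append(g)

step_one = [g for cell in strata.values()
            for g in random.sample(cell, min(3, len(cell)))]

# Step 2 (not shown) categorized the first-step grantees as above or below
# the median fiscal year 2012 allocation and chose the final 12 for
# diversity in allocation amount, spending percentage, and region.
```

The sketch covers only the stratification step; as described above, the second step was a judgmental selection rather than a computation.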
We found HUD’s database reliable for selecting our sample; however, as described later in this appendix, we did not find the database reliable for the purpose of describing grantees’ compliance in program year 2010. To determine how grantees have used their administrative funds, we interviewed HUD officials, reviewed the selected grantees’ annual reports to HUD for 2010 or 2011, and reviewed and summarized expenditure data from HUD’s Integrated Disbursement & Information System (IDIS). Specifically, we analyzed the expenditures reported under broad categories, or matrix codes, used to record administrative expenses to determine which codes were the most often used to report the grantees’ administrative expenditures from fiscal years 2001 through 2011. We assessed the reliability of these data by performing basic electronic testing of relevant data elements, reviewing HUD’s data dictionaries, and interviewing HUD officials knowledgeable about the data. We determined that these data were sufficiently reliable for analyzing different administrative spending categories. We also reviewed HUD’s guidance and training manuals to determine the extent to which HUD was providing guidance on funds that can be used for administration. To determine the availability of CDBG funds for administrative expenses, we interviewed HUD officials and analyzed CDBG allocation data. Specifically, we used the programwide CDBG allocation amount to calculate the aggregate amount available to grantees for administrative expenses under the 20 percent limit for each year from 2001 through 2012. To assess the reliability of these data, we reviewed information about the data and compared selected allocation amounts with other sources. We determined that the data were sufficiently reliable for estimating the amount of CDBG funding that could be available for administrative expenses. 
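The aggregate calculation described above amounts to applying the 20 percent statutory limit to each year's programwide allocation. A minimal sketch follows; the allocation figures are placeholders, not actual CDBG appropriations.

```python
# Statutory limit on planning, management, and administration.
ADMIN_LIMIT = 0.20

# Placeholder programwide allocations by fiscal year (dollars); the real
# analysis used CDBG allocation data for each year from 2001 through 2012.
allocations = {
    2001: 4_400_000_000,
    2011: 3_300_000_000,
    2012: 3_000_000_000,
}

# Aggregate amount available for administrative expenses under the limit.
available_for_admin = {year: amount * ADMIN_LIMIT
                       for year, amount in allocations.items()}

# Nominal decline over the period; the report's decline figure was
# computed in 2012 constant dollars, which also requires a deflator.
decline_pct = 100 * (1 - available_for_admin[2012] / available_for_admin[2001])
```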
We also interviewed the selected grantees, HUD officials, and representatives from national organizations representing CDBG grantees to obtain their views on whether the 20 percent limit had affected the types of activities grantees chose to fund. The national organizations we interviewed were the Council of State Community Development Agencies, National Association for County Community and Economic Development, and National Community Development Association. To determine HUD’s ability to report on grantee compliance with the administrative limit, we interviewed HUD officials and the selected grantees about how HUD verifies and reports grantees’ compliance. We then compared HUD’s reporting on grantee compliance with internal control standards for the federal government. In order to describe grantees’ compliance in program year 2010, we first attempted to use the database HUD had compiled that included the grantees’ financial summary reports—showing the calculations used to determine compliance with the administrative limit—and the percentage that each grantee used for administration in program year 2010. According to HUD officials, the database was compiled by field office staff manually in response to a congressional request. The staff entered the percentage that each grantee used for administration, as reported in the grantee’s financial summary report, and then attached the report to the database. In order to assess the reliability of the database, we compared the percentages entered in the database to the percentages in the attached financial summary reports. We found a number of data entry errors and were unable to verify some of the percentages based on the source reports. While a number of these errors were resolved during the course of our communication with HUD, we decided it would be more reliable to use the attached financial summary reports to create our own summary of the percentages of funds that grantees used for administration in program year 2010. 
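The reliability check described in this appendix, comparing each percentage entered in HUD's database against the attached financial summary report, can be sketched as follows. The records, field names, and matching tolerance are hypothetical.

```python
# Each record pairs the percentage a field office entered in the database
# with figures standing in for the attached financial summary report.
entries = [
    {"grantee": "City A", "entered_pct": 14.2,
     "obligated": 142_000, "cap_base": 1_000_000},
    {"grantee": "City B", "entered_pct": 25.0,   # data entry error
     "obligated": 180_000, "cap_base": 1_000_000},
    {"grantee": "City C", "entered_pct": 19.9,
     "obligated": 199_000, "cap_base": 1_000_000},
]

def report_pct(entry):
    """Percentage obligated for administration per the source report."""
    return 100.0 * entry["obligated"] / entry["cap_base"]

# Flag grantees whose entered percentage does not match the source report.
mismatches = [e["grantee"] for e in entries
              if abs(report_pct(e) - e["entered_pct"]) > 0.05]
```

Mismatched entries correspond to the data entry errors described above; where such errors could not be resolved, the summary was recomputed directly from the source reports.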
As previously discussed, the reports from the database were compiled by HUD field office staff. For some individual grantees, we used updated financial summary reports and information provided to us by HUD or grantees. Of the 1,134 entitlement communities included in HUD’s database, we did not report on the percentage obligated for administration for 44 entitlement communities. We excluded 36 entitlement communities because the reports HUD provided were not for program year 2010, 7 entitlement communities because HUD did not provide their reports, and 1 entitlement community because HUD provided two different versions of the report. We determined that the financial summary reports we used in our analysis were reliable for the purposes of describing grantees’ compliance with the administrative limit by reviewing documents describing how the reports were prepared and interviewing HUD officials about their oversight of the reports. As noted previously, we assessed IDIS data on administrative expenses, which are included in the financial summary reports, and determined they were reliable for our purposes. We conducted this performance audit from July 2012 to March 2013 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, A. Paige Smith, Assistant Director; Emily Chalmers; Pamela Davidson; Anar Ladhani; John McGrail; Josephine Perez; and Deena Richart made key contributions to this report.
CDBG is the federal government's principal community development program. In fiscal year 2012, Congress provided CDBG with approximately $3 billion for activities such as housing, economic development, and neighborhood revitalization. While a provision reducing the amount grantees can use for administration was considered but not enacted, GAO was required to examine grantees' use of administrative funds up to the allowed 20 percent of program funds. This report discusses (1) the types of activities subject to the 20 percent limit and grantees' use of their administrative funds, (2) trends in funds available to grantees for CDBG administration and the impact of these trends on grantees' administrative spending, and (3) HUD's reporting on compliance with the limit. GAO analyzed HUD data and program information, reviewed federal internal control standards, and interviewed HUD headquarters and field office staff and organizations representing grantees. GAO also interviewed 12 grantees selected based on grant size and location, among other things, to obtain a range of experiences. The annual appropriation for the Community Development Block Grant (CDBG) program allows grantees to use up to 20 percent of program funds for planning, management, and administration (collectively referred to as "administration"). Specifically, grantees may use these funds for a range of activities, including general management, oversight, and coordination; fair housing activities; preparing community development plans; and policy planning. The Department of Housing and Urban Development (HUD) uses broad categories, such as "general program administration" and "fair housing activities," to record grantees' administrative expenses. According to HUD's data for the last decade, grantees primarily recorded their administrative expenses under the general program administration category, which includes staff salaries. 
Grantees GAO interviewed added that they also used administrative funds to cover general administrative costs such as supplies, training, and travel. The amount available to grantees for administrative costs decreased from 2001 to 2012 by 47 percent, or about $532 million in 2012 constant dollars, as the amount of overall CDBG funding declined. Grantees GAO interviewed reported taking various steps to address this decline, including reducing the number of CDBG staff and changing the types of projects they administered. For example, one grantee determined that it could no longer administer its housing rehabilitation program. However, the vast majority of the grantees that GAO interviewed said that reducing the statutory limit on administration would negatively impact their ability to administer and oversee CDBG-funded projects. HUD does not routinely determine and report on compliance with the administrative limit across the program. HUD reviews financial summary reports--which contain information grantees enter in HUD's Integrated Disbursement & Information System (IDIS) and their own internal accounting systems--to determine individual grantees' compliance. Internal control guidance states that information needed to assess compliance with laws and regulations should be timely and reported in a manner that allows for effective monitoring. However, HUD managers cannot use IDIS to generate summaries of compliance with the administrative limit across the program. First, grantees are not required to save information from their own systems in IDIS. Second, when such data are saved, the information is not stored as separate data elements that can be extracted and analyzed. Rather, HUD officials must download each grantee's report and manually create a summary of compliance across the program. HUD's most recent attempt to assemble this information for a single year required a labor-intensive process that ultimately produced unreliable data. 
Without making changes to IDIS that allow for summaries of compliance across the program, HUD lacks the ability to monitor grantees' compliance across the program. Further, GAO's analysis of financial summary reports for program year 2010 (the most recent year available) showed that 60 percent of entitlement communities (eligible cities and counties) obligated between 15 percent and 20 percent of their funds for administration. Given these statistics, HUD could benefit from having the information it needs to determine how many grantees would be affected by reducing the administrative limit. GAO recommends that HUD develop a process for annually reporting on compliance across the program with the statutory limit on the use of funds for administration. In its response, HUD noted that it was not required to assess cumulative compliance with the limit. As discussed in the report, an annual report that summarizes individual grantee compliance is essential to effective monitoring.
OMB requires agencies to submit data on R&D programs as part of their annual budget submissions. Specifically, agencies are to provide data on investments for basic research, applied research, development, R&D facilities construction, and major equipment for R&D. OMB provides one definition of R&D that all federal agencies are to use to prepare budget estimates (see app. II for a list of federal R&D definitions). According to OMB, R&D activities comprise creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture, and society, and the use of this stock of knowledge to devise new applications. R&D is further broken down into the following three stages, as defined by OMB. Basic research is a systematic study directed toward a fuller knowledge or understanding of the fundamental aspects of phenomena and of observable facts without specific applications towards processes or products in mind. Applied research is a systematic study to gain knowledge or understanding to determine the means by which a recognized and specific need may be met. Development is a systematic application of knowledge or understanding, directed toward the production of useful materials, devices, and systems or methods, including design, development, and improvement of prototypes and new processes to meet specific requirements. There are several mechanisms by which agencies such as DHS are required to report their investments in R&D, and investments can be described in the following ways: Budget authority is the legal authorization to obligate funds. Obligations are binding agreements for the government to make a payment (outlay) for goods and services ordered or received. Outlays are payments to liquidate obligations and represent the amount actually expended. For R&D activities, OMB directs agencies to submit information on budget authority and outlays for each year. 
Because the executive branch and Congress generally make budget decisions in terms of budget authority, budget authority can provide insight into relative priorities within the annual budget process and changes in budget policies. Agencies report obligation data to OMB by object classification. Object classes are categories that present obligations for items or services purchased according to their initial purpose. For R&D-related obligations, OMB has a separate category for R&D contracts (object class 25.5). OMB also includes some advisory and assistance services for R&D in a separate object class category (object class 25.1). The other agencies conducting homeland security R&D included the Departments of Agriculture, Commerce, Defense, Energy, and Health and Human Services; the National Aeronautics and Space Administration; the Environmental Protection Agency; and the National Science Foundation, with DHS being the largest R&D entity. DHS reported $512 million in budget authority and $752 million in outlays for R&D in fiscal year 2011. The Homeland Security Act of 2002 established S&T within DHS and provided it with responsibility for, among other things: conducting basic and applied research, development, demonstration, and testing and evaluation activities relevant to any or all elements of DHS; establishing and administering the primary R&D activities of the department, including the long-term research and development needs and capabilities for all elements of the department; and coordinating and integrating all research, development, demonstration, testing, and evaluation activities of the department. S&T has six technical divisions responsible for managing S&T’s R&D portfolio and coordinating with other DHS components to identify R&D priorities and needs. As of September 2012, S&T had approximately 79 active R&D projects. Most of S&T’s R&D portfolio consists of applied and development R&D projects for its DHS customers. 
It also conducts other projects for additional customers, including other federal agencies, first responders, and industry. These divisions are the Borders and Maritime Division, Chemical/Biological Defense Division, Cyber Security Division, Explosives Division, Human Factors/Behavioral Sciences Division, and the Infrastructure Protection and Disaster Management Division. In addition, S&T’s First Responder Group (FRG) identifies, validates, and facilitates the fulfillment of first responder requirements through the use of existing and emerging technologies, knowledge products, and the development of technical standards, according to S&T FRG officials. In addition to S&T, DNDO and the Coast Guard conduct R&D activities. After its establishment in 2005, DNDO assumed responsibility from S&T for certain nuclear and radiological R&D activities. DNDO is the primary federal organization responsible for developing, acquiring, and supporting the deployment of an enhanced domestic system to detect and report on attempts to import, possess, store, transport, develop, or use an unauthorized nuclear explosive device, fissile material, or radiological material in the United States. DNDO officials estimated that they have 30 R&D projects and plan to obligate $75.9 million for R&D in fiscal year 2012. According to Coast Guard officials, the Coast Guard R&D Center conducts R&D projects to support the Coast Guard’s priorities, primarily focusing on maritime safety-related projects. As of August 2012, Coast Guard officials estimated that they have 60-70 applied research projects and have spent about $30 million on R&D in fiscal year 2012 so far. In 2010, Congress directed us to identify programs, agencies, offices, and initiatives with duplicative goals and activities within departments and government-wide and report annually to Congress. In March 2011 and February 2012, we issued our first two annual reports to Congress in response to this requirement. 
The annual reports describe areas in which we found evidence of fragmentation, overlap, or duplication among federal programs. Using the framework established in our prior work on addressing fragmentation, overlap, and duplication, we use the following definitions for the purpose of assessing DHS’s R&D efforts: Fragmentation occurs when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national interest. Overlap occurs when multiple programs have similar goals, engage in similar activities or strategies to achieve those goals, or target similar beneficiaries. Overlap may result from statutory or other limitations beyond the agency’s control. Duplication occurs when two or more agencies or programs are engaging in the same activities or providing the same services to the same beneficiaries. DHS does not know how much all of its components invest in R&D, making it difficult to oversee R&D efforts across the department. According to DHS budget officials, S&T, DNDO, and the Coast Guard are the only components that conduct R&D and, according to our analysis, they are the only components that report budget authority, obligations, or outlays for R&D activities to OMB as part of the budget process. However, we identified an additional $255 million in R&D obligations by other DHS components. Further, we found that DNDO did not report certain R&D budget data to OMB, and R&D budget accounts include a mix of R&D and non-R&D spending, further complicating DHS’s ability to identify its total investment in R&D. Our analysis of the data that DHS submitted to OMB found that DHS’s R&D obligations were underreported because other DHS components obligated money for R&D contracts that were not reported to OMB as R&D. Specifically, for fiscal year 2011, our analysis identified $255 million in obligations for R&D that DHS did not report as R&D contracts in the object classification tables. 
These obligations included DHS components providing S&T with funding to conduct R&D on their behalf and components obligating funds through contracts directly to industry, universities, or with DOE’s national laboratories for R&D. Specifically: S&T reported receiving $50 million in reimbursements from other DHS components, such as U.S. Citizenship and Immigration Services, the Secret Service, the Office of Health Affairs, Customs and Border Protection (CBP), and the Transportation Security Administration (TSA) to conduct R&D projects. These obligations were not identified as R&D in these components’ budgets. Our analysis identified 10 components, including CBP, TSA, U.S. Immigration and Customs Enforcement (ICE), and the Federal Emergency Management Agency (FEMA), that obligated approximately $55 million for R&D contracts that were not reported as R&D. Our analysis identified that DHS components, outside of S&T, DNDO, and the Coast Guard, obligated $151 million to DOE national laboratories for R&D-related projects (44 percent of total DHS spending at the national laboratories in fiscal year 2011). For example, the National Protection and Programs Directorate (NPPD) obligated $83 million to DOE national laboratories in fiscal year 2011 (see app. III for R&D obligations by component). Our analysis of the data that DHS submitted to OMB also showed that DHS’s R&D budget authority and outlays were underreported because DNDO did not properly report its R&D budget authority and outlays to OMB for fiscal years 2010 through 2013. Specifically, for fiscal years 2010 through 2013, DHS underreported its total R&D budget authority by at least $293 million and outlays for R&D by at least $282 million because DNDO did not accurately report the data. In fiscal year 2011, S&T and the Coast Guard reported $512 million in R&D budget authority and $752 million in outlays, but DNDO did not report $56 million in R&D budget authority or $80 million in outlays. 
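The object-class screen used in the analysis above can be sketched as follows. Only the use of OMB object class 25.5 (R&D contracts) as the filter comes from the text; the dollar figures and the reported totals are invented for illustration.

```python
# Invented obligation records tagged with OMB object classes; 25.5 is
# R&D contracts and 25.1 covers certain advisory and assistance services.
obligations = [
    {"component": "CBP",  "object_class": "25.5", "amount": 12_000_000},
    {"component": "TSA",  "object_class": "25.5", "amount": 8_000_000},
    {"component": "FEMA", "object_class": "25.5", "amount": 3_000_000},
    {"component": "CBP",  "object_class": "25.1", "amount": 4_000_000},
]

# What each component reported to OMB as R&D (invented; zero here mirrors
# components whose R&D contracts were not reported as R&D).
reported_rd = {"CBP": 0, "TSA": 0, "FEMA": 0}

# Sum R&D-contract obligations by component.
rd_contracts = {}
for o in obligations:
    if o["object_class"] == "25.5":
        rd_contracts[o["component"]] = (
            rd_contracts.get(o["component"], 0) + o["amount"])

# Obligations for R&D contracts in excess of what was reported as R&D.
underreported = {c: amt - reported_rd.get(c, 0)
                 for c, amt in rd_contracts.items()
                 if amt > reported_rd.get(c, 0)}
```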
DNDO officials gave us the data for the missing years as depicted in figure 1 along with S&T and Coast Guard data. DNDO budget officials told us that they are aware of the omission and confirmed that the OMB submission will be corrected in fiscal year 2013. DHS budget officials agreed that DHS underreported its R&D spending and when asked, could not provide a reason why the omission was not flagged by DHS review. In addition, within S&T, the Coast Guard, and DNDO, it is difficult to identify all R&D funding because their R&D budget accounts fund both R&D and non-R&D investments. For fiscal year 2011, we estimated that 78 percent of S&T’s Research, Development, Acquisition, & Operations account, 51 percent of DNDO’s “Research, Development, & Operations” account, and 43 percent of the Coast Guard’s R&D budget account fund R&D activities. Figure 2 provides the various S&T, DNDO, and Coast Guard budget accounts and budget activities and what percentage of each account was obligated for R&D in fiscal year 2011. DHS’s budget director recognized that spending in areas that cut across the department, like R&D, are difficult to manage and told us that DHS does not have oversight of R&D across the department. DHS is taking some steps to address this, including identifying R&D as a budget line in DHS’s proposed unified account structure, which was submitted to Congress in the fiscal year 2013 budget for approval. In 2007, we reported that appropriators rely on budget exhibits to inform the decision to authorize and appropriate funds for many programs; thus, accurate classifications of program and projects by budget activity are needed for decision makers to readily understand how projects are progressing and how money is being spent. 
Specifically regarding R&D, we reported that decision makers use the Department of Defense’s (DOD) budget reports, which detail a project’s stage of development, to assess how much is being invested in fundamental science and technology and to determine the future capabilities of U.S. military forces. DHS does not have a departmentwide policy defining R&D or guidance directing components how to report R&D activities. As a result, it is difficult to identify the department’s total investment in R&D, which limits DHS’s ability to oversee components’ R&D efforts and align them with agencywide R&D goals and priorities. DHS officials told us that DHS uses OMB’s definition of R&D, but the definition is broad and its application may not be uniform across components, and thus, R&D investments may not always be identified as R&D. For example, DHS officials told us that test and evaluation is generally not considered R&D because the purpose is to test how an existing technology fits into an operational environment. However, S&T’s Chief Financial Officer (CFO) told us that S&T reports test and evaluation activities as part of its R&D budget authority. Further, DHS officials told us that there is no distinct line between capital investments and the R&D for technology development. For example, NPPD officials told us they consider its cybersecurity system to be a capital investment, and not R&D, but they consider R&D of new technologies as an important aspect of this system. The variation in R&D definitions may contribute to the unreliability of the reporting mechanisms for R&D investments in budget development and execution, as discussed above. Standards for Internal Control in the Federal Government state that policies and mechanisms are needed to enforce management’s directives, such as the process of adhering to requirements for budget development and execution and to ensure the reliability of those and other reports for internal and external use. 
Additionally, we previously reported that agencies can enhance and sustain their collaborative efforts by defining and articulating a common outcome and establishing compatible policies, procedures, and other means to operate across agency boundaries. Such definitions could help DHS better identify its R&D investment. (DOD’s reporting requirements are outlined in the DOD Financial Management Regulation, DoD 7000.14-R, Volume 2B, Chapter 5.) DHS does produce reports on its appropriation accounts for R&D activities. However, those reports include R&D that is reported as R&D obligations in the budget process and do not provide financial details for the R&D investments made by components other than S&T, DNDO, and the Coast Guard, as described earlier in this report. The challenges DHS faces in managing its R&D efforts are similar to the challenges the department has faced in managing its acquisitions. In September 2008, we reported that DHS had not integrated the acquisition function across the department and did not have adequate oversight of all of its acquisition programs. DHS officials agreed with our findings, and the agency has taken steps to implement policies and guidance to ensure that components follow consistent acquisition practices and that a process exists to oversee acquisition programs, as outlined in Acquisition Management Directive 102-01 (AMD 102-01). Officials at DHS’s Program Accountability and Risk Management office (PARM) agreed that the department has not developed policies or guidance on how components should define and oversee R&D investments and efforts. They stated that they are in the process of updating AMD 102-01 to include additional sections pertaining to nonacquisition investments and that such R&D policy and guidance could be incorporated into such updates in the future. (See App. IV for an illustration of how R&D supports all four phases of DHS’s Acquisition Life Cycle as defined by AMD 102-01). 
Such an update could establish policy and guidance for defining R&D consistently across the department and outline the processes and procedures for overseeing R&D, which would provide more oversight into the R&D investments across the department. S&T has coordinated R&D efforts across DHS to some extent, but the department’s R&D efforts are fragmented and overlapping, which increases the risk of unnecessary duplication. We identified 35 instances of overlap among contracts that DHS components awarded for R&D projects, but did not identify instances of duplication among these contracts. Additionally, DHS has not developed a policy defining who is responsible for coordinating R&D and what processes should be used to coordinate it, and S&T does not have mechanisms to track all R&D activities at DHS. Developing a policy defining the roles and responsibilities for coordinating R&D, and establishing coordination processes and a mechanism to track all R&D projects could help DHS mitigate existing fragmentation and overlap, and reduce the risk of unnecessary duplication. The Homeland Security Act of 2002, among other things, requires that S&T coordinate and integrate all research, development, demonstration, testing, and evaluation activities within DHS and establish and administer the primary R&D activities of the department. To carry out these responsibilities, S&T developed coordination practices that fall into four general categories: (1) S&T component liaisons, (2) R&D agreements between component heads and S&T, (3) joint R&D strategies between S&T and components, and (4) various R&D coordination teams made up of S&T and component project managers. S&T officials stated that one of the primary ways that S&T mitigates the risk of overlap and duplication is through component liaisons staffed at S&T and S&T officials staffed at component agencies. 
Component liaisons became a primary coordination mechanism under the former Under Secretary who requested a Coast Guard official to work at S&T as a deputy division director. According to S&T officials, these component liaisons have been integral to S&T’s coordination efforts. As of July 2012, S&T had eight liaisons from TSA, CBP, ICE, NPPD, the Secret Service, and the Coast Guard. In addition, S&T had seven employees detailed to other components, including CBP, the Secret Service, DHS’s Office of Policy, DHS’s Tactical Communications Program Office, DNDO, and TSA, as well as two liaisons at FEMA and DHS’s Office of the Chief Financial Officer. According to S&T, liaisons help S&T maintain communication with components on R&D needs and related activities. For example, CBP requested an S&T liaison to provide technical expertise to its acquisition division. However, S&T does not have liaisons with every component. S&T signed agreements with two components—CBP and the Secret Service—to help coordinate R&D activities. Under those agreements, S&T is working with the components on high-level “Apex projects” that are intended to solve components’ strategic operational problems within 2 years. For example, S&T and the Secret Service have an Apex project called the Science and Technology Operational Research and Enhancement project that was initiated in June 2010 to provide technology solutions for the Secret Service to define, establish, and document the near- and long-term R&D strategy for the protection of national leaders, visiting heads of state and government, designated sites, and national special security events. S&T officials stated that the Apex project required development and testing of about seven technologies which the Secret Service plans to incorporate into its operations. As of July 2012, S&T officials reported that all seven technologies were in the developmental stage and will undergo testing in late 2012. 
For the CBP Apex project, S&T is overseeing the development and evaluation of new technology and infrastructure to help CBP create Secure Transit Corridors. S&T officials stated that, as of July 2012, the project was on track to be completed in 1 year. S&T officials stated that S&T can accommodate only three or four Apex projects at any given time because of the time and resources required, but that it anticipates starting future Apex projects with FEMA and ICE. As a result, these high-level partnerships are not intended to address all customer needs at DHS. Further, S&T provided us with three memorandums of agreement it entered into with DHS components as a means to coordinate R&D efforts. Specifically, S&T has agreements with CBP to develop a rapid response prototype, with the Coast Guard to develop a test bed, and with TSA to coordinate the transition of the Transportation Security Laboratory from TSA to S&T, which was completed in 2006. S&T is also currently working with TSA on an aviation security agreement that is to result in S&T supporting TSA in various areas (as outlined in the agreement) and providing technology to address capability gaps. S&T plans to initiate similar partnerships first with CBP, then with ICE and FEMA. S&T also works with DHS components to help ensure that it meets their R&D needs by signing technology transition agreements (TTA), which commit components to use the technologies S&T develops. S&T has 42 TTAs with DHS components. For example, TSA agreed to integrate automated intent detection technologies into its behavior detection screening program, to better detect unknown threats before they enter the country, once S&T successfully demonstrated that the technologies met performance requirements. Additionally, the U.S. 
Citizenship and Immigration Services (USCIS) agreed to deploy rapid DNA-based screening technologies to determine kinship for use in the refugee and asylum eligibility determination process upon S&T demonstrating that the technology meets certain performance criteria. According to S&T officials, none of these TTAs has yet resulted in a technology being transitioned from S&T to a component. In March 2011, S&T and TSA issued a joint R&D strategy for aviation security that identifies TSA’s R&D priorities. That plan was the result of an internal planning process that prioritized capability gaps and focused on the work between TSA and S&T’s Explosives and Human Factors/Behavioral Sciences Divisions. According to TSA officials, the joint R&D strategic plan does not represent a TSA-wide R&D strategy because it does not include surface transportation security capability gaps. Rather, the officials said that TSA uses the National Infrastructure Protection Plan and an R&D working group with S&T to identify those capability gaps. S&T officials stated that S&T is currently updating the joint R&D strategy with TSA. S&T is also planning to work with the Secret Service, CBP, ICE, and FEMA to build component-specific R&D strategies that are linked to component acquisition programs. S&T’s previous Under Secretary instituted the Capstone Integrated Product Teams (IPT) process to coordinate R&D efforts between S&T and components. IPTs served as S&T’s primary mechanism for coordinating R&D and consisted of members from S&T and component agencies. In S&T’s 5-year R&D plan for fiscal years 2008 through 2013, S&T identified 12 IPTs, each of which focused on a different topic and brought together decision makers from DHS components and S&T, as well as end users of technologies. Additionally, the IPT process included teams to coordinate R&D at the project level among S&T and components. 
IPTs solicited input from components to identify and address technology gaps and needs and were intended to assist operational units in making decisions about technology investments, based on S&T’s understanding of technology and the state of applicable technology solutions. For example, members of the cargo security IPT determined that the capability gap that should be addressed was enhancing cargo screening and examination systems through detecting or identifying terrorist contraband items, like drugs or illegal firearms. As a result, S&T identified CanScan, a nonintrusive inspection system, as a means of addressing that gap. We interviewed directors of the divisions responsible for coordinating R&D activities throughout the department: the Borders and Maritime Division, Chemical and Biological Division, Cyber Security Division, Explosives Division, Human Factors/Behavioral Sciences Division, and Infrastructure Protection Division. According to these directors, S&T is replacing the IPT process with new coordination teams that focus on components’ operational needs, but the new teams are not yet fully implemented, and the divisions are still using relationships established with components through the IPT process to identify component needs and coordinate R&D. Additionally, S&T still maintains IPTs with TSA on surface transportation. R&D at DHS is inherently fragmented because several components within DHS—S&T, the Coast Guard, and DNDO—were each given R&D responsibilities in law, and other DHS components may pursue and conduct their own R&D efforts as long as those activities are coordinated through S&T. Fragmentation among R&D efforts at DHS may be advantageous if the department determines that it could gain better or faster results by having multiple components engage in R&D activities toward a similar goal; however, it can be disadvantageous if those activities are uncoordinated or unintentionally overlapping or duplicative. 
To illustrate overlap and the potential for unnecessary duplication, we reviewed data on about 15,000 federal procurement contract actions coded as R&D taken by DHS components from fiscal years 2007 through 2012. See appendix I for details on our methodology for identifying overlap. Of those, we identified 50 R&D contracts issued by six DHS components—S&T, TSA, FEMA, the Office of Health Affairs (OHA), the Coast Guard, and CBP—that appeared to involve activities similar to those of another contract, and we interviewed component officials about those R&D activities. We obtained 47 of those 50 contracts and reviewed their statements of work. On the basis of that analysis and our interviews with components, we identified 35 instances of overlap in which components awarded R&D contracts that overlapped with R&D activities conducted elsewhere in the department. We also found that DHS did not have tracking mechanisms or policies to help ensure that such overlap was identified and the related activities coordinated. For example:

- S&T awarded four separate contracts to develop methods of detecting ammonium nitrate and urea nitrate for the counter-IED program, while TSA also awarded a contract to a private vendor to investigate the detection of ammonium nitrate and ammonium nitrate-based explosives. These contracts were similar in that they all addressed the detection of the same chemical.
- S&T awarded four separate contracts to develop advanced algorithms for explosives detection, while TSA also awarded contracts to develop algorithms to evaluate images for explosives. We determined that these contracts overlapped because both components were developing algorithms for explosives detection.
- S&T awarded a contract to a private vendor for support and analysis of seismic hazards, while FEMA also awarded a contract to a private vendor to develop seismic guidelines for buildings in the event of an earthquake. These contracts overlapped because both addressed seismic hazards. 
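A screening of this kind (flagging R&D contract actions from different components whose descriptions share key terms) can be sketched as follows. This is an illustrative reconstruction, not GAO's actual tooling: the field names, stage labels, and sample key words are assumptions, not the real FPDS-NG schema or the 32 key words used in the analysis.

```python
# Illustrative sketch of the contract-screening approach described above.
# Field names, stage labels, and key words are hypothetical stand-ins for
# the actual FPDS-NG data elements and the 32 key words GAO established.

INCLUDED_STAGES = {
    "basic research",
    "applied research and exploratory development",
    "advanced development",
}

KEY_WORDS = ["ammonium nitrate", "algorithm", "seismic"]  # sample only

def screen_for_overlap(actions):
    """Return {key word: components} for key words appearing in R&D
    contract descriptions from more than one component."""
    hits = {}
    for action in actions:
        if action["rd_stage"].lower() not in INCLUDED_STAGES:
            continue  # exclude procurement-like stages
        desc = action["description"].lower()
        for word in KEY_WORDS:
            if word in desc:
                hits.setdefault(word, set()).add(action["component"])
    # keep only words found in contracts from two or more components
    return {w: comps for w, comps in hits.items() if len(comps) > 1}

actions = [
    {"component": "S&T", "rd_stage": "Applied research and exploratory development",
     "description": "Detection of ammonium nitrate for counter-IED program"},
    {"component": "TSA", "rd_stage": "Basic research",
     "description": "Ammonium nitrate-based explosives detection study"},
    {"component": "FEMA", "rd_stage": "Management/support",
     "description": "Seismic guidelines support services"},  # excluded stage
]
print(screen_for_overlap(actions))  # flags "ammonium nitrate" across S&T and TSA
```

Contracts flagged this way are only candidates for overlap; as described above, each candidate still required manual review of statements of work to judge actual overlap or duplication.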
Although the contracts we selected overlapped, we determined that they were not duplicative, based on our analysis and our interviews with component officials. For example, TSA officials stated that all of the TSA R&D contracts we requested were initially awarded when TSA still conducted transportation security-related R&D and were managed by the Transportation Security Laboratory (TSL), which moved into S&T in 2006. As a result, TSA did not have oversight of those contracts. Additionally, TSA officials stated that some of the contracts may have overlapped in scope of work but were focused on different operational missions. S&T officials agreed with TSA, stating that some of this overlap occurred when TSA was still conducting R&D through TSL and when S&T did not have the level of contract oversight that it has now. FEMA officials stated that FEMA’s research projects are related to earthquake hazards, rather than to multiple hazards like S&T’s research projects. They stated that FEMA’s coordination with S&T depends on prior personal relationships rather than on an established coordination process. According to S&T officials, a process does not exist at DHS or within S&T to prevent overlap or unnecessary duplication, but relationships with components mitigate that risk. They also stated that S&T has improved interactions with components over time. For example, S&T officials stated that when CBP requested mobile radios to improve communication among its field staff, S&T knew that the Secret Service and ICE were already working in that area. To address this technology need, S&T provided a senior official to lead the Tactical Communication Team to address communication among different operational components and better coordinate those efforts. In conducting this analysis, we recognize that overlapping R&D activities across similar areas may not be problematic. 
However, the existence of overlapping R&D activities, coupled with the lack of policies and guidance defining R&D (as mentioned previously) and of coordination processes, is an indication that not all R&D activities at DHS are coordinated to ensure that R&D is not unnecessarily duplicative. As a result, DHS could increase oversight of R&D and improve coordination of R&D activities to ensure that any duplication in R&D activities is purposeful rather than unnecessary, as discussed later in this report. Overlap and the associated risk of unnecessary duplication occur throughout the government, as we have reported previously, and are not isolated to DHS. However, when overlapping activities are coupled with consistent programmatic coordination, the risk of unnecessary duplication can be diminished. DHS and S&T do not have the policies and mechanisms necessary to coordinate R&D across the department and reduce the risk of unnecessary duplication. First, as noted earlier in this report, DHS does not have the policies and guidance necessary to define and oversee R&D investments across the department. While S&T has taken steps to coordinate R&D, DHS has not developed a policy defining who is responsible for coordinating R&D and what processes should be used to coordinate it. Specifically, while S&T has R&D agreements with some components, S&T officials rely on the former IPT process to coordinate with components. For example, S&T division directors cited the IPT process and personal relationships as the primary means to coordinate R&D activities with components and generally felt that they were coordinating effectively. However, other component officials we interviewed viewed S&T’s coordination practices less positively. Specifically, we interviewed six components to discuss the extent to which they coordinated with S&T on R&D activities. Four components stated that S&T did not have an established process detailing how S&T would work with its customers or how it would coordinate all R&D activities at DHS. 
For example, one component stated that S&T has conducted R&D that it thought would address the component’s operational need but, when work was completed, the R&D project did not fit into the operational environment to meet the component’s needs. In addition, without an established coordination process, the risk for unnecessary duplication increases, because components can engage in R&D activities without coordinating them through S&T (see fig. 3). Standards for Internal Control in the Federal Government states that policies and procedures ensure that the necessary activities occur at all levels and functions of the organization—not just from top-level leadership. This ensures that all levels of the organization are coordinating effectively and as part of a larger strategy. Additionally, internal control standards provide that agencies should communicate necessary information effectively by ensuring that they are communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. DHS and S&T could be in a better position to coordinate the department’s R&D efforts by implementing a specific policy outlining R&D roles and responsibilities and processes for coordinating R&D. Furthermore, S&T and DHS have not developed a mechanism to track all ongoing R&D projects conducted across DHS components. Specifically, neither DHS nor S&T tracks all ongoing R&D projects across the department, including R&D activities contracted through the national laboratories. The Homeland Security Act of 2002 gave DHS the authority to use DOE laboratories to conduct R&D and established S&T’s Office of National Laboratories (ONL) to be responsible for coordinating and using the DOE national laboratories. 
Additionally, DHS Directive 143 further directs ONL to serve as the primary point of contact to recommend contracting activity approval for work by the national laboratories and to review all statements of work issued from DHS and directed to the national laboratories. According to S&T, the purpose of that review is to ensure the proposed work is within the scope, and complies with the terms and conditions, of the prime contract between DOE and the national laboratories. We identified 11 components that reimbursed the national laboratories for R&D between fiscal years 2010 and 2013, but ONL could not provide us with any information on those activities and told us it did not track them. According to S&T, ONL’s ability to provide information on activities across the department is limited by components inconsistently operating within the DHS Directive 143 process for working with the national laboratories. According to the Director of ONL, to identify activities not reported through the DHS Directive 143 process, S&T uses other means, such as relationships between components and S&T, reviewing task orders sent to the laboratories from DHS, visiting laboratories, and laboratories self-reporting their work to ONL. We previously reported in 2004 that DHS faced challenges using DOE’s laboratories and balancing the immediate needs of users of homeland security technologies with the need to conduct R&D on advanced technologies for the future. DHS agreed with our recommendation to create a strategic R&D plan to identify and develop countermeasures to chemical, biological, radiological, nuclear, and other emerging terrorist threats and to ensure that it detailed how DHS would work with other federal agencies to establish governmentwide priorities, identify research gaps, avoid duplication of effort, and leverage resources. 
DHS noted that such a plan was critical to the success of the department, and stated that S&T would complete a strategic planning process in 2004 that would be reviewed and updated annually. To date, DHS has not yet developed a departmentwide strategic plan for managing R&D, although S&T has developed its own plan. Standards for Internal Control in the Federal Government states that controls are needed to provide reasonable assurance that, among other things, reliable data are obtained, maintained, and fairly disclosed in reports and that agencies comply with laws and regulations. In addition, in June 2010, we reported that, to fully coordinate cybersecurity R&D activities, R&D information should be tracked in a consolidated database that provides essential information about ongoing and completed R&D. We recommended that the Director of the Office of Science and Technology Policy (OSTP) direct its subcommittee on Networking and Information Technology Research and Development to exercise its leadership responsibilities by, among other things, establishing and using a mechanism to keep track of federal cybersecurity R&D funding. OSTP agreed with our recommendation. Additionally, we previously reported that agencies can enhance and sustain their collaborative efforts by, among other things, agreeing on roles and responsibilities and developing mechanisms to monitor, evaluate, and report on results. DHS officials agreed that such mechanisms to track R&D activities were necessary, and said they have faced similar challenges in managing investments across the department. DHS has attempted to address those challenges by, among other things, creating a database called the Decision Support Tool that is intended to improve the flow of information from component program offices to the DHS Management Directorate to support its governance efforts. The Decision Support Tool could provide an example of how DHS could better track ongoing R&D projects across the department. 
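For illustration only, the kind of consolidated tracking mechanism discussed above could start from a simple record per project, aggregated by component. Everything below (field names, project titles, performers, and dollar figures) is hypothetical and is not drawn from the Decision Support Tool or any actual DHS schema.

```python
# Hypothetical minimal record for department-wide R&D project tracking;
# all fields and values are illustrative, not an actual DHS data model.
from dataclasses import dataclass

@dataclass
class RDProject:
    component: str      # e.g., "S&T", "DNDO", "Coast Guard"
    title: str
    performer: str      # private vendor, university, or national laboratory
    obligations: float  # dollars obligated to date
    fiscal_year: int

def total_obligations(projects, component=None):
    """Sum obligations across projects, optionally for one component."""
    return sum(p.obligations for p in projects
               if component is None or p.component == component)

portfolio = [
    RDProject("S&T", "Explosives detection algorithms", "University X",
              2_500_000.0, 2012),
    RDProject("DNDO", "Radiation portal research", "National Laboratory Y",
              4_000_000.0, 2012),
]
print(total_obligations(portfolio))          # 6500000.0
print(total_obligations(portfolio, "S&T"))   # 2500000.0
```

Even a minimal structure like this would let the department answer the questions the report raises, such as total R&D obligations by component or performer, provided the underlying data quality is assured.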
Officials from DHS’s Office of Program Accountability and Risk Management (PARM) stated that they recently added new data fields to capture more detailed information on component activities, such as additional financial data, at a low cost to DHS, and that such data fields could be added to collect information and track R&D activities across DHS, such as contracts with private companies or universities and the associated costs. However, we reported in March 2012 that DHS executives were not confident enough in the data to use the Decision Support Tool to make acquisition decisions, and that DHS’s plans to improve the quality of the data in this database were limited. We also reported that DHS had limited plans to improve the quality of the data because PARM planned to check data quality only in preparation for key milestone meetings in the acquisition process. That could significantly diminish the Decision Support Tool’s value, because users cannot confidently identify and take action to address problems in meeting cost or schedule goals prior to program review meetings. As a result, improving the Decision Support Tool’s data quality before expanding its use could improve the collection and tracking of R&D information, and the tool could serve as a model for better tracking R&D activities across components. DHS is taking action to address the limitations in the Decision Support Tool’s data quality by working to validate its associated acquisition data. A policy that defines roles and responsibilities for coordinating R&D and coordination processes, as well as a mechanism that tracks all DHS R&D projects, could better position DHS to mitigate the risk of overlapping and unnecessarily duplicative R&D projects. Conducting R&D on technologies is a key component of DHS’s efforts to detect, prevent, and mitigate terrorist threats and is vital to enhancing the security of the nation. 
Multiple entities across DHS conduct various types of R&D in pursuit of their respective missions, but DHS does not have a department-wide policy defining R&D or guidance directing components how to report R&D activities and investments. As a result, DHS does not have the ability to maintain oversight of its total investment in R&D across the department, which also limits its ability to oversee components’ R&D efforts and align them with agencywide R&D goals and priorities. Establishing policies and guidance for defining R&D across the department and outlining the processes and procedures for overseeing R&D would provide more oversight of R&D investments across the department. Furthermore, DHS has taken some steps to coordinate R&D efforts across the department, but does not have a cohesive policy defining roles and responsibilities for coordinating R&D and mechanisms to track all DHS R&D projects. A policy that defines roles and responsibilities for coordinating R&D and coordination processes, as well as a mechanism that tracks all DHS R&D projects, could better position DHS to mitigate the risk of overlapping and unnecessarily duplicative R&D projects. To help ensure that DHS effectively oversees its R&D investment and efforts and reduces fragmentation, overlap, and the risk of unnecessary duplication, we recommend that the Secretary of Homeland Security develop and implement policies and guidance for defining and overseeing R&D at the department. 
Such policies and guidance could be included as an update to the department’s existing acquisition directive and should include the following elements:

- a well-understood definition of R&D that provides reasonable assurance of reliable accounting and reporting of R&D resources and activities for internal and external use;
- a description of the department’s processes and the roles and responsibilities for overseeing and coordinating R&D investments and efforts; and
- a mechanism to track existing R&D projects and their associated costs across the department.

We provided a draft of this report to DHS for its review and comment. DHS provided written comments, which are reproduced in full in appendix V, and concurred with our recommendation. DHS also described actions it plans to take to address the recommendation. Specifically, according to DHS, it plans to evaluate the most effective path forward to guide uniform treatment of R&D across the department in compliance with OMB rules and is considering a management directive, a multi-component steering committee, or new policy guidance to help better oversee and coordinate R&D. DHS plans to complete these efforts by May 1, 2013. Such actions should address the overall intent of our recommendation. However, whatever course DHS chooses, it will be important that its actions address the specific elements we outlined in our recommendation, including developing a definition of R&D, defining roles and responsibilities for oversight and coordination, and developing a mechanism to track existing R&D projects and investments. DHS also provided written technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. This report is also available at no charge on GAO’s website at http://www.gao.gov. 
If you or your staffs have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. This report answers the following questions: 1. How much does the Department of Homeland Security (DHS) invest in research and development (R&D) and to what extent does it have policies and guidance for defining R&D and overseeing R&D resources and efforts across the department? 2. To what extent is R&D coordinated within DHS to prevent overlap, fragmentation, and unnecessary duplication across the department? To determine how much DHS invests in R&D and the extent that it has policies and guidance for defining R&D and overseeing R&D resources and efforts across the department, we reviewed DHS’s budget and congressional budget justifications to identify R&D investments reported from fiscal years 2011 through 2013. We analyzed R&D budget authority, outlays, and obligations included in budget submissions to the Office of Management and Budget (OMB) reported for fiscal years 2010 through 2013. We also analyzed Science and Technology Directorate (S&T), Domestic Nuclear Detection Office (DNDO), and Coast Guard budgets to identify obligations for R&D funded by non-R&D budget activities as identified in object class tables that present obligations by the items or services purchased (e.g. personnel compensation and benefits, contractual services and supplies, acquisition of assets, grants and fixed charges). In addition, we assessed DHS’s management and oversight of its R&D spending against criteria in GAO’s Standards for Internal Control in the Federal Government. We analyzed data from the Federal Procurement Data System Next Generation (FPDS-NG) to identify R&D-related contracts across DHS for fiscal years 2007 through 2011. 
We filtered these contracts to include only those with R&D stages coded as basic research, applied research and exploratory development, or advanced development, which align most closely with recognized definitions of R&D. We excluded the other four stages (engineering development, operational systems development, management/support, and commercialization) because those activities are linked more closely to procurement than to R&D. We also analyzed data from the Department of Energy’s (DOE) national laboratories from fiscal years 2010 through 2012 to identify how much DHS components obligated for R&D-related work at the national laboratories. To determine the extent that R&D is coordinated within DHS to prevent overlap, fragmentation, and unnecessary duplication, we:

- reviewed component R&D plans and project documentation, as well as department and S&T division strategic plans;
- interviewed officials from DHS, DNDO, the Coast Guard, the Transportation Security Administration (TSA), the Office of Health Affairs (OHA), U.S. Customs and Border Protection (CBP), the National Protection and Programs Directorate (NPPD), and the Secret Service to discuss, among other things, their R&D efforts, R&D budgets, and coordination with S&T;
- interviewed DHS budget and acquisition oversight officials to discuss how DHS oversees and manages its R&D resources;
- interviewed S&T’s budget official and Homeland Security Advanced Research Projects Agency (HSARPA) officials, including directors from each of the six technical divisions, to discuss how they coordinated with components and prioritized R&D resources;
- used a data-collection instrument to collect information on S&T R&D projects, their associated costs, and division customers from each HSARPA director, and interviewed the Director of S&T’s Office of National Laboratories, who is responsible for coordinating S&T’s and DHS’s R&D work conducted at the DOE national laboratories, to discuss DHS’s spending at and use of these laboratories; and
- compared DHS’s coordination efforts against relevant legislation and criteria, including federal internal control standards and GAO’s recommended practices for collaboration and coordination, to identify efforts to meet certain provisions and potential areas for improvement.

To seek examples of potential overlap and duplication, we:

- reviewed data on about 15,000 federal procurement contract actions coded as R&D in the Federal Procurement Data System Next Generation (FPDS-NG) made by DHS components from fiscal years 2007 through 2012 to identify contracts that were potentially overlapping or duplicative of contracts issued by different components (this was the total number of DHS contract actions taken from fiscal years 2007 through 2011);
- established 32 key words, based on our knowledge of the likely areas of overlapping R&D related to component missions, to identify areas where components may have issued contracts similar in scope and to eliminate areas where duplicative activities were likely to be present but acceptable (e.g., personnel support and management services), and searched for the key words in the FPDS-NG data set to identify contracts containing the same key words issued by more than one component;
- independently analyzed the contract descriptions and identified 50 R&D contracts issued by six components—S&T, the Coast Guard, TSA, CBP, OHA, and the Federal Emergency Management Agency (FEMA)—that appeared to overlap, and interviewed officials from those components to discuss the nature of those contracts; and
- obtained 47 of the 50 contracts and analyzed each contract’s statement of work and objectives to determine the type of R&D activity and to identify whether each contract was overlapping with or duplicative of any of the other 46 contracts; two analysts independently reviewed each contract and then came to agreement regarding the presence of overlap and duplication.

We could not determine the full extent of duplication or overlap in the department, because the FPDS-NG data system captures only a portion of the total R&D activities occurring at DHS and we did not review the documentation for, or draw a random sample of, all 15,000 R&D contract actions. However, the results of our analysis illustrate overlap and the potential for unnecessary duplication. We also used our past work on fragmentation, overlap, and duplication across the federal government; Standards for Internal Control in the Federal Government; and our prior reports to assess DHS’s coordination of R&D across the department. We assessed the reliability of the data we used by reconciling the data with published data and applicable quality control procedures to maintain the integrity of the data, and by interviewing DHS budget and procurement officials responsible for overseeing the data systems. In addition, we reviewed available FPDS-NG documentation, such as the user manual, and OMB guidance to identify related quality control mechanisms. We also assessed the reliability of data on DOE’s national laboratory work for others by interviewing DOE officials responsible for compiling and reporting those data. We concluded that these data were sufficiently reliable for the purposes of this report. We conducted this performance audit from September 2011 through September 2012 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Basic research: Systematic study directed toward fuller knowledge or understanding of the fundamental aspects of phenomena and observable facts without specific applications toward processes or products.

Applied research: Systematic study to gain knowledge or understanding necessary to determine the means by which a recognized and specific need may be met.

Development: Systematic application of knowledge or understanding, directed toward the production of useful materials, devices, and systems or methods, including design, development, and improvement of prototypes and new processes to meet specific requirements.

Basic research: Systematic study to gain knowledge or understanding of the fundamental aspects of phenomena and observable facts without specific applications toward processes or products.

Applied research: Systematic study to gain knowledge or understanding necessary for determining the means by which a recognized and specific need may be met.

Development: Systematic use of the knowledge and understanding gained from research for the production of useful materials, devices, systems, or methods, including the design and development of prototypes and processes.

Basic research: Research directed toward increasing knowledge in science, with the primary aim being a fuller knowledge or understanding of the subject under study rather than any practical application of that knowledge.

Applied research: The effort that (1) normally follows basic research, but may not be severable from the related basic research; (2) attempts to determine and exploit the potential of scientific discoveries or improvements in technology, materials, processes, methods, devices, or techniques; and (3) attempts to advance the state of the art.

Development: Systematic use of scientific and technical knowledge in the design, development, testing, or evaluation of a potential new product or service (or of an improvement in an existing product or service) to meet specific performance requirements or objectives.

Research: (1) Systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and of observable facts without specific applications toward processes or products in mind; it is farsighted, high-payoff research that provides the basis for technological progress. (2) Systematic study to understand the means to meet a recognized and specific need.

Development: Systematic expansion and application of knowledge to develop useful materials, devices, and systems or methods. May be oriented, ultimately, toward the design, development, and improvement of prototypes and new processes to meet general mission area requirements. Applied research may translate promising basic research into solutions for broadly defined military needs, short of system development.

In addition to the contact named above, Chris Currie, Assistant Director, and Gary Malavenda, Analyst-in-Charge, managed this assignment. Emily Gunn and Margaret McKenna made significant contributions to this report. 
Also contributing to this report were Katherine Davis, Michele Fejfar, Eric Hauswirth, Carol Henn, Richard Hung, Julia Kennon, Tracey King, Nate Tranquilli, Katherine Trimble, and Sarah Veale.
Conducting R&D on technologies for detecting, preventing, and mitigating terrorist threats is vital to enhancing the security of the nation. Since its creation, DHS has spent billions of dollars researching and developing technologies used to support its missions, including securing the border, detecting nuclear devices, and screening airline passengers and baggage for explosives, among others. Within DHS, S&T conducts R&D and is the component responsible for coordinating R&D across the department, but other components, such as the Coast Guard and DNDO, also conduct R&D to support their respective missions. GAO was asked to identify (1) how much DHS invests in R&D and the extent to which DHS has policies and guidance for defining R&D and overseeing R&D resources and efforts across the department, and (2) the extent to which R&D is coordinated within DHS to prevent overlap, fragmentation, or unnecessary duplication. GAO reviewed information on DHS R&D budgets, contracts, and DHS spending on R&D at DOE national laboratories for fiscal years 2010 through 2012. GAO also reviewed DHS R&D plans and project documentation, and interviewed DHS headquarters and component officials. The Department of Homeland Security (DHS) does not know the total amount its components invest in research and development (R&D) and does not have policies and guidance for defining R&D and overseeing R&D resources across the department. According to DHS, its Science & Technology Directorate (S&T), Domestic Nuclear Detection Office (DNDO), and U.S. Coast Guard are the only components that conduct R&D and, according to GAO’s analysis, these are the only components that report budget authority, obligations, or outlays for R&D activities to the Office of Management and Budget (OMB) as part of the budget process. However, GAO identified an additional $255 million in R&D obligations by other DHS components.
For example, S&T reported receiving $50 million in reimbursements from other DHS components to conduct R&D. Further, 10 components obligated $55 million for R&D contracts to third parties and $151 million to Department of Energy (DOE) national laboratories for R&D-related projects, but these were not reported as R&D to OMB. According to DHS, it is difficult to identify all R&D investments across the department because DHS does not have a department-wide policy defining R&D or guidance directing components on how to report all R&D spending and activities. As a result, it is difficult for DHS to oversee components’ R&D efforts and align them with agency-wide R&D goals and priorities. Developing specific policies and guidance could assist DHS components in better understanding how to report R&D activities, and better position DHS to determine how much the agency invests in R&D to effectively oversee these investments. S&T has taken some steps to coordinate R&D efforts across DHS, but the department's R&D efforts are fragmented and overlapping, which increases the risk of unnecessary duplication. R&D at DHS is inherently fragmented because S&T, the Coast Guard, and DNDO were each given R&D responsibilities in law, and other DHS components may pursue and conduct their own R&D efforts as long as those activities are coordinated through S&T. S&T uses various mechanisms to coordinate its R&D efforts, including component liaisons, component R&D agreements, joint R&D strategies, and integrated R&D product teams composed of S&T and component officials. However, GAO identified 35 instances of overlap among contracts that DHS components awarded for R&D projects. For example, S&T and the Transportation Security Administration both awarded overlapping contracts to different vendors to develop advanced algorithms to detect the same type of explosive.
While GAO did not identify instances of unnecessary duplication among these contracts, DHS has not developed a policy defining who is responsible for coordinating R&D and what processes should be used to coordinate it, and does not have mechanisms to track all R&D activities at DHS that could help prevent overlap, fragmentation, or unnecessary duplication. For example, S&T did not track homeland security-related R&D activities that DHS components contracted through DOE national laboratories from fiscal year 2010 through 2013; thus, it could not provide information on those contracts. Developing a policy defining the roles and responsibilities for coordinating R&D, and establishing coordination processes and a mechanism to track all R&D projects, could help DHS mitigate existing fragmentation and overlap and reduce the risk of unnecessary duplication. GAO recommends that DHS develop policies and guidance for defining, reporting, and coordinating R&D activities across the department, and that DHS establish a mechanism to track R&D projects. DHS concurred with GAO’s recommendations.
DOI’s BLM, FWS, NPS, and Reclamation, and USDA’s FS manage more than 638 million acres of land in the United States, including lands in national forests, grasslands, parks, refuges, and reservoirs. These agencies manage the federal lands for multiple uses, including recreational activities such as camping and boating. To enhance visitor services while protecting natural and other resources, as well as to address concerns about the prior recreation fee program, Congress passed REA, which authorized the collection and use of recreation fees at federal lands and waters. BLM’s mission is to sustain the health, diversity, and productivity of the public lands for the use and enjoyment of present and future generations. BLM manages more than 260 million acres located primarily in 12 western states. The agency manages and issues permits for activities such as recreation, livestock grazing, timber harvesting, and mining. Recreation fees are collected under REA at about 100 BLM field offices. The mission of the FWS is to work with others to conserve, protect, and enhance fish, wildlife, and plants and their habitats for the continuing benefit of the American people. FWS manages more than 545 national wildlife refuges and 37 large, multiple-unit wetland management districts covering more than 96 million acres of land throughout the nation, as well as 69 national fish hatcheries and 46 administrative sites. As of August 2006, recreation fees are collected under REA at 166 FWS sites. An additional 32 national wildlife refuges only sell passes. The mission of NPS is to conserve the scenery, the natural and historic objects, and the wildlife of the national park system so that they will remain unimpaired for the enjoyment of this and future generations. NPS manages 390 national park units covering more than 84 million acres in 49 states, the District of Columbia, American Samoa, Guam, Puerto Rico, Saipan, and the Virgin Islands.
NPS manages many of the nation’s most precious natural and cultural resources. About 190 park units collect recreation fee revenues through entrance fees, use fees, and pass sales. An additional 31 units only generate revenue from the National Parks Pass and other pass sales. The mission of Reclamation is to manage, develop, and protect water and related resources in an environmentally and economically sound manner in the interest of the American public. Reclamation manages about 8.5 million acres of land associated with water projects in 17 western states. The agency delivers water and hydroelectric power through the maintenance and administration of dams and reservoirs. Currently, Reclamation has identified seven locations that meet REA requirements for collecting standard amenity fees. The mission of the USDA FS is to sustain the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. FS manages more than 190 million acres throughout the country. The agency manages and issues permits for activities such as skiing, livestock grazing, recreation, timber harvesting, mining, and rights-of-way for road construction. Recreation fees are collected at about 410 ranger districts in 155 national forests. BLM, FWS, NPS, and FS have had broad authority to collect recreation fees for over 40 years, first under the Land and Water Conservation Fund (LWCF) Act of 1965 and later under the Recreational Fee Demonstration Program (Fee Demo). Initially, Fee Demo authorized only a limited number of sites to charge and retain recreation fees—up to 100 sites per agency—but Congress later expanded the authority to allow any number of sites to charge and collect recreation fees. Under Fee Demo, the agencies were encouraged to be innovative in designing and collecting fees and to coordinate their fees with other federal, state, and local recreational sites. The program yielded substantial benefits for recreation sites by funding significant on-the-ground improvements.
Total fee collections were about $192 million in fiscal year 2004, with about 67 percent, or $129 million, collected by NPS; the four agencies collected a total of over $1 billion in recreation fee revenues during the 8 years of the Fee Demo program. Nevertheless, under the demonstration program, the majority of the agencies’ funds were still provided to them through annual appropriations. Between 1998 and 2004, GAO conducted several reviews of the Fee Demo program, resulting in numerous reports and testimonies. During these reviews, we found that Fee Demo was successful in raising revenues and providing benefits to the agencies, but that improvements could be made to strengthen the program. GAO informed Congress of several areas that needed to be addressed to ensure the program’s success. These included: (1) providing a more permanent source of revenue to supplement existing appropriations by providing the agencies with a more permanent fee authority; (2) encouraging effective coordination and cooperation among agencies and individual fee sites to better serve visitors by making the payment of fees more convenient and equitable, while at the same time reducing visitor confusion about similar or multiple fees charged at adjacent federal recreation sites; (3) providing the agencies with greater flexibility regarding fee revenue expenditures by modifying the requirement that at least 80 percent of fee revenues remain at the collection site; and (4) encouraging fee innovation through pricing structures based on extent of use or peak pricing. In 2004, Congress passed REA, in part, as a response to the suggestions and concerns documented in these previous reports. REA repealed several prior authorities such as those contained in the LWCF Act, Fee Demo, and the National Parks Omnibus Management Act of 1998, which authorized national passes including the National Parks Pass.
However, many of the fees currently charged under REA were first instituted under the LWCF Act or during the Fee Demo program. For fee revenues, REA provides that recreation fees collected under the act be deposited in a special fund account and remain available for expenditure without further appropriation action. REA allows the revenues to be used in a variety of ways such as for repair, maintenance, and facility enhancement; interpretation and visitor information; law enforcement; and direct operating or capital costs associated with the fee program. However, not more than an average of 15 percent of total recreation fee revenues may be used for administration, overhead, and indirect costs related to the recreation fee program. Further, REA prohibits the use of recreation fee revenues for employee bonuses or for biological monitoring under the Endangered Species Act. Both visitors to federal lands and agency officials generally support recreation fees and tout the benefits that fee revenues provide through improved facilities and services. Some assert that the recreation fee program will improve recreational opportunities and that it is a needed supplement to general fund appropriations. However, concerns about the recreation fee program continue to exist for a variety of reasons. For example, some people are concerned that the fee program under REA does not go far enough in simplifying fees, that federal lands will be overdeveloped to attract fee-paying tourists, and that the law fails to ensure that most collections will be used for the agencies’ highest priorities. Other critics continue to oppose recreation fees in concept, in large part, on the grounds that the cost of operating and maintaining federal lands should be covered by general fund appropriations and that these fees constitute a barrier to public access to federally managed lands. 
However, in times of budget constraints, recreation fees may provide an important source of additional funding needed to sustain agency operations. The four technical working groups formed by DOI and USDA to facilitate interagency cooperation and coordination on specific REA implementation issues have made progress. However, progress has been slow in some areas, such as resolving issues surrounding the RRACs and the new interagency pass, possibly delaying agency implementation of these aspects of the law. For example, the working group responsible for forming advisory committees, such as RRACs, has missed target dates, which has ultimately delayed the establishment of some new recreation fees. GAO has reported in the past that agencies face barriers any time they attempt to work collaboratively, but that there are key practices that can be applied to help enhance and sustain agency collaboration. For example, it is important to establish compatible policies, procedures, and other means to operate across agency boundaries. While one working group has finalized its interagency handbook, another is not planning to issue any guidelines, and two have not issued all of their interagency guidelines and agreements. For example, the working group responsible for preparing for the new interagency federal lands pass has not issued interagency guidelines outlining such details as pass eligibility requirements and distribution of costs and revenues among the agencies, which could potentially delay implementing the new pass. The RRACs/Public Participation working group has focused on establishing RRACs or utilizing existing advisory councils as part of the REA public participation requirements. These committees may make recommendations to the Secretaries of Interior and Agriculture related to public concerns about implementation of standard and expanded amenity fees or the establishment of a specific recreation fee site managed by BLM or FS, among other issues.
However, the development of the RRACs has been slow, which has delayed the implementation of new fees or fee changes at some units. According to a June 2005 interagency presentation, it was expected that the RRACs would be established with members appointed by the end of 2005. Despite progress toward establishing RRACs, some tasks have taken longer than originally estimated and, as of August 2006, no state or regional RRACs are fully operational. Before the working group could move forward with many aspects of establishing RRACs—including issuing a charter and soliciting nominations for membership—or existing advisory councils could begin reviewing fee issues, an interagency agreement on implementing the RRAC requirements had to be signed. This interagency agreement, which covers issues such as the specific duties of the new RRACs and existing advisory councils, was finalized on September 1, 2006, but there is no time line for implementation, according to an agency official. In addition, other preparatory work to implement the new RRAC requirement has begun. For example, BLM and FS have begun educating existing advisory councils and the public about recreation fees and the REA public participation requirements. Because BLM and FS generally cannot create new fees or modify existing fees (per each agency’s interim policy) without the participation of RRACs, or existing advisory councils, the delay in establishing these advisory committees has prevented many units from making fee decisions. Agency officials at 26 percent of BLM and almost 38 percent of FS fee-collecting units responding to our survey, or 171 units out of 481 total units, said that the establishment of or changes to recreation fees at their units had been prevented or delayed to a moderate, great, or very great extent since the passage of REA in December 2004.
For example, the Dillon Ranger District in the White River National Forest in Colorado is currently considering modifying its fee structure but has been delayed because the RRACs are not operational. Because adding new fees or increasing existing fees generally results in an overall increase in fee revenue, some units without functioning RRACs in place may be losing fee revenue that could be used to further enhance visitor services. Some units, however, were allowed to add or modify fees prior to implementation of the new RRAC requirement if the fee changes were already in progress and public notification and participation requirements had been met. For example, about 25 new expanded amenity fees have been implemented at FS units since early 2006, most of which are for cabin rentals, according to the FS Fee Program Coordinator. In addition, in some states there are no units that currently want to add or modify fees, so the implementation of the RRAC requirements is not delaying fee changes at any units in those states. The organizational structure for the RRACs and use of existing advisory councils was approved by DOI and USDA in March 2006 via an interagency organizational agreement that established, among other things, how the REA RRAC requirement will be met in each state/region. In the majority of western states, BLM and FS will use joint RRACs or committees, many of which will be composed of existing BLM advisory councils—REA allows existing advisory committees or fee advisory boards to perform the RRAC duties. In addition, five new RRACs are being established nationwide. Appendix III outlines the organizational structure and membership requirements for the RRACs.
Finally, in addition to the specific requirement for BLM and FS to establish RRACs or use existing advisory councils to review fee issues, REA has several other provisions for public participation that apply to all agencies, and these new public participation requirements have also delayed the implementation of new fees or fee changes at some units. DOI and USDA issued interagency guidelines on public participation that apply to all participating agencies in September 2005. The guidelines direct the Secretaries of Agriculture and Interior to publish a Federal Register notice for establishing each new recreation fee area 6 months prior to its establishment, as required by REA. The guidelines also direct the agencies to identify outreach efforts, such as public meetings, to encourage public involvement in establishing recreation fee areas and to annually post notices at each recreation fee area describing the use or anticipated use of recreation fees collected at that site during the previous year. Some of the agencies, including NPS, BLM, and FS, have issued agency-specific guidance for meeting REA public participation requirements. According to agency officials, the public participation requirements have delayed fee changes or the establishment of new fees at some units. Agency officials at almost 17 percent of fee-collecting units within all four agencies responding to our survey said that the public participation process in general had delayed or prevented the establishment of or changes to recreation fees at their unit to a moderate, great, or very great extent since the passage of REA in December 2004. The Interagency Pass working group has mainly focused on preparations for the new interagency “America the Beautiful—the National Parks and Federal Recreational Lands Pass.” While the agencies have made progress in preparing to implement the new pass, some issues remain unresolved. 
For example, while the working group has generally determined how revenues from passes sold centrally will be distributed for the first 3 to 5 years, it is unclear how these revenues will be distributed among all participating agencies beyond this time frame. In addition, the working group has not determined the price to charge for the new pass. According to DOI, the most complex and time-consuming aspect of implementing REA relates to establishing this new pass. The Interagency Pass working group has been addressing the various issues involved with the pass, including the price of the pass, the distribution of revenues from the sale of the pass, and operational issues such as accepting the pass and tracking its use at recreation sites. The target date for implementing the new pass is January 1, 2007, with passes available for distribution by November 1, 2006. According to the working group, this is a very tight time line that will require the contracting processes to stay on schedule and subsequent design, production, and shipping deadlines to be met. The standard version of the new pass will be available to the general public; in addition, there will be versions of the pass available to senior citizens, persons with disabilities, and volunteers. Table 2 provides a description of each of the versions of the new pass. The price of the standard pass has not yet been determined. The Golden Eagle, Golden Age, and Golden Access Passports and the National Parks Pass will continue to be sold until the new interagency passes are available, and all existing passes will be valid for the lifetime of the pass (e.g., 1 year from purchase for the National Parks Pass and Golden Eagle Passport; lifetime of the pass holder for the Golden Age and Access Passports). While the price of the new standard pass had yet to be determined as of August 2006, the pricing decision is critically important because of the potential impact of the pass on entrance and standard amenity fee revenues.
In particular, agency officials at NPS have emphasized the importance of pricing and marketing decisions and their potential impacts on entrance fee revenue. To provide information to help determine the price of the new pass, the agencies entered into a cooperative agreement with the University of Wyoming to conduct a pricing analysis. For the study, researchers conducted six focus groups throughout the nation, collected benchmarking information from a number of U.S. state parks and Canadian national parks, and developed and implemented a random telephone survey of recreation users. According to an NPS headquarters official, the working group is not considering potential revenue losses due to the new interagency pass, only what the public is willing to pay for the new pass. However, in commenting on a draft of this report, NPS headquarters officials informed us that revenue impacts will be considered in the pricing decision. According to a DOI official, the price of the new pass was to be determined in the summer of 2006. As of August 2006, the price of the new standard (annual) pass had not yet been established. The details of the plan for distributing revenues from the sale of new interagency passes sold centrally, such as through the Internet or outside vendors, beyond the first 3 to 5 years of the pass program are still uncertain. All pass revenue from passes sold at units will remain within the agency where the pass was sold, and it will be up to each agency to determine how to redistribute pass revenues within the agency. For the first 3 to 5 years of the pass program, revenues from passes sold centrally will initially be used to cover administrative costs of the new pass and to reimburse NPS for the almost $2.4 million it loaned to fund development of the new pass.
After administrative costs for the new pass are covered and NPS is reimbursed, any remaining central pass revenues will be distributed equally among all participating agencies for at least the first 3 to 5 years of the program, with the goal of assisting all agencies in establishing a pass program. However, this plan may be revisited if central pass sales significantly increase or decrease or if central pass revenue after 3 years is not adequate to cover administrative costs of the program or to reimburse NPS for its loan. The long-term plan for revenue distribution beyond the initial 3 to 5 years is more uncertain because these plan details have not been agreed upon. According to an official from the working group, the current long-term plan is to distribute central pass revenues to the agencies based on a formula that takes into account pass use, where passes were purchased, and possible additional factors. However, the details of the formula have not been determined, and there are some potential problems with the collection of pass-use data to be used in the formula. While units are generally able to track the number of passes sold, it would be difficult for many units to collect accurate data about use of the pass. At most NPS and FWS sites, fees covered by the new interagency pass will generally be collected at staffed entry points, whereas at BLM and FS sites, fees covered by these new passes will generally be collected at unstaffed and often remote locations where fee compliance and enforcement will be irregular and infrequent. One way to track pass usage would be to swipe a magnetic strip on the passes at recreation site entry gates. However, even within NPS, whose sites frequently have staffed entry points, only one-third of the sites with entrance fees are currently capable of reading magnetic strips at their entry gates. 
It would likely be difficult and expensive to install technology to read magnetic strips at many remote and unstaffed units by January 2007, and compliance with such systems would be difficult to enforce at sites without staffed entry booths. According to a member of the working group, the working group is aware of these issues and, while it has not yet addressed them, the group plans to develop a consistent data collection strategy that the agencies can use at unstaffed locations to determine pass usage. Agencies will be responsible for implementing the strategy, and units will be expected to collect data on the use of the pass after the new interagency pass is released. As of August 2006, the agencies are engaged in a contracting process to acquire the goods and services necessary to implement the new interagency pass and are planning to issue the pass by the January 2007 target date. A Request for Proposal (RFP) for the contract was published on June 5, 2006, and, according to agency officials, it is unknown when the contract will be awarded. The U.S. Government Printing Office (GPO) will print the new pass and any accompanying products. Agency officials from the Interagency Pass working group have acknowledged that they are working within a very tight time line, but have said that they are committed to issuing the new pass by January 2007. However, certain critical aspects in the pass development time line have taken much longer than originally anticipated. For example, the RFP for contracting services was originally estimated to be issued in fall 2005, but that date was pushed back several times before the RFP was finally published in June 2006. In addition, interagency guidelines for the new pass that were estimated in June 2005 to be completed in fall 2005 had not been completed as of August 2006. However, the working group still has several months to meet its target pass implementation date of January 2007.
One goal of the new single interagency pass is to reduce visitor confusion over which passes can be accepted where, since the various passes currently offered by the agencies create considerable confusion among the visiting public. The majority of units responding to our survey, almost 63 percent, were aware that the visiting public was confused about the use of current national passes, regional passes, or annual passes. The factor most frequently cited as causing visitor confusion was where the different types of passes are accepted, with 82 percent of units responding that this factor causes confusion to a moderate, great, or very great extent. Other factors cited by more than two-thirds of survey respondents as causing confusion were the differences in benefits between passes, the recreation uses covered by each pass, the differences between the Golden Eagle Passport and the National Parks Pass, the difference between federal and nonfederal units, and understanding the differences between various passes (e.g., eligibility, cost, and benefits). Given that there will be overlap between the current National Parks Pass, the Golden Eagle, Golden Age, and Golden Access Passports, and the new interagency pass, it will be important for the new pass guidelines and agency-specific guidance and training on it to address these issues and provide unit staff with materials and information to better educate the public. The Fee Collection/Expenditure working group was established to address organizational concerns, implementation issues, and coordination among the agencies as they relate to fee collections and expenditures. While the agencies individually took steps after the enactment of REA to assess their recreation fee programs and begin implementing the new act, the working group’s main task was to develop common definitions and policy guidance to establish a basis for consistent implementation of REA and common reporting by each of the agencies.
This working group finalized an interagency handbook with common definitions and guidance—the Interagency Implementation Handbook for Federal Lands Recreation Enhancement Act—in March 2006. The interagency handbook provided definitions for some of the terms used in the law, such as “designated developed parking,” “permanent trash receptacle,” “reasonable visitor protection,” and “special recreation permit fees” in order to clarify terms that may be interpreted differently by the various agencies. In addition to the definitions, the handbook provided general policy guidance regarding certain aspects of the law—such as overall guidance on some aspects of the new interagency pass and annual reporting of budgetary information—while delegating the authority to develop and implement policies on other issues to the individual agencies. For example, the handbook directed the agencies to develop and implement a policy for revenue distribution decisions, including retention of recreation fee revenues and agencywide distribution of funds. For the sections of REA that were delegated to the individual agencies, the handbook directed the agencies to develop written policy guidance that incorporates the standard definitions and policy guidelines. According to a working group official, the Fee Collection/Expenditure working group is no longer formally meeting since developing the interagency handbook was the group’s main task, and the handbook has now been finalized. The Communications working group was formed to facilitate interagency communications about REA implementation issues with Congress, the public, and other interested third parties, such as states and localities. The working group organized listening sessions to gain public input on the RRACs and the new interagency pass. 
The agencies have periodically briefed congressional staffers on a variety of issues, including the Federal Lands Recreation Enhancement Act First Triennial Report to Congress: Fiscal Year 2006, which was released in May 2006. According to agency officials, the working group now meets infrequently and has not issued any joint press releases to the public because all press releases regarding REA have thus far been issued by individual agencies. After the passage of REA, agencies directed their units to assess and modify their fee programs to comply with REA criteria. Although most units have made some modifications to their programs, such as converting fees, eliminating sites and fees, or adding amenities, some units are still in transition and may still need to add required amenities. Some responding units, however, reported collecting standard amenity fees without having all six amenities required under REA. In commenting on a draft of this report, agency officials said many of these survey responses were in error. Although Reclamation was included as a participating agency under REA, it has yet to make a final decision about whether to implement REA. Also, most BLM, FWS, NPS, and FS units reported that some kind of guidance is available; however, the agencies have not yet issued final guidance, and many unit officials indicated that some aspects of the law are unclear and that they need more specific guidance on how to add new fee sites or modify existing fees to fully implement the law. To implement REA, participating agencies reviewed their recreation fees under the former Fee Demo program and other legal authorities and instructed units to make necessary modifications to ensure compliance with key REA provisions. While most units converted fees, eliminated fees, or added amenities to comply with REA, some are still transitioning toward taking such actions and, in some cases, are charging fees without having all of their required amenities.
One agency, Reclamation, has assessed its recreation fees but has not decided whether it will implement REA. In 2005, all agencies assessed existing fee programs to determine whether existing fee-collecting sites met REA requirements, and some units made modifications to comply with REA. Overall, the transition from Fee Demo to REA was easiest for NPS and FWS, both of which charged entrance fees under Fee Demo, were authorized to charge such fees under REA, and continued to charge entrance fees. As a result, the transition to REA had little impact on these agencies. NPS eliminated a day-use fee at the Exit Glacier site in Kenai Fjords National Park in Alaska because of concerns that it would be perceived as an entrance fee, which is prohibited under both the Alaska National Interest Lands Conservation Act and REA. FWS eliminated an entrance fee at Gavin’s Point National Fish Hatchery in South Dakota because fish hatcheries are not allowed to charge entrance fees under REA. The transition from Fee Demo to REA had more of an impact on FS and BLM since REA provided additional criteria for fee sites and prohibitions on certain fees at these agencies. Unlike Fee Demo, REA limits the authority of BLM and FS, authorizing these agencies to collect fees only at locations with a certain level of infrastructure and/or services and prohibiting charges for parking, general access to dispersed areas with little or no investment, and scenic overlooks, among others. BLM and FS assessed existing fee programs and either eliminated fees, converted fees, or added amenities in order to convert entrance or day-use fees to standard amenity fees. BLM and FS also assessed existing campgrounds and other developed facilities to ensure that they had at least the minimum number of required amenities to charge an expanded amenity fee. 
BLM eliminated several fees after passage of REA, including fees for overlooks at Imperial Sand Dunes in California, fees at undeveloped sites at Orilla Verde Recreation Area in New Mexico, and youth fees at several sites, including Cape Blanco Lighthouse in Oregon. For BLM, a key change was converting existing entrance fees to standard amenity fees where sites met the new criteria. According to a BLM headquarters official, BLM converted entrance fees at 10 sites to standard amenity fees. According to state coordinators, only one of these sites, located in Arizona, did not meet standard amenity criteria and had to add an informational kiosk. Other BLM sites converted various fees charged for activities such as camping to expanded amenity fees. For example, campgrounds at Fisherman’s Bend Recreation Area in western Oregon had at least the minimum amenities required by REA to convert a camping fee to an expanded amenity fee. FS reviewed its existing recreation fees and stated that it dropped 437 sites, such as trailheads and picnic areas, from its fee program because they did not meet the new criteria described under REA. Under Fee Demo, FS charged fees for entrance into large areas, sometimes entire forests. However, REA prohibited FS from charging entrance fees and allowed FS to charge standard amenity fees only if the sites provide the required level of amenities. In addition to dropping fee sites, numerous FS units added amenities to bring sites into compliance with REA. According to one FS regional coordinator, if a developed site was missing one or two amenities, then the unit added those amenities; otherwise, the site was dropped from the fee program. Concerns about FS compliance with REA criteria have been raised by users who are critical of the use of High Impact Recreational Area (HIRA) designations and standard amenity fee areas. 
While HIRAs are not specifically mentioned in REA, FS relies on a section of REA that authorizes standard amenity charges for the use of “an area” as authority to designate HIRAs. The Interagency Implementation Handbook for Federal Lands Recreation Enhancement Act defines a HIRA as an area of concentrated recreation use that includes a variety of developed sites providing a similar recreation opportunity that incur significant expenditures for restoration, public safety, sanitation facilities, education, maintenance, and other activities necessary to protect the health and safety of visitors, cultural resources, and the natural environment. The handbook also defines limitations on which areas can be designated as a HIRA. For example, whole administrative units, such as a national forest or a Reclamation project, cannot be declared a HIRA. During the past few years, FS identified HIRA sites and has proceeded to charge standard amenity fees for the use of these areas under REA. According to agency officials, the HIRA designation is a logical way of categorizing amenities supporting high levels of recreation use, and collected fees go to maintain and clean the amenities provided, such as restroom facilities. Another concern about the HIRAs is that some access points into parts of wilderness areas that are not considered part of a HIRA are only accessible via the HIRA, so visitors must still pay the standard amenity fee to access these parts of the national forests. In addition, some assert that because REA prohibits charging a fee “solely for parking” or “driving through, walking through, boating through, horseback riding through, or hiking through…without using the facilities and services,” the standard amenity fees for HIRAs are prohibited in some cases. For example, a visitor to an Arizona national forest challenged FS citations issued to her for failing to display the required day pass permit to travel into a HIRA. 
The visitor was cited on two occasions because she parked within a HIRA to hike the area without having paid for the day pass permit. On September 5, 2006, a district court held that REA bars FS from collecting fees for parking along roads or trailsides and that FS acted “far beyond its legislative authority” in its attempt to collect the fee. Accordingly, the court dismissed the citations against the visitor. According to FS officials, the agency significantly decreased the size of many of its HIRAs to cover only areas where the required standard amenities are within reasonable access. For example, the entire Flaming Gorge National Recreation Area in Utah and Wyoming had an entrance fee under Fee Demo; now only 4 percent of the recreation area is subject to fees. Another example is the Los Padres National Forest in southern California, which reportedly decreased the size of its HIRA from almost 1.5 million acres to 71,000 acres while also removing 37 fee sites. However, in testimony before the Senate Committee on Energy and Natural Resources, Subcommittee on Public Lands and Forests, on October 26, 2005, representatives from the Arizona and Western Slope No-Fee Coalitions charged that BLM and FS are using the HIRA and standard amenity concepts to circumvent the intent of Congress and charge fees for areas that do not have the amenities required by REA. REA, however, does not provide a definition for “area,” and thus the criteria used to define an “area” are open to the agencies’ discretion. For example, the Arapaho National Recreation Area in Colorado charges a standard amenity fee for an area it defines as a HIRA that contains 25 developed sites, including picnic areas, boat launches, campgrounds, and trailheads. Not all six of the amenities that are required under REA are collocated at each of the developed sites. 
However, since all six of the required amenities are somewhere within the hundreds of acres of its designated HIRA, the FS is charging a standard amenity fee for the entire area under REA. In the October 26, 2005, Senate subcommittee hearing, a USDA official acknowledged that FS implementation of REA is a “work in progress” and that different local conditions and characteristics make it difficult to develop HIRA criteria that fit all circumstances. According to this official, FS has continued to work on providing consistent signage and to identify areas that may not meet the criteria for charging fees and plans to have the RRACs comment on how the agency is applying HIRA criteria. We also found that some BLM and FS units still do not meet REA requirements for charging standard amenity fees. Based on the results of our survey, of the 195 BLM and FS units that reported that they charge a standard amenity fee, 38 reported they did not provide all six amenities that are required for them to charge the fee. Two BLM units and 36 FS units reported that they did not provide all six required amenities. The amenities that the units were most frequently lacking were a permanent trash receptacle and interpretive signs, exhibits, or kiosks. Although these units reported in survey responses that their unit did not have all six required amenities, BLM state-level officials and FS headquarters officials stated they believed all of their fee-collecting units were in compliance with REA criteria. In commenting on a draft of this report, both BLM and FS indicated that these responses likely reflected unit officials’ confusion over the fee terminology in the survey question or a misunderstanding of the definitions of the required amenities, rather than an actual lack of amenities such as picnic tables. However, during interviews with agency officials, we learned that some units charging a standard amenity fee did not have all six required amenities, but had plans to add these amenities. 
For example, the Meadow Creek site at the Arapaho-Roosevelt National Forest in Colorado lacked two of the six amenities—picnic tables and interpretive signage—required under REA when we visited it in December 2005. The unit has continued to charge a standard amenity fee since REA passed because unit officials thought it would be confusing to visitors to temporarily discontinue the fee while they worked on upgrading the area to meet REA criteria. The unit received a $20,000 grant in 2005 from the central fee revenue fund to add picnic tables and signage, as well as fire rings, to the area. According to a unit official, the required amenities were added during the summer of 2006, and the Meadow Creek site now has all of the required amenities in place. Some FS unit staff also found the standard amenity criteria at odds with wildlife management practices. For example, several national forests near the Canadian border are in grizzly bear areas, so FS has instructed the public to “pack out,” or dispose of their trash outside of camping and day-use areas, rather than install costly bear-proof garbage cans. Now, if these forests are going to continue charging recreation fees at these sites, REA requires FS to put trash receptacles in the areas. In another example, picnic tables were previously removed from Mt. Evans, in the Arapaho-Roosevelt National Forest, because of wildlife interaction issues. However, in order to comply with REA, FS must provide all six required amenities, including picnic tables. In commenting on a draft of this report, BLM headquarters officials stated that they checked with the two units that reported having fewer than six of the required amenities in their response to our survey. The officials determined that the two units’ reports were in error and that the units did offer all six amenities. 
Similarly, the FS headquarters staff made further inquiries of the 36 units that reported fewer than the six required amenities and determined that some of the information that the units reported on their survey response was in error. Based on information from FS officials and our analysis, the status of those units is as follows:
12 units did not have a standard amenity fee but instead had an expanded amenity fee, which does not have the same amenity requirements under REA;
11 units did have the required six amenities and did not accurately report this in their survey response to us;
4 units had a standard amenity fee for a visitor or interpretive center, which under REA may be charged without having the six required amenities;
2 units had no standard amenity fee and should have reported this in their survey response to us; and
7 units have not yet responded to the follow-up inquiries.
It should be noted, however, that the results of BLM and FS headquarters officials’ inquiries have not been verified. Reclamation has not made a decision to move forward with REA implementation. Agency officials are assessing Office of the Solicitor advice concerning how the act applies to Reclamation’s operational situation and to the alternate authority for Reclamation to charge fees under the Federal Water Project Recreation Act (FWPRA). Reclamation had requested advice from the solicitor’s office because of its unusual operational situation, which includes the management of about 250 of Reclamation’s approximately 300 sites by partner organizations, such as other government entities. In 2005, Reclamation conducted an assessment to determine which of its recreation sites met REA requirements. Reclamation identified 7 of the 50 sites it directly manages that would qualify to charge standard amenity fees under REA, one of which was New Melones Reservoir in California. New Melones collected about $170,000 in 2004 under LWCFA, which was repealed by REA. 
Reclamation is now using FWPRA as its authority to collect recreation fees at New Melones. Any fees collected under FWPRA are to be deposited into a Department of the Treasury (Treasury) account, unless project-specific legislation provides otherwise. Reclamation has not indicated how many of the 50 sites it directly manages meet REA criteria for charging an expanded amenity fee. After REA passed, the Interagency Implementation Handbook directed agencies to develop written policy guidance that incorporates the standard definitions and overarching policy guidelines established in the handbook. Although agencies reported that they made the transition from Fee Demo to REA without major problems, many units said that some aspects of REA are unclear, and more specific guidance is needed. For example, some unit officials expressed confusion about how to add new fees or modify existing fees, while others expressed confusion about amenity criteria. BLM and FS issued interim guidance documents, and NPS has issued memos and provided training on REA implementation, while FWS has issued no formal guidance to the field. BLM and FS issued interim recreation fee guidelines within months after the passage of REA, and both have since issued additional guidance on different aspects of the law. NPS issued transitional guidelines and memos on various aspects of REA and has provided training on REA implementation. FWS formed a working group with representatives from headquarters and the field to work on various implementation tasks, including drafting guidance and policy on REA. According to an FWS official, interim guidance will be out by the end of fiscal year 2006. Since Reclamation has not yet determined whether the agency will implement REA, the agency has not issued any guidance on the new law. 
While most respondents to our survey indicated that some type of guidance on the fee program is available, many unit and regional officials indicated during interviews that additional guidance is needed. Based on the results of our survey, most units responding indicated that some kind of guidance is available from national headquarters and a regional or state office, with the majority of units indicating that the existing guidance is at least moderately useful on authorized types of fees and passes. For example, 85 percent of BLM, FS, FWS, and NPS units reported that written guidance is available from national headquarters. Most units also indicated that unwritten, unit-specific guidance, staff knowledge, and experience are additional sources of guidance that are generally available to them. However, although the vast majority of survey respondents reported that some kind of written guidance was available, unit officials at the state and regional level, as well as at some of the sites we visited, emphasized that more specific guidance is needed, including detailed policy and procedures for implementing and managing fee programs. For example, as BLM and FS unit staffs have implemented REA, some unit officials have found REA amenity criteria and terminology ambiguous, and some units expressed confusion about how to interpret and apply such criteria as “reasonable security” and “permanent trash receptacle.” Other unit officials at the various agencies said they needed more guidance on how to add new fee sites or modify existing fees. For example, according to an FWS official, the main obstacle to implementing fees at a refuge complex in Nevada has been a lack of policies and procedures, as well as basic guidance, on how to implement a fee program. 
According to FWS officials, such guidance should include examples of implementation plans, information on how to set up accounts, effective ways to share lessons learned among the seven FWS regions, and contact information for other agency officials with fee program experience. We found that some agencies’ units did not have adequate controls for safeguarding and accounting for collected fee revenues. While current federal guidance requires managers to establish and maintain accounting systems that incorporate effective internal controls, we determined that some BLM, FWS, and FS units did not have sufficient guidance—including examples of best practices—to follow for implementing internal controls over collected fee revenues. NPS has also been slow to issue updated guidance on accounting for and controlling collected fee revenues. However, despite this lack of guidance, NPS units we visited appear to have generally implemented effective internal controls. Furthermore, routine audits are an integral part of any system of effective internal controls over agencies’ financial assets. However, less than 37 percent of respondents to our survey indicated their units have been examined by auditors since October 2000. Without effective internal controls, the units cannot provide reasonable assurance that the fee revenues collected are properly controlled and accounted for. Federal internal control standards require management to identify risks that could impair the safeguarding of agency resources, such as fee revenues at the unit level, and suggest that management should formulate an approach for risk management that identifies the internal controls necessary to mitigate those risks. 
A good set of internal controls should incorporate physical control over vulnerable assets—such as cash—with other controls such as segregation of duties, controls over information processing, accurate and timely recording of transactions and events, and access restrictions to and accountability for resources and records. However, cash collection is an area where agencies are particularly vulnerable to the risk of theft. Some locations, such as BLM’s Gunnison Field Office, have such limited staff running their recreation fee program that their program coordinator indicated the appropriate separation of duties, not to mention using procedures such as two staff jointly counting fee receipts, is simply not possible. Unfortunately, this circumstance may not be unusual, especially at smaller units where resource management staffs—generally with little or no accounting or business operations experience—are tasked with implementing the fee program, including cash handling procedures. The staffs at these units face many challenges ranging from the development of safe and secure procedures for gathering and transporting fee envelopes from remote campground sites to assuring that staff with appropriate knowledge and skills are assigned to process and account for collected fees. In addition, survey respondents indicated a myriad of other problems, such as security concerns over the delivery of collected cash fees from their unit to a local bank; local banks not accepting agency procedures for depositing funds; local banks and/or post offices charging fees for issuing the money order or cashier’s check necessary to make deposits in Treasury accounts; employees having to pay bank fees for money orders or cashier’s checks with their own funds and then seek reimbursement from the agency; and the closest local bank sometimes being an inconvenient 30 to 60 miles away from fee-collecting locations. 
According to federal internal control standards, management should strive to remove the temptation for unethical behavior by avoiding the receiving and handling of cash by individual staff without a reasonable means of determining the amount of revenues the employee has received. For example, at the Tonto National Forest Mesa Ranger District, near Phoenix, staff members sometimes collect cash fees directly from visitors when the automated fee machines are broken. Most of the district’s fees are collected by automated machines that are owned and serviced by a contractor. However, one or more of the unit’s automated fee machines are often broken. To avoid a loss of revenues when the machines are not working, the managers designate staff members as collection officers to work at busy entry points to collect fees and direct traffic flow. According to district management, the staff later feed the collected fees into a working automated machine someplace else in the district. However, the managers have not developed physical or other compensating controls over these cash collections (easily amounting to several hundred dollars on a busy day) that would enable managers to verify that all of the fees collected by any given staff member are actually fed into a working machine. In commenting on a draft of this report, FS headquarters officials indicated that they believe that automated fee machines should rarely be broken and they also noted that local officials are responsible for reasonable internal controls over cash collection. In addition, the safety of staff involved in collecting the cash could be jeopardized due to the risk of being targeted for robbery. In another example, at BLM’s Gunnison Field Office, in western Colorado, one or two staff members collect the fee envelopes containing campground fees from a remote self-service fee station and place the envelopes into a bag for transport back to the office. 
At the office, the envelopes are placed in a safe until another employee has an opportunity to open the envelopes, count the cash, and record the fees collected. However, the manager has not developed physical controls over the cash collections and accounting to provide assurance that all of the fee envelopes collected by the first staff member(s) are turned in at the office or that all of the funds counted by the second employee are deposited and accurately documented. Consequently, in both of these examples, the managers were left without reasonable assurance that the revenue each employee collected was received and accounted for by the agency. Most available agency guidance provides overall objectives for establishing and maintaining an effective accounting system. For example, the FS Manual on Accounting states that one of the overall objectives is to “establish and maintain an accounting system that provides: A system for internal control and accountability of funds, property, and other assets from acquisition to disposition.” However, this guidance does not provide the detailed, “cook book” type of instructions most unit-level fee program managers need to successfully implement an effective system of internal controls. In contrast, Yosemite National Park’s written opening procedures provide detailed step-by-step instructions as follows:
Check the accountable stock; verify that the numbers are in sequence.
Make note of any missing passes.
Enter the first and last number of each type of pass on the shift report.
Date and initial the shift report.
According to several unit-level officials we interviewed, agency-level support and training on accounting and control issues is needed to help units develop such detailed procedures for their fee programs. Some field staffs have also requested training opportunities to help them learn how they should manage their fee programs. 
The lack of current, comprehensive written procedures and of fee program training is an obstacle to developing successful internal controls. Due to the numerous comments shared by agency staff about the need for updated guidance, we included questions about this issue in our nationwide survey of BLM, FWS, NPS, and FS units. Of those units that reported receiving some sort of guidance related to controlling and accounting for collected fees, over one-third (277/752) indicated the guidance they received was less than moderately useful. When asked whether staff had been provided training on controlling and accounting for collected funds, over 40 percent indicated they had not received training on this issue. Of the survey respondents who did receive training, over 60 percent indicated the training was less than moderately useful. In commenting on a draft of this report, FS acknowledged the need for revising the Forest Service Manual and indicated it will expedite publication of the handbook and updated procedures as soon as practicable. Although NPS units we visited appear to have implemented reasonable accounting procedures and effective internal controls, the agency has been slow to issue updated guidance on accounting for and controlling collected fee revenues. NPS parks are still following NPS-22, the 1989 NPS policy for fee collection. However, technologies have changed so much since 1989 that the old policy does not even address issues such as electronic processing of credit card payments. The parks have been waiting for years for a new fee collection policy to be issued, and several unit and regional officials stated that the revised policy guidance is needed immediately. NPS management indicated they had developed a draft of the new policy when REA passed in late 2004, making portions of the previous draft obsolete. 
NPS fee program coordinators in the headquarters office said that they recognize that units need and want updated guidance and that, although they are trying hard to get the guidance on recreation fees out as soon as possible, they could not provide an estimated time frame for issuance. Some units’ fee coordinators, such as the coordinators at Rocky Mountain National Park in Colorado and the Shasta Trinity National Forest Shasta Lake Ranger District in California, appear to have a good handle on how to develop and implement sound financial and accounting internal controls. However, many other units lacked both the technical and professional expertise to develop sound procedures without detailed guidance. Since many unit-level staffs have not received detailed agency guidance that would be useful in establishing such procedures on their own, they continue to struggle with these issues and the risks associated with poor internal controls. Many units have not implemented a system of routine audits to help ensure that fees are collected and used as authorized and that collected funds are safeguarded. Only 37 percent of the 752 units responding to this question in our survey reported having their fee collection program examined by an auditor since October 2000. The percentage of units having their fee collection programs examined varies significantly by agency. For example, NPS reported the highest percentage of audits of unit-level fee programs, with about 63 percent of units (110/175) having their control and accounting procedures examined since October 2000. According to an NPS regional fee program coordinator, some NPS regions are aggressive about audits, such as the Intermountain Region, where one staff person is dedicated to conducting audits. Other regions may not have dedicated resources to conduct audits. 
For example, the Northeast Region has only one fee coordinator available to conduct fee program audits, and she does not feel she is justified in going to parks unless unit managers ask her to review how the unit is doing operationally. In the past, the NPS headquarters fee project coordinator reportedly proposed using a portion of the centrally held recreation fees to fund a national audit program, but the proposal was only partially implemented in one region. In commenting on a draft of this report, NPS stated its intention to reconvene a workgroup to develop a National Audit Program. Other agencies reported having many fewer routine audits of their programs: only 14 percent of FWS units, 27 percent of BLM units, and 33 percent of FS units reported having examinations. Some unit officials with whom we spoke either did not believe they had access to internal or external audit resources or rationalized that they did not need to implement an audit program since they had trustworthy staff. A lack of staff resources is also a factor in the limited number of units that have had their recreation fee programs audited during the past 5 years. Routine audits are an important internal control that could allow agency officials to promptly detect unauthorized transactions involving recreation fee revenues and assess the design, implementation, and effectiveness of controls over these assets agencywide. One example that highlights the need for routine audits was at the Tonto National Forest Mesa Ranger District, where officials acknowledged that no audit had been conducted on the contractor who maintains the automated machines and processes the fees collected through the machines. In fact, the district officials said they had seen no reason to request that the contractor, who owns and services the automated fee machines, be audited. 
The contractor collects the fees (cash and credit card payments) directly from the machines and then prepares quarterly reports for the FS unit, stating the amount collected and the amount to be remitted to FS under the contract. Over the life of the contract, FS staff members have verified the amount of the contractor’s remittance against the reported total collected fees to ensure the contractor submitted the correct percentage of the fees under the contract. Unfortunately, by simply relying on this approach, FS officials have no way of independently verifying actual receipts because they have no access to raw data from the automated machines. FS was simply verifying the contractor’s mathematical calculation against what the contractor had self-reported as total fee receipts. In commenting on a draft of this report, FS officials noted that it is FS policy to audit collection officers at least annually but acknowledged that they have not been meeting this goal. To begin addressing this recognized shortfall in its prescribed audit program, FS has assigned a full-time Albuquerque Service Center (ASC) resource to monitor the program nationwide. Also, according to a FWS headquarters official, in fiscal year 2004, FWS implemented a procedure to help target units for visitor service reviews. While REA establishes the basic priority of using recreation fee revenues for enhancing visitors’ experience, each agency has a different process for selecting projects to be funded with fee revenues based on the agency’s needs and revised policies under REA. These different processes affect the types of projects the agencies fund and their time lines for project implementation. Agencies fund a wide variety of priority projects with fee revenues, typically maintenance, operations, and some capital improvements. 
Examples of projects and activities funded with fee revenue include campground renovations within American Fork Canyon at the Uinta National Forest in Utah, interpretive panels at Colonial National Historic Park in Virginia as pictured in figure 1, interpretive staff at BLM’s Red Rock Canyon National Conservation Area in Nevada, and trail work at FWS’s Rocky Mountain Arsenal National Wildlife Refuge in Colorado as pictured in figure 2. Some units also use recreation fee revenues to leverage funds received from other sources, such as grants or donations. REA established limits on the use of recreation fees to focus the expenditures more directly on benefiting the people who visit the unit at which they were collected. For example, REA supports the use of recreation fees to repair, maintain, and enhance facilities related directly to visitor enjoyment, visitor access, and visitor health and safety but restricts the use of recreation fees for biological monitoring under the Endangered Species Act or for employee bonuses. It also limits the use of fee revenues to not more than an average of 15 percent of total revenues for administration, overhead, and indirect costs related to the recreation fee program. Other sanctioned uses of recreation fee revenues include a myriad of things ranging from interpretive signage to law enforcement to certain limited types of habitat restoration. 
Specifically, REA mandates that fee revenues only be used for the following: repair, maintenance, and facility enhancement related directly to visitor enjoyment, visitor access, and health and safety; interpretation, visitor information, visitor service, visitor needs assessments, and signs; habitat restoration directly related to wildlife-dependent recreation that is limited to hunting, fishing, wildlife observation, or photography; law enforcement related to public use and recreation; direct operating or capital costs associated with the recreation fee program; and a fee management agreement or a visitor reservation service. In addition to REA guidance, BLM, NPS, and FS have all issued at least interim guidance on expenditure priorities for projects funded with fee revenues. BLM guidance emphasizes that fee revenues be used to support projects or activities related to recreation and stipulates that a specific percentage of funding be spent in this area. NPS has established deferred maintenance projects as its first priority for recreation fee revenues and stipulates the percentage of funding that should be spent in support of this priority. FS guidance essentially repeats the priorities established in REA. FWS has not issued any interim guidance on expenditure priorities; its draft guidance, which has not been finalized, also repeats the priorities established in REA, similar to FS guidance. Each agency's guidance also stipulates the amount of fee revenues that can be spent for either (1) administration, overhead, and indirect costs or (2) collection costs. Table 3 shows the guidance developed by the agencies for how recreation fee revenues should be spent. While REA establishes the basic priority of using recreation fee revenues for enhancing visitors' experience, each agency has a different process for selecting projects to be funded with fee revenues based on the agency's needs and policies revised under REA. 
These different processes can affect the types of projects agencies fund and their time lines for project implementation. At BLM, FWS, and FS, most proposed projects are approved at the local unit level. Unit staff indicated that most projects funded with fee revenues are usually approved within a couple of days to a few weeks or, in some cases, implemented immediately without unit manager approval. At NPS, however, projects must be reviewed and approved at the unit and regional levels, as well as at the headquarters or department level, before they are funded. In commenting on a draft of this report, NPS noted that its project approval process was put in place by DOI and the Office of Management and Budget and has been articulated in congressional appropriations report language. BLM, FWS, and FS project approvals generally occur at the local unit level. The initial project suggestions are typically generated by local unit staff members who have identified a need that could be filled with fee revenues. In BLM and FWS units, it is generally a field office or refuge manager that approves proposed projects. For example, at BLM's Upper Colorado River unit, ideas for fee projects are suggested, discussed, and agreed upon by unit staff members, and the field office manager has final approval on all recreation fee projects. This was also the case at FWS's Back Bay National Wildlife Refuge, where unit staff members suggest and jointly prioritize fee projects, while the refuge manager has final approval. Similarly, the district ranger of an FS ranger district may decide on projects or, in some cases, the projects are reviewed at a higher level—by the forest supervisor or regional office. At some FS units, a fee board reviews and approves proposed projects. 
For example, at the Shasta-Trinity National Recreation Area within the Shasta-Trinity National Forest in California, any employee may propose a fee project, which must be presented to the recreation area's fee board for approval. Suggestions for projects within NPS are also typically generated by local unit staff, except this is only the first of several steps in an often time-consuming NPS project approval process. NPS project requests are entered into the Project Management Information System (PMIS) by unit staff in advance of regional and NPS headquarters—the Washington Office (WASO)—project call due dates for prioritization by the park management team, with approval at the park level. After the units submit their project proposals, the regional officials review the proposals and generally either approve them or mark them for edits. According to one regional official, a regional reviewer may occasionally reject a proposal if the project does not comply with established criteria or if the requesting unit did not meet its deferred maintenance goal; however, most projects are forwarded to WASO for approval. In commenting on a draft of this report, NPS headquarters informed us that, on average, a Fee Demo project remains at the region or park level for 3 years as the data and information are edited and updated. Projects that do not have accurate and complete data in PMIS are delayed in the approval process at all levels. The project approval process at WASO was put in place by DOI and NPS to improve accountability. This process is managed by the NPS headquarters Park Facilities Management Division to provide review for consistency with established policies. According to a facility management specialist within this division, project approval depends on the dollar amount of the project because NPS's Development Advisory Board, DOI, Congress, and the Office of Management and Budget (OMB) all approve projects over certain dollar amounts. 
For example, the agency’s Development Advisory Board reviews and approves all projects over $500,000, and Congress approves projects over $500,000 and all projects over $100,000 if the money comes from the central fund. Meanwhile, DOI reviews all projects over $100,000, and regional and national projects are approved at the national level. The complexity of the approval process has required parks and regions to be proactive in getting projects into the process early. However, according to NPS officials, it can sometimes take 1 year or more to obtain approval to fund a project under this process. Many agency officials at the unit and regional levels expressed frustration about the length of time it takes to obtain approval for funding NPS projects, and some noted that the approval process has delayed project implementation and/or has contributed to units having unobligated fee revenue balances. For example, one park unit official noted in the survey that the lengthy approval process jeopardizes projects, especially partnership projects that may be time sensitive. However, others noted that the approval process can be expedited in emergency situations to enable project approval within a couple of months. According to some unit officials, part of the reason WASO approvals take so long is that parks’ priorities for fee revenue projects do not always match WASO priorities and, as a result, WASO may question a project’s appropriateness and delay or deny its approval, even if it is consistent with projects allowed by law or under NPS policy. In addition, while WASO officials sometimes contact regional officials to question or offer suggestions on a project that has not yet been approved, WASO will, in other cases, allow projects to remain in the system indefinitely without approval or disapproval, according to another agency official. 
NPS headquarters officials explained that the lack of accurate and complete data in PMIS is the primary reason for projects remaining in the system indefinitely and pointed to mistakes by the units and regions as the cause of this problem. According to a facility management specialist, the agency is implementing a comprehensive plan approach under REA, which should help units and regions better manage their projects through an advance 5-year planning process. According to this official, the regional directors can also approve projects estimated to cost under $500,000, but she retains the authority to review these approved projects and related project data to ensure that funded projects are consistent with REA and to assure accountability. NPS headquarters officials stated that the 5-year plan of projects, which was first instituted in fiscal year 2003, requires parks to be strategic and proactive in submitting projects for approval and to identify their sequential needs for compliance, design, and planning prior to project execution. Recreation fee revenues are used by the agencies to fund a variety of maintenance, operations, visitor services, and some capital improvement projects. The specific types of activities or projects funded with these fees vary by agency. For example, in fiscal year 2005, NPS spent the majority of the fees it collected under REA on various types of maintenance work, mostly focusing on deferred maintenance. Meanwhile, FS units spent about 40 percent of the fees collected under REA on maintenance, which included deferred maintenance, annual maintenance, and capital improvements. For example, recreation fee revenues at the Sequoia National Forest in California funded capital improvements including a new restroom (see fig. 3), paving of a parking lot, and the installation of trash receptacles, picnic tables, and grills at the Big Meadows Winter Trailhead, which is heavily used by snowmobile riders and skiers in the winter. 
While BLM and FWS also funded some maintenance work, they spent a large portion of their revenues on visitor services. BLM spent about 33 percent of its fee revenues on visitor services, such as increased seasonal staff to complete trail work and other projects and to help monitor and teach river safety along the Merced River. FWS also devoted a large share of its resources to providing and enhancing visitor services, spending almost 44 percent of the total fees it collected under REA in this area. For example, at Chincoteague National Wildlife Refuge in Virginia and Maryland, REA fee revenues have funded visitor services such as the design, development, and installation of interpretive exhibits along four separate trails. Some units are quite creative in their use of recreation fee revenues to fund fee projects. For example, agency officials at the Shasta-Trinity National Forest in northern California use recreation fee revenues to purchase materials to make "pack-out bags" that are given to mountain climbers to facilitate the removal of human waste from Mount Shasta. The bags help with resource protection because climbers are able to remove their waste using the bags rather than leaving it on the mountain, as was done before the inception of the program. Also at the Shasta-Trinity National Forest, recreation fees funded the lake directional signage on Shasta and Trinity Lakes pictured in figure 4. The lakes are quite large—Shasta Lake has about 420 miles of shoreline—so the signs improve visitor services by helping direct boaters to various locations on the lakes. At Rocky Mountain National Park in Colorado, recreation fee revenues have been used to fund campsite improvements, including new tent pads, fire rings, and picnic tables, as can be seen in figure 5. These improvements enhanced visitor services by improving the level of amenities while also protecting natural resources by containing visitor impacts. 
Recreation fee revenues at NPS's Whiskeytown National Recreation Area in northern California were used to construct the universally accessible fishing piers pictured in figure 6, which have improved visitor services and are heavily used, according to the park superintendent. Many units within various agencies have used recreation fee revenues to purchase and install improved restroom facilities, such as the one pictured earlier in figure 3 at Sequoia National Forest. Such restrooms improve visitor services while also enhancing resource protection, according to Sequoia's assistant recreation fee coordinator. Many units, especially within BLM and FS, use fee revenue for daily site maintenance and operations and, while these activities may not be as visible as capital improvement projects such as new restrooms, officials noted that they still provide valuable services to visitors. For example, at Desolation Canyon in Utah, which is managed by BLM's Price Field Office, the main source of recreation fee revenue is rafting permits. The revenues are primarily used to fund ranger staff who fulfill multiple roles, including inspecting rafters' equipment and permits, patrolling the waters, providing interpretive information to rafters, and maintaining the launch and take-out sites along the Green River. Another example of a unit that funds operations and maintenance activities with fee revenues is Blackwater National Wildlife Refuge in Maryland, where recreation fees fund restroom maintenance, including toilet pumping and supplies. At most units, a portion of fee revenues is also used to cover other operations, such as the cost of collecting fees. Finally, some units use recreation fee revenues to leverage funds received from other sources, such as grants or donations. For example, the Klamath Falls National Wildlife Refuge Complex on the California-Oregon border worked with a birding group to construct the universally accessible photo blind pictured in figure 7. 
The birding group provided funds to construct the handicap-accessible pathway leading to the blind, while FWS leveled the ground for the pathway and purchased materials to construct the photo blind with fee revenues. Another example is NPS's Antietam National Battlefield, where recreation fee revenues were leveraged with other funds to restore a 106-year-old monument located at the unit (see fig. 8). The total cost of the project was $300,000—the unit's largest fee project to date—with $255,000 of the project cost funded by recreation fee revenues and the remaining $45,000 leveraged from other sources, including a $31,000 donation from the state of Maryland, funds from the "Adopt-a-Monument Program," and donations from a local newspaper. Recreation fees have also been used to leverage grant funding at BLM's Gunnison Field Office in Colorado, which received about $100,000 in grants in 2006. The interpretive panels pictured in figure 9 at American Basin, managed by the Gunnison Field Office, were partially funded with recreation fees. The collection and distribution of central and/or regional funds varies by agency and sometimes by region. Three of the participating agencies—NPS, FWS, and FS—have central or regional funds where a portion of fee revenues is deposited, as shown in table 4. The projects and activities funded with central or regional funds vary by agency and, in some cases, by region, but generally the central and regional funds are distributed among the units based on project proposals or are used to cover the administrative costs of the recreation fee program. For example, FWS Region 2, which has a 20 percent regional fund, uses a portion of its regional funds to cover administrative charges and distributes the remaining funds to refuges within the region based on submitted project proposals. 
Similarly, FS Region 5 uses a large portion of its 5 percent regional fund to cover fee program management costs and special project expenditures, such as the RRAC start-up costs, and distributes a portion of the regional funds back to the units in the form of resource and internship grants. Within FWS and FS, the distribution of regional funds is generally determined at the regional level. At NPS, project proposals must be reviewed and approved at both the regional and WASO levels before central funds are distributed to the units. The four agencies collecting recreation fees under REA had accumulated unobligated balances of nearly $300 million at the end of fiscal year 2005. These balances have accrued for several reasons, including units' plans to undertake large projects that require them to have all needed funds available before initiating the project, the need to carry over funds for the next season's operations, and the lack of adequate staffing to administer and implement projects in a more timely fashion. Many agency sources maintain that recreation fees are intended to supplement, not replace, funds from other appropriations, such as construction and operations. Despite this, the majority of officials at the units we surveyed indicated they believed to a moderate, great, or very great extent that recreation fee revenues are being used to fund projects formerly funded with other appropriations at their unit. In addition, the majority of agency officials told us they believe that they may need to replace appropriations with recreation fee revenues in the future. However, in commenting on a draft of this report, FS and DOI noted that historically, fee revenues have not replaced appropriations and there is no reason to expect this to change in the future. 
According to the agencies' recent report to Congress, BLM, FWS, FS, and NPS reported a total unobligated balance of $295.8 million at the end of fiscal year 2005, or 61 percent of the $483.8 million available for obligation (total fee revenues collected plus unobligated balances and recoveries). In response to our survey, 75 percent of fee-collecting units in NPS, BLM, and FWS reported unobligated balances at the end of fiscal year 2005. Furthermore, 93 percent (107 of 115) of FS's national forests reported unobligated balances. FS headquarters reported unobligated balances at the forest level, and the balances were not available for individual units (ranger districts) because of changes in the agency's accounting system. The fiscal year 2005 revenue, unobligated balance and recoveries, funds obligated, and unobligated balances reported by the four agencies are provided in table 5 below. A 5-year history of the agencies' recent revenue and obligations is provided in appendix IV. Typically, units collecting recreation fees had an unobligated balance of these funds in their accounts at the end of fiscal year 2005 because not all funds collected during a fiscal year are spent during that fiscal year. According to the NPS facilities management specialist, the majority of revenues, especially at large western park units, are typically collected during the last 3 months of the fiscal year and, therefore, are unlikely to be obligated that same year. We also found that at the end of fiscal year 2005, unobligated balances for many of the units or forests exceeded the revenues collected that year. For example, on the basis of our survey responses, 114 (42 percent) of 270 BLM, NPS, and FWS units and 63 of 107 FS forests with unobligated balances had balances that were greater than 100 percent of the total fee revenue they reported for fiscal year 2005. 
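The balance figures above follow from simple arithmetic. The sketch below reproduces the calculation using the agency-wide totals from the report to Congress; the per-unit figures are illustrative placeholders, not values from the report:

```python
# Share of funds available for obligation that went unobligated in
# fiscal year 2005 (agency-wide totals, in millions of dollars).
available = 483.8    # fee revenues collected plus unobligated balances and recoveries
unobligated = 295.8

share = unobligated / available
print(f"Unobligated share: {share:.0%}")   # Unobligated share: 61%

# Flagging units whose year-end balance exceeds 100 percent of the fee
# revenue collected that year (unit names and figures are illustrative).
units = {"Unit A": (15.0, 36.7), "Unit B": (0.5, 0.3)}  # (revenue, balance)
for name, (revenue, balance) in units.items():
    if balance > revenue:
        print(f"{name}: balance is {balance / revenue:.0%} of annual revenue")
```

The same ratio of year-end balance to annual revenue underlies the "greater than 100 percent" comparisons reported for individual units and forests.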
Table 6 shows, for each of the four agencies, the number of units or forests with unobligated balances and those with unobligated balances that exceeded the annual revenues collected by 100 percent or more. Also, on the basis of information provided by units responding to our survey and information provided on national forests, the top 10 units with the largest unobligated balances at the end of fiscal year 2005 were all in NPS. Table 7 lists the 10 units with the highest unobligated balances compared with their fiscal year 2005 fee revenues. Appendix V provides a listing of the top 10 units with the largest unobligated balances in each of the four agencies. REA provides a mechanism for units to reduce their unobligated balances. As part of the new REA authority for the recreation fee program, Congress included a provision that allows the Secretary of the Interior or the Secretary of Agriculture to reduce the percentage allocation of the recreation fees and site-specific pass revenues to a unit from 80 percent to 60 percent for a fiscal year. This authority can be exercised if the Secretary determines that the revenues collected at the unit or area exceed the reasonable needs that may be addressed during a fiscal year. As part of the interagency guidance developed for the implementation of REA, the Secretaries have agreed to delegate to the individual agencies the authority to develop and implement policy for this provision, including identifying the metrics and benchmarks required to determine when a unit's revenue retention may be reduced and devising a method for distributing the remaining funds. To date, none of the agencies has completed the process of establishing final criteria for implementing this provision, although it is reportedly under discussion in NPS. Recreation fee-collecting units reporting an unobligated balance cited a variety of reasons why all available funds were not obligated. 
To a moderate, great, or very great extent, units cited the following as the most common reasons for their unobligated balances: (1) saving funds to ensure they had sufficient funds to pay for large projects, (2) saving funds needed for the following season's operations, (3) lack of personnel to administer and implement projects on a more timely basis, and (4) completing environmental compliance or analysis. Table 8 provides a complete list of reasons cited for the unobligated balances, overall and by agency, and the percentage of units citing each reason to a moderate, great, or very great extent. The following examples highlight some of the reasons for unobligated balances at specific units. Officials at Yosemite National Park, the unit with the highest unobligated balance at about $36.7 million, or 245 percent of its annual revenue, cited two primary reasons for the balance: legal actions that must be resolved before spending can proceed on certain projects and the lack of personnel to manage, oversee, and implement the projects planned for these funds. Park officials said that unobligated funds accumulated in the early years of the Fee Demo program, when obligations were lower relative to collections. Obligations have since increased as major projects have passed the planning and design phase. Another factor in the amount of obligations for projects funded with recreation fees was that the same Yosemite staff concurrently managed the 1997 flood recovery work funded by an appropriation. The flood recovery work occupied the same project managers who manage recreation fee-funded projects, thereby reducing the amount of work and obligations under that program. A major part of the fee revenues is planned for utility projects that are under way, including replacing sewers and reconstructing other utilities, the staff said. 
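Yosemite's figures above also imply the park's annual fee revenue: dividing the reported balance by the reported balance-to-revenue percentage recovers the yearly collections. A quick check, using only the figures from the text:

```python
# Annual fee revenue implied by Yosemite's reported unobligated balance
# ($36.7 million) and its balance-to-revenue ratio (245 percent).
balance_millions = 36.7
percent_of_revenue = 245

revenue_millions = balance_millions / (percent_of_revenue / 100)
print(f"implied annual revenue: about ${revenue_millions:.1f} million")
# implied annual revenue: about $15.0 million
```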
Officials at Grand Canyon National Park, which also had an unobligated balance of about $36.7 million, or 184 percent of its annual revenue, stated that the primary reasons for its balance, which accumulated over at least 3 years, were the need to save funds for large projects and the lead time needed to complete design and engineering work, which delayed the actual expenditure of most funds allocated for a particular project. Park staff reported plans to use the unobligated balance primarily for an alternative transportation system for park visitors, involving parking area and road construction and upgrading the current shuttle bus system. These improvements are expected to cost approximately $47 million and take 9 years to complete in phases, using unobligated funds already accumulated as well as a portion of future fee revenues. Officials at BLM's Coos Bay District Office in North Bend, Oregon, cited saving funds for large projects, needing funds for the next season's operations, and using other appropriated dollars before fee revenues as the primary reasons for its unobligated balance. Coos Bay's unobligated balance was about $320,000 at the end of fiscal year 2005, or about 202 percent of its fee revenues. Officials at Crab Orchard National Wildlife Refuge in Marion, Illinois, reported an unobligated balance of about $645,000 at the end of fiscal year 2005, which was 184 percent of its fee revenue. Refuge officials cited needing to save funds for a large project, completing design and engineering work, and needing funds for the next season's operations as the primary reasons for the unobligated balance. Shasta-Trinity National Forest in northern California had an unobligated balance of about $2.8 million in fiscal year 2005, which was 246 percent of its fee revenue and the largest reported for a national forest. 
Forest staff cited the need to save these funds to cover programs and services during the next year that were previously funded with fee revenue from the marina area. Under REA, the unit is no longer authorized to keep the approximately $900,000 in annual marina revenues that it collected under the Fee Demo program from marina operations. Staff indicated that the unobligated balance will be used to continue a number of marina area programs that began under Fee Demo, including a fish rearing program, boat patrols, floating toilets, illegal dump cleanups, a boating safety program, and interpretive programs. During our site visits and in response to our survey, recreation fee-collecting units also provided many examples in which recreation fee revenues were used in conjunction with other general appropriated funds, donations, or other revenues to complete projects within their units. According to survey responses from units in the four agencies, 58 percent of the units indicated that they believed to a moderate, great, or very great extent that recreation fee revenues are being used to fund projects formerly funded with other general appropriations at their unit, such as the construction account. The percentage of units within each agency that expressed this opinion varied from a high of 65 percent in FS to a low of 46 percent in FWS. In addition, about 64 percent of the units believed to a moderate, great, or very great extent that, over the next 5 years, fee revenues will be used to fund projects that would have been funded with other general appropriated dollars. The portion of respondents in each agency believing this was 74 percent in BLM, 67 percent in FS, 57 percent in FWS, and 58 percent in NPS. In contrast to the opinions of unit-level officials, FS and DOI comments on a draft of this report stated that fee revenues have not historically replaced appropriations and that there is no reason to expect this to change in the future. 
We identified a number of NPS projects similar to those funded by other general appropriations, such as items typically funded by the construction appropriations account, that are being or have been funded wholly or in part by recreation fee revenues. For example, the fiscal year 2006 construction appropriation for NPS includes $11.8 million for a conversion to narrowband radios to ensure rapid response to emergency and life-threatening situations. NPS stated in its fiscal year 2007 budget justification that it was proposing to reduce funding for the narrowband radio system program in order to fulfill higher priority needs in other areas. NPS added that, to minimize the delay in achieving full conversion to narrowband radio equipment, those systems that are to be converted after fiscal year 2005 will be funded through construction appropriations and augmented, as necessary, by other NPS fund sources, such as recreation fee revenues. In response to our survey or during our site visits, many NPS units reported completed, planned, or ongoing expenditures of recreation fee funds for the narrowband radio upgrade, including the following: Yosemite National Park, $3.4 million; Grand Canyon National Park, $3.0 million; Lake Mead National Recreation Area, $1.0 million; Gateway National Recreation Area, $1.7 million; Sequoia-Kings Canyon National Park, $0.9 million; Acadia National Park, $0.7 million; Olympic National Park, $0.7 million; Channel Islands National Park, $0.7 million; Great Smoky Mountains National Park, $0.6 million; and Glacier National Park, $0.6 million. NPS officials said the decision to fund the radio upgrade with fee revenues was made because of concern that construction appropriations would not be enough to fund the new system. Many NPS units listed other projects funded wholly or in part by recreation fee revenues that are similar to those previously funded by general appropriations, such as the construction appropriations account. 
See table 9 for a list of examples. In addition, many of the unit staff we visited or who commented on our survey stated that recreation fee revenues are essential to providing services at their recreation areas that would not otherwise be funded. The following is a sampling of such comments from units in each agency: “The recreation fee program has been a great asset to the overall recreation program. Without these dollars coming back into the system to help augment other appropriation dollars, BLM could not continue with current standards for existing facilities, developing new facilities, providing proper monitoring of special recreation permits, or to provide the public with service they need and deserve.” “Unfortunately, our recreation fee funds collected have become the primary source of revenue for our (unit). This was not the original intent of the fee demo program but with shrinking budgets it has become our main funding source.” “In this time of declining budgets and increasing use of national forests as the Baby Boomer generation retires, a loss of REA funds would be devastating to our ability to provide recreation opportunities.” “Our unit has become very much dependent on REA funds to provide basic care and maintenance activities of our developed facilities. These include the high costs of solid waste disposal; toilet pumping and disposal; and maintaining a seasonal workforce to meet standards and guidelines for recreation management.” “Funding for projects via the recreation fee program has enabled the park to make modest improvements in visitor facilities and services. Without the recreation fee program, very little of work that has been done would have been done.” “Recreational fee revenues allow us to accomplish projects which wouldn’t have been accomplished with other (general) appropriated funds. 
While some of the more urgent projects might have been accomplished with other (general) appropriations, fee dollars enable us to accomplish much more." "Most public use activities and projects would not be conducted if we did not have funds from a recreation fee program." "The recreation fee program has provided additional revenue to support visitor needs and enhance the visitor experience. Without these funds, we could not provide visitors with a high quality of visitor service." REA was designed to mitigate past problems with the recreation fee demonstration program, such as the multiple passes that caused visitor confusion; provide a more sustainable long-term authority to support effective planning and management of fee programs; encourage increased public participation; protect recreational resources; and provide the public with quality visitor services. In addition, REA authorized a new multiagency recreation pass to help relieve visitor confusion associated with having to use multiple passes to access and enjoy federal recreation sites. REA was enacted almost 2 years ago, and our early assessment of the participating agencies' implementation of the act indicates that they are making progress. Still, there are areas in need of management attention. Two key working groups established to facilitate REA implementation have yet to take important steps to carry out the act, such as completing the tasks necessary for the RRAC requirements to be fully implemented, which would enhance public participation. Also, our analysis indicates that some DOI agency and FS units are struggling with how to interpret certain aspects of the agencies' interim guidance for implementing the act, which has caused confusion regarding the types and amount of fees to collect. Furthermore, unit officials need guidance on facilitating public participation and on ensuring that projects funded with REA fees are connected to the visitor experience. 
Unless actions are taken to issue final regulations and implementation guidance for the fee program, including detailed policy and procedure guidance, many unit officials will continue to struggle with how to effectively and consistently implement the recreation fee program. The measures the agencies have in place to control and account for collected fee revenues are another area that needs attention. While the results of our analysis cannot be projected to all fee-collecting sites, we noted weaknesses in the controls over fee collections at some BLM, FWS, and FS sites that warrant attention because they not only affect the accounting for the collected revenues, but they may also affect the safety of the individuals involved in the collection efforts. Although millions of dollars are collected annually through REA, some agencies have not provided the adequate guidance or the routine audits that units need to ensure that they develop and maintain proper controls over their fee revenues and provide reasonable physical protection for their staff. Although Congress intended all five federal land management agencies to implement REA, Reclamation has not determined whether it will implement the act. Unlike the other participating agencies, Reclamation operates most of its recreation sites through partnerships that collect fees to support the costs of administering the recreation programs they provide. Reclamation has determined that its recreation areas managed by nonfederal partners will not be participating in REA and thus will not accept the new multiagency pass. Further, the federal managing partners will be allowed to decide on their own how REA affects the recreation areas located on Reclamation lands that they manage. Reclamation has not yet decided what actions to take with regard to those units managed by Reclamation that it identified as meeting the REA criteria for charging recreation fees.
To allow for public input on new fees or modifications to existing fees, we recommend that the Secretaries of the Interior and Agriculture expedite completing the steps needed for the RRACs and existing advisory councils to begin implementing REA. In order to improve agencies’ implementation of the Federal Lands Recreation Enhancement Act and improve the accountability and controls for recreation fee collection, we recommend that the Secretary of the Interior direct the Director, National Park Service; Director, Bureau of Land Management; and Director, Fish and Wildlife Service to promptly issue final regulations and implementation guidance on the fee program, including detailed policy and procedure guidance; and Director, Bureau of Land Management and Director, Fish and Wildlife Service to ascertain the extent to which their units do not have effective processes and procedures for accounting for and controlling collected fees and develop guidance for implementing appropriate and effective internal controls over cash management. This guidance for implementing such controls should identify and encourage the use of best practices, such as routine audits. We recommend that the Secretary of the Interior direct the Commissioner of the Bureau of Reclamation to expedite its decision on implementation of REA. In order to improve the Forest Service’s implementation of the Federal Lands Recreation Enhancement Act and improve the accountability and controls for collected recreation fees, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following two actions: promptly issue final regulations and implementing guidance on the fee program, including detailed policy and procedure guidance; and ascertain the extent to which its units do not have effective processes and procedures for accounting for and controlling collected fees and develop guidance for implementing appropriate and effective internal controls over cash management.
This guidance for implementing such controls should identify and encourage the use of best practices, such as routine audits. We provided the Departments of the Interior and Agriculture with a draft of this report for review and comment. Their written comments are provided in appendixes VI and VII, respectively. DOI generally agreed with our findings and recommendations. It said that our recommendations further REA implementation efforts and that it was dedicated to addressing them promptly. Specifically, with regard to issuing final regulations and implementation guidance for the new interagency pass, the department said that, while guidelines had not been formally completed, most of the policy decisions composing the guidelines have been made and discussed in congressional testimony. Although this may be the case, the results of our survey and site visits indicated that those who are to implement REA in each of DOI’s agencies need clarifying guidance, particularly with regard to adding new fee sites or modifying existing fees to fully implement the act, which will also help to ensure consistency in applying the requirements of REA. We also recommended that DOI direct BLM and FWS to ascertain the extent to which their units do not have effective processes and procedures for accounting for and controlling collected fees and to develop effective guidance and internal controls over cash management, such as routine audits. Although this recommendation was not directed at the Park Service, the department’s comments state that the Park Service intends to recommit to a National Audit Program. It said that such a program has been delayed due to other program priorities and a lack of staff resources. However, it said that the Park Service has a working group that is being reconvened to restart the process of developing a National Audit Program and that, once additional resources are in place, it will be possible to implement a more standardized program.
On the basis of our visits to eight sites, we observed practices for controlling and accounting for fee revenues that appeared to be working well at these locations in the Park Service. Nevertheless, we are encouraged by the additional actions that the Park Service plans to take to improve its processes in this area, and any lessons learned from this effort may also benefit BLM and FWS. With regard to our recommendation that the department direct Reclamation to expedite its decision on implementing REA, the department provided comments from the bureau stating that it had identified only seven sites that currently meet the statutory criteria for charging standard amenity fees under REA. Given this fact and the likely costs of implementing REA for the agency, it said that there is a strong possibility that Reclamation would require all recreation sites meeting the criteria to participate in REA. However, as recognized in our report, Reclamation should decide this issue soon so that its units can begin taking the needed steps to implement REA. The Department of Agriculture did not specifically state its agreement or disagreement with our recommendations. However, it outlined actions planned or under way to address them. Specifically, it acknowledged the Forest Service’s need to revise several policies that relate to REA and collections in general. It said that the Forest Service had already initiated policy revisions for its manuals and handbooks, which it plans to produce by September 2007. It also said that the Forest Service is in the process of revising its policies on billings and cash collections, which it will expedite for publication as soon as practicable. Both DOI and the Department of Agriculture provided other comments for updating information in the report or for providing technical clarifications, which we have incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 2 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior, the Secretary of Agriculture, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. Based on the congressional request letter of May 2005 and subsequent discussions with your staffs, we agreed to determine (1) what agencies have done to coordinate the implementation of the Federal Lands Recreation Enhancement Act (REA), including preparing for the new interagency federal lands pass; (2) what agencies have done to implement the REA fee and amenity requirements and sufficiency of guidance for REA implementation; (3) the extent to which the agencies have control and accounting procedures for collected recreation fee revenues; (4) how participating agencies prioritize and approve activities and projects funded with fee revenues; and (5) the extent to which units have unobligated fund balances and if recreational fees are being used to fund projects formerly funded with other appropriations. In addition, we are providing information on how recreation fees vary by type, amount, and level of amenities offered at units with similar recreational opportunities across and within agencies participating in REA. 
To address the objectives, we obtained and reviewed applicable laws; regulations; agencywide policies and procedures; regional policies and procedures; and the fees collected at selected units under the Fee Demonstration Program and REA in order to determine what changes have resulted since the implementation of REA. We developed and administered a nationwide survey to agency officials responsible for fee programs under REA. We supplemented the survey information with records reviews, analyses of documents, and testimonial evidence gathered during unit visits and in meetings with state, regional, and headquarters officials. To obtain information on all of our objectives related to the implementation of REA and the collection and expenditure of recreation fee revenues, we designed and administered a national survey of units collecting these fees. We worked with social science survey specialists to develop the survey instrument, which we administered to staff at National Park Service (NPS) units, Forest Service (FS) ranger districts, Bureau of Land Management (BLM) field offices, and Fish and Wildlife Service (FWS) refuges. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database can introduce unwanted variability into the survey results. We took steps in the development of the surveys, the data collection, and the data analysis to minimize these nonsampling errors.
For example, prior to administering the survey, we pretested the content and format of the surveys with several site officials at each agency to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. In addition, we provided a draft of the survey to the national fee program coordinators at the four agencies and met with them to obtain comments and corrections to the wording and structure of the questions in the survey. We made changes to the content and format of the final questions based on pretest results. We verified some financial information from a random sample of 25 non-FS units by asking these respondents to check the answers they originally provided to four questions and to verify the reported dollar amounts or provide corrections. Our analysis showed that a significant number of units reported they had made errors in providing the original survey data; however, total dollars reported after correction did not differ significantly from the dollars reported originally. The revised sum of the total fees collected differed from the original by less than plus or minus 2 percent and, for unobligated balances, by less than plus or minus 4 percent. We also checked a sample of cases to ensure the accuracy of data entry and made corrections as needed. We performed computer analyses to identify inconsistencies in responses and other indications of error. We contacted survey respondents, as needed, to correct errors and verify responses. In addition, a second independent analyst verified that the computer programs used to analyze the data were written correctly. It is our opinion that the data we present are valid and reliable for the purposes of this report.
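The verification check described above compares the sum of the originally reported dollar amounts with the sum of the corrected amounts and computes the percent difference of the totals. The following sketch illustrates that arithmetic; the dollar figures and the function name are hypothetical examples for illustration only, not the actual survey data.

```python
# Illustrative sketch of the total-dollar verification check: sum the
# originally reported amounts and the corrected amounts, then compute
# the percent difference of the two totals. All dollar figures below
# are hypothetical examples, not the survey data.

def percent_difference(original, corrected):
    """Percent change of the corrected total relative to the original total."""
    original_total = sum(original)
    corrected_total = sum(corrected)
    return (corrected_total - original_total) / original_total * 100

# Hypothetical fee amounts (in dollars) reported by four units.
reported = [120_000, 45_500, 310_000, 78_250]
verified = [118_000, 45_500, 315_000, 80_000]

diff = percent_difference(reported, verified)
# Treat the totals as consistent if the corrected sum is within
# plus or minus 2 percent of the original sum (the threshold used
# here mirrors the 2 percent figure reported for total fees).
within_tolerance = abs(diff) <= 2.0
print(f"Percent difference of totals: {diff:.2f}%")
```

The same comparison applied to the unobligated balance figures would be checked against the 4 percent threshold noted above.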
To identify the current fee-collecting units to complete the survey, we asked the national fee program coordinators at NPS, FS, BLM, and FWS to provide a full list of these units, including phone numbers and e-mail addresses. The survey was designed to be distributed as a locked MS Word document attached to a transmittal e-mail, allowing the document to be saved on unit computers, completed with answers, and returned as an e-mail attachment. Respondents were instructed to complete one survey per unit to reflect all recreation fee activities managed as a single unit. Based on phone calls from respondents and returned surveys, we determined that some surveys included responses for more than one unit on the list provided by the agency. In other cases, surveys were returned for units that were not on the list. In either case, we contacted the unit to determine its status. If we found that the management of a fee program extended across the boundaries of more than one unit, and its fee collection and spending were combined, with funds commingled and project priorities jointly determined, then we accepted a single survey for more than one unit. Because some units were combined, and because others indicated that they were not collecting fees, contrary to the lists provided by agency headquarters, the number of units in the universe for this survey declined. Table 10 provides the estimated response rates by agency and overall. The number of units identified, 904, reflects the total number of units remaining in the universe after adjustments, when units reported that they had been administratively merged with another unit or, in a few cases, when they were not included on the list provided by the agencies. The units identified by the agencies included units that indicated that they did not collect fees under REA. These respondents were not intended to have been included in the universe and, therefore, were dropped from the analysis of responses.
Once the surveys were received, logged in, and printed, they were checked for completeness and logic, and the responses were then coded into a database for summarization and analysis. To assess the accounting and control procedures in place at various fee-collecting units, we conducted unit visits to a sample of unit locations where we collected documents, observed accounting and control practices, and interviewed staff. Information that we gathered during our site visits and during our interviews represents only the conditions present in the units at the time of our review. We cannot comment on any changes that may have occurred after our fieldwork was completed. Furthermore, our fieldwork focused on in-depth analysis of only a few selected units. Based on our interviews, we cannot generalize our findings beyond the units and officials we contacted. As of 2005, four of the five agencies were actually collecting recreation fees under REA—the same four that had previously been authorized to collect fees under the Fee Demo program. The amounts of recreation fee collections varied substantially among the four agencies. For example, NPS’s top fee-collecting unit, Grand Canyon National Park, collected $15,773,239 in Fee Demo revenue in fiscal year 2003, while FWS’s top fee-collecting unit, Chincoteague National Wildlife Refuge, collected $658,497 in the same fiscal year, and there were only seven units within the entire FWS agency that collected over $100,000 in recreation fee revenue. Therefore, in order to ensure that we visited fee units of varying sizes within each of the four agencies, we created different small, medium, and large fee categories for each agency. These categories were identified by sorting the fee-collecting units within each agency from highest to lowest in fee revenue for fiscal year 2003.
After sorting the data by fee revenue, we analyzed the data to see where natural breaks for small, medium, and large units fell, in order to determine the categories for each agency. The resulting categories are shown in table 11. Our original plan was to visit at least three large units, two medium units, and one small unit within each agency. In addition, to address congressional concerns about large unobligated carryover balances, we planned to visit at least two more units with very large carryover balances. We also recognized the importance of visiting units in several different geographic areas to document possible differences in the implementation of the fee programs within different states or regions. We completed this original methodological plan with one exception; we only visited two large FWS units. However, given the relatively small size of even the largest of FWS’ fee-collecting units, we believe that our review had already sufficiently covered their program. Table 12 identifies the recreation fee units we visited. Finally, we spoke with headquarters officials at all five agencies to obtain their views on the implementation of REA, their plans for future monitoring and assessment activities, the status of the new interagency federal lands pass, and their opinions on the future impact of REA fees on their agency’s appropriations. We conducted our work between June 2005 and August 2006 in accordance with generally accepted government auditing standards. This appendix provides information on how the fees charged and the amenities provided for use of recreational units across the country vary by the activity offered, the provisions of the Federal Lands Recreation Enhancement Act (REA) and the agency offering them. For example, under REA, units of the National Park Service (NPS) and the Fish and Wildlife Service (FWS) are authorized to charge entrance fees for accessing the lands they manage. 
REA does not specify minimum amenity requirements for entrance fees. Bureau of Land Management (BLM) and Forest Service (FS) units, on the other hand, are authorized to charge standard amenity fees, not entrance fees. Unlike for entrance fees, REA specifies the minimum amenities that recreation sites must provide to charge this fee. Of the 271 NPS and FWS units responding to our survey, 168, or 62 percent, had entrance fees. Of the 168 units with entrance fees, 137, or 82 percent, were NPS units, and 31, or 18 percent, were FWS units. In NPS, the entrance fees ranged from a low of $1 per person to a high of $300 per bus or group, while FWS units reported fees ranging from a low of $1 per person to a high of $50 per bus. Entrance fees are typically charged per visit on a per vehicle, per person, per group, or per commercial vehicle basis, as well as on an annual basis. Table 13 shows the number of units that reported charging an entrance fee and the minimum and maximum fees charged for the various entrance categories. Standard amenity fees were authorized by REA to be charged for federal recreational lands and waters under the jurisdiction of BLM, Reclamation, or FS. As mentioned earlier in this report, Reclamation has not implemented REA and, therefore, is not included in these results. The law sets criteria for establishing standard amenity fees: the area where a fee is charged must offer significant outdoor recreation opportunities, reflect a substantial federal investment, allow efficient collection of fees, and provide the following amenities: designated developed parking; a permanent toilet facility; a permanent trash receptacle; an interpretive sign, exhibit, or kiosk; picnic tables; and security services. Of the 472 survey responses from BLM and FS units, 38 of 85 BLM units (45 percent) and 157 of 387 FS units (41 percent) reported having standard amenity fees.
BLM’s units responding to the survey had standard amenity fees ranging from a low of $1 to a high of $10 for each person and from $2 to $10 per vehicle. FS’s units reported standard amenity fees ranging from a low of $0.50 per person to a high of $7.50 per person and per vehicle standard amenity fees that ranged from $1 to $50. Table 14 outlines the number and types of standard amenity fees charged and the range of fees of each category reported. Our survey identified 195 BLM and FS units that reported charging a standard amenity fee for recreation use in their units. In addition to the six amenities required under REA to charge a standard amenity fee, many of the units reported providing various other amenities for the visiting public. Table 15 shows the various amenities provided at the 195 BLM and FS units, for either the minimum or maximum (if any) standard amenity fee, including the amenities required under REA for the unit to charge a standard amenity fee. Our survey also identified that 52 of the 195 units that charge standard amenity fees had more than one standard amenity fee. For example, one recreation site at a unit could offer such amenities as attendant fee collection in addition to the amenities required by REA and charge a fee of $3 per person. Another recreation site at the same unit could offer these same amenities but charge a higher fee amounting to $5 per person because it also offers additional amenities such as picnic shelters and drinking water. Of the 52 units with more than one standard amenity fee, the five most common additional amenities offered for the higher fee were picnic shelters, drinking water, shower or bath house, fire ring or grill, and a permanent trash receptacle. It should not be implied that the higher fees are solely due to these added amenities. 
However, according to our survey results, the units responding indicated that the level of amenities offered was one of the most influential factors in determining the types and amounts of fees charged. Other factors that had a significant influence on these fees were professional judgment, fees at comparable sites, and agency policy. We also collected information on the various types of activities, amenities, or services for which units charge a fee, other than entrance fees. These could be standard or expanded amenity fees or special recreation permit fees authorized by REA. The most common activities, amenities, or services for which a fee is charged are camping, outfitter or guide services, day use, Christmas tree cutting, and cabin rentals. Table 16 shows the number of units charging a fee under REA for the various types of activities, amenities, or services provided. To determine the extent to which similar fees are charged for similar activities or services, we asked units for further details on the specific fees charged for a few of the common activities or services at recreation units: camping, motor boating, and access to a body of water for rafting, canoeing, or kayaking. Specifically, we asked the units to identify a minimum and maximum fee for the activity or service, as well as the amenities provided for the fee charged. To illustrate, a campsite at a unit may charge $5 per night per individual for camping and for that fee provide only a site to put up a tent, whereas another campsite at this unit may charge $10 per night per individual and provide a site, shower facilities, drinking water, and electrical service. Our analysis of responses from NPS, BLM, FS, and FWS units indicated that there was a wide range of fees charged for these common activities or services, and a variety of amenities were available at the locations where these fees were charged. Units may have more than one campsite available for recreation and charge fees for their use.
Our survey asked each unit to identify the fees and amenities for its lowest priced campsite and for its highest priced campsite. The fees charged for a campsite in BLM, FS, FWS, and NPS units ranged from a low of $2 in BLM and FS to a high of $225 in FS. This range includes both individual and group campsites. Table 17 shows the number of units offering camping for a fee, their median fees, and the range of fees charged for the lowest and highest priced campsites. Camping for a fee is offered in 55 percent of the units responding to our survey. FS had the greatest percentage of units offering camping for a fee, 71 percent of units responding. FWS had the lowest, with only 2 percent of responding units offering camping for a fee. We asked units to identify which amenities were provided at the campsites with the minimum and maximum fees within that unit. Overall, an average of 10.7 amenities was offered at the minimum fee campsites, and an average of 12.1 amenities was offered at the maximum fee campsites. The amenities most often available at the maximum fee sites, and not the minimum fee sites, are drinking water, availability of a reservation system, electrical hookups, water hookups, and sanitary dump stations. Within the individual agencies, the difference in the number of amenities between minimum fee and maximum fee campsites averaged two amenities or fewer. Our analysis of responses from NPS, BLM, FS, and FWS units on the minimum and maximum fees charged for motor boating in the unit also focused on the amenities available for these fees. Boating for a fee is offered in 10 percent of the units overall, with FS having the largest number of units with motor boating for a fee available, 52 units, or 13 percent of FS units responding. FWS had the lowest number of units with boating fees, with only 4 units, or 4 percent, reporting a motor boating fee.
The results of our survey on the extent of motor boating fees are given in table 18. Motor boating fees at units in the four agencies surveyed are charged on a number of bases: per person, per boat, or other bases, such as a per trip charge. A total of 5 FS and FWS units reported charging motor boating fees on a per person basis, with the minimum and maximum fees per person starting at $1 and ranging up to $4. A total of 41 units in all four agencies reported charging on a per boat basis, with the minimum fees starting at $1 per boat and ranging up to a maximum of $40. Fees charged on various other bases, such as per trip, were reported in 30 units, with the fees starting at $0.50 and ranging up to a maximum of $300. Survey responses showed that only 9 of the 72 units with a fee for motor boating and related activities had a maximum fee in addition to the minimum fee listed for these activities. We asked the units with a motor boating fee to identify which common amenities were provided for boating with the minimum and maximum fees within that unit. Overall, an average of 11.5 amenities was offered at the minimum fee areas, and an average of 11.9 amenities was offered at the maximum fee areas, virtually the same when considering all the units. The third type of fee we asked survey respondents about was special recreation permit fees for access to a body of water for rafting, canoeing, or kayaking. A total of 45 units reported this type of fee, with the greatest number in FS and BLM, and few reported by NPS or FWS. Table 19 provides a breakdown of the agency units reporting on our survey a special recreation permit fee for these activities. These special recreation permit fees for rafting, canoeing, or kayaking at units in the four agencies surveyed are charged on a number of bases: per person per day, per group per day, per boat per day, per trip, or other bases. 
A total of 12 BLM, FS, and FWS units reported charging per person per day fees for this activity, with the fees per person starting at $1 and ranging up to a maximum of $6. A total of 9 units in BLM and FS reported charging on a per person per trip basis, with the fee starting at $3 and ranging up to a maximum of $404. Fees were charged on various other bases, such as per group per day or per boat per day, with the fees starting at $1 and ranging up to a maximum of $90. Our survey showed that only 10 of the 45 units with special recreation permit fees for rafting, canoeing, or kayaking had a maximum fee in addition to the minimum fee listed for these activities. We asked the units with a special recreation permit fee for rafting, canoeing, or kayaking to identify which common amenities were provided for these activities at the minimum and maximum fees within that unit. Overall, an average of 9 amenities was offered at the minimum fee areas, and an average of 9.3 amenities was offered at the maximum fee areas, virtually the same when considering all the units. This appendix provides information on the organizational structure, costs, and membership requirements of Recreation Resource Advisory Committees (RRACs). In March 2006, the Department of the Interior (DOI) and the Department of Agriculture (USDA) approved the organizational structure for the RRACs and existing advisory councils via an interagency organizational agreement. Table 20 outlines the nature and type of RRACs and advisory councils that the Bureau of Land Management (BLM) and the Forest Service (FS) have agreed to use in each state and/or region. In the majority of western states, BLM and FS will use joint RRACs or committees, many of which will be composed of existing BLM advisory councils since the Federal Lands Recreation Enhancement Act (REA) allows existing advisory committees or fee advisory boards to perform the RRAC duties. In addition, five new RRACs are being established nationwide.
Two of the new RRACs are being formed in the eastern United States and will primarily address FS fees since BLM has minimal land and only one fee-collecting unit in the East. Of the two new eastern RRACs, one will cover all of FS Southern Region (Region 8), and one will cover all of FS Eastern Region (Region 9). The remaining three new RRACs are being formed in the western states: one joint RRAC covering all of California, one joint RRAC covering Washington and Oregon, and one joint RRAC covering Colorado. The March 2006 interagency organizational agreement states that BLM is responsible for the direct costs of its advisory councils, while FS will be responsible for the direct costs of the new RRACs. FS estimates that it will cost about $90,000 to $120,000 per year to fund each new RRAC—based on travel costs for the RRACs to meet twice per year, FS staff time, and the assumption that each RRAC will have one to five subcommittees—and all funding for the RRACs will come from the FS’s 5 percent regional funds, according to a FS headquarters official. However, implementing the RRACs may cost more the first year since the members of the RRACs may need to meet more frequently than twice per year during the initial establishment of the RRACs. BLM is allocating $3,000 per state per year in base funds starting in fiscal year 2007 to implement the RRAC requirements where existing advisory councils are used, according to a BLM headquarters official. The new RRACs will be composed of 11 members. In appointing members to the RRACs, the Secretary is to provide for a broad and balanced representation of the recreation community; table 21 outlines the requirements for the composition of the RRACs. Nominations for the new RRACs will be solicited during a 30-day nomination period established in a Federal Register notice that will be published once the interagency agreement is finalized. 
The Secretary of Agriculture will make formal appointments to the RRACs once the nominations are received and evaluated by FS. According to agency officials, it is unknown how long the appointment process will take, but it is hoped that nominees will be appointed within 90 days of issuance of the interagency agreement, which occurred on September 1, 2006. The RRACs and existing advisory councils may form subcommittees to allow for local representation and to provide additional advice and recommendations to the RRAC or existing advisory council. DOI and USDA will be providing advice on subcommittee membership; however, the final determination on whether subcommittees are used, and on their membership, will rest with the existing advisory councils, not the agencies involved.

This appendix provides information on the fee revenue and obligations collected under the Recreational Fee Demonstration Program (Fee Demo) and the Federal Lands Recreation Enhancement Act (REA) as reported by the agencies to Congress. Table 22 shows the fee revenue, the funds obligated, and unobligated balances for fiscal years 2001 through 2005 for the Department of the Interior’s Bureau of Land Management, Fish and Wildlife Service, the National Park Service, and Department of Agriculture’s Forest Service.

This appendix provides information on the unobligated balances for recreation fee funds collected under the Recreational Fee Demonstration Program (Fee Demo) and the Federal Lands Recreation Enhancement Act (REA). Table 23 shows the 10 units with the largest unobligated fund balances of recreation fees for Bureau of Land Management (BLM), National Park Service (NPS), Fish and Wildlife Service (FWS), and Forest Service (FS).

The following are GAO’s comments on the Department of the Interior’s letter dated September 12, 2006.

1. We disagree with NPS’s comment concerning distribution of the new pass revenues. 
The NPS noted in its comments that “the revenue share formula appears to be identified.” However, the details of the formula have not yet been determined. The working group has only determined that the revenue will be distributed based on a formula that takes into account pass use and other factors. Therefore, the long-term revenue distribution strategy is unclear.

2. We stand by our description of NPS’ efforts to update its guidance on accounting and controlling collected fee revenues as “slow.” NPS’ current guidance on this subject was last published in 1989, and NPS itself recognizes that it needs to be revised. In addition, the NPS’ statement that our report language indicates “NPS does have effective internal controls in place” is incorrect. We specifically state that NPS units we visited, which are only 8 of 390, appear to have implemented reasonable accounting procedures and effective internal controls. Thus, we cannot attribute this condition to the entire universe of NPS units.

3. BLM provided additional detail on the results of its inquiries to the units that had reported in our survey that they did not offer all six amenities required to charge standard amenity fees under REA. We have summarized this information on pages 27 and 28 of the report. In essence, BLM officials imply from the results of these inquiries that their units did offer all six amenities required under REA. This new information would have to be verified to attest to its accuracy.

4. We agree that “slow” is a relative term but believe that it is used appropriately in the context of the information presented in the paragraph pertaining to the development of the RRACs. The information in the paragraph notes that, according to a June 2005 interagency presentation, the RRACs were expected to be established with members appointed by the end of 2005. Since none of the members for any of the RRACs have been appointed, we feel comfortable in referring to this RRAC development as “slow.”

5. 
One of REA’s goals was to reduce visitor confusion. We believe that Reclamation’s approach to allowing each managing partner to make its own determination of how REA impacts each unit, to include the decision of whether or not to accept the new pass, will create an inconsistent and more confusing system of fees for the visitors.

6. We disagree with DOI’s contention that evaluating the validity and interpreting the responses of units to our survey is problematic, since we have simply reported the survey results. The results we reported are based on two opinion survey questions that we asked both FS and DOI field-level officials to respond to. The first question asked if the unit officials believed recreation fee revenues are being used to fund the types of projects formerly funded with appropriations at their unit. The second question sought the officials’ opinions about the extent to which they believe that recreation fee revenue will be used to fund the types of projects over the next 5 years at their units that would have been funded with appropriated dollars. We recognize that individual units do not have agencywide perspectives on these issues. Also, including these survey results in our report is not intended to forecast the future, but rather to share the perspective of the survey respondents responsible for the on-the-ground implementation of this program. No GAO conclusions or recommendations were based on these stated perceptions.

7. The concerns noted in the background are general concerns that are frequently cited by critics of the fee program. Implementing both Fee Demo and the recreation fee program under REA has been controversial and many people and groups, such as the Western Slope No-Fee Coalition and the Arizona No-Fee Coalition, have spoken out against recreation fees.

8. Same as comment 4.

9. 
Since the working group has not determined a formula or how pass use and other factors will be considered in such a formula, it is still not clear how revenues will be distributed beyond the first 3 to 5 years of the pass program. Therefore, the long-term revenue distribution strategy is unclear. The potential problems with collecting pass-use data are outlined in this report. Accurate pass-use data will be difficult to collect at remote locations and many units within the agencies do not have the infrastructure in place to collect pass-use data. This may lead units to have inaccurate pass-use data or data largely based on estimates. Since pass-use revenue distribution will be tied to the pass-use data, units and/or agencies may benefit from submitting inflated pass-use estimates. All of these issues are potential problems with the collection of pass-use data.

The following are GAO’s comments on the Department of Agriculture’s Forest Service letter dated September 7, 2006.

1. FS provided additional detail on the results of its inquiries to the units that had reported in our survey that they did not offer all six amenities required to charge standard amenity fees under REA. We have summarized this information on pages 27 and 28 of the report. In essence, FS officials imply from the results of these inquiries that many of these unit officials were not aware of the type of fees they were charging or the amenities offered for the fees charged under the REA authority when they replied to our survey. This new information would have to be verified to attest to its accuracy.

2. The GAO survey was sent to 467 FS Ranger District officials directly responsible for implementation of the fee program under REA. The results we reported are based on two opinion survey questions that we asked both FS and DOI field-level officials to respond to. 
The first question asked if the unit officials believed recreation fee revenues are being used to fund the types of projects formerly funded with appropriations at their unit. The second question sought the officials’ opinions about the extent to which they believe that recreation fee revenue will be used to fund the types of projects over the next 5 years at their units that would have been funded with appropriated dollars. We recognize that individual units do not have agencywide perspectives on these issues. Also, the inclusion of these survey results in our report is not intended as a forecast of the future but rather as a way to share the perspective of the survey respondents responsible for the on-the-ground implementation of this program. No GAO conclusions or recommendations were based on these stated perceptions. In addition, we have added the FS statement that, historically, fee revenues have not replaced appropriations and that there is no reason to expect this to change in the future, in order to also share the agency’s official perspective on this issue.

3. We agree that “slow” is a relative term but believe that it is used appropriately in the context of the information presented in the paragraph pertaining to the development of the RRACs. The information in the paragraph notes that according to a June 2005 interagency presentation, the RRACs were expected to be established with members appointed by the end of 2005. Since none of the members for any of the RRACs have been appointed, we feel comfortable in referring to this RRAC development as “slow.”

4. We recognize that the FS interim implementation guidelines for REA have definitions of the standard amenities; however, these guidelines have not prevented confusion about amenity criteria. 
In its comments on a draft of this report (bottom of page 104), FS contends that 31 of its unit officials erroneously reported they were out of compliance with REA’s standard amenity requirements because they were either confused over the difference between standard and expanded amenities, or because “there was misunderstanding over the definitions of the amenities.” Such results further highlight the need for more specific FS guidance on implementing and managing the fee program.

5. We believe that for department-level management to have assurance that collected fees are controlled effectively and accounted for properly, detailed department- or bureau-level guidance on procedures is an important tool for local managers and imperative for those who have little or no formal accounting training or background.

In addition to the individual named above, Roy Judy, Assistant Director; Carolyn Boyce; Elizabeth Curda; John Delicath; Denise Fantone; Doreen Feldman; Timothy Guinane; Anne Hobson; Susan Irving; Stanley Kostyla; Diane Lund; Robert Martin; Matt Michaels; Angie Nichols-Friedman; Lesley Rinner; John Scott; Jack Warner; and Amy Webbink made key contributions to this report.
In recent years, Congress has expressed concerns about the federal land management agencies' ability to provide quality recreational opportunities and reduce visitor confusion over the variety of user fees. In December 2004, Congress passed the Federal Lands Recreation Enhancement Act (REA) to standardize recreation fee collection and use at federal lands and waters. GAO was asked to determine (1) what the agencies have done to coordinate implementation of REA, (2) what agencies have done to implement REA, (3) the extent to which agencies have controls and accounting procedures for collected fees, (4) how projects and activities are selected to receive funding from fees, and (5) the extent of unobligated fund balances. To answer these objectives, GAO reviewed agency guidance, analyzed fee data, interviewed officials, visited 26 fee-collecting units, and administered a nationwide survey to 900 fee-collecting units.

The Departments of the Interior (DOI) and Agriculture (USDA) established four working groups to facilitate interagency cooperation and coordination of REA implementation. Each working group has made progress, but some issues remain unresolved. For example, the Interagency Pass working group has yet to determine the price to charge for the new pass, which is to be implemented in January 2007. To implement REA, agencies reviewed their fee programs and made modifications to the fee programs at some of their units. For example, several of USDA's Forest Service units dropped a total of 437 sites, such as picnic areas, from their fee programs because the sites did not meet REA criteria. However, not all units are in compliance with REA. Many agency officials said that while the agencies have issued some interim guidance, REA was difficult to interpret and suggested the need for more specific and detailed guidance on the fee program. In addition, DOI's Bureau of Reclamation has not yet determined whether to implement REA. 
Reclamation is assessing how REA applies to its operations. Some agencies lack adequate controls and accounting procedures for collected recreation fees and lack effective guidance for establishing such controls. On the basis of our site visits, we found that some units did not have an effective means of verifying whether all collected fees are accounted for. In addition, many units have not implemented a system of routine audits to help ensure that fees are collected and used as authorized and that collected funds are safeguarded. The various agencies participating under REA have different processes for selecting projects to be funded with recreation fee revenues. At DOI's Bureau of Land Management and Fish and Wildlife Service and USDA's Forest Service, most proposed projects are approved at the local unit level, usually within a few weeks. At DOI's National Park Service, fee projects are reviewed and approved at the unit, regional, and headquarters or department level before projects are funded. According to National Park Service officials, under this process, it can sometimes take a year or more to obtain approval for a requested fee project, which delays project implementation and contributes to unobligated fee revenue balances. Agencies have $300 million in unobligated fee revenue balances. Unit officials cited several reasons for the unobligated balances, such as the need to save for large projects. Many unit officials also said that recreation fee revenues are essential to providing services at their recreation areas that would not otherwise be funded.
The state and local government sector is likely to face persistent fiscal challenges within the next decade. In July 2007, we issued a report based on simulations for the state and local government sector that indicated that in the absence of policy changes, large and growing fiscal challenges will likely emerge within a decade. Our report found that, as is true for the federal sector, the growth in health-related costs is a primary driver of these fiscal challenges (see fig. 1). Two types of health-related costs are of particular concern at the state and local level: (1) Medicaid expenditures, and (2) the cost of health insurance for state and local government employees, retirees, and their beneficiaries. Retirement benefits consist primarily of two components: pensions and retiree health benefits. According to Census data, in fiscal year 2004-2005, state and local governments provided retirement benefits to nearly 7 million retirees and their families. In addition to supporting a secure retirement for state and local government employees and their families, such benefits constitute an important component of the total compensation package state and local governments offer to attract and retain the skilled workers needed to protect lives and health, and to promote the general welfare. These workers include highway patrol officers, local police, firefighters, school teachers, and judges, as well as general state and local government employees who staff the broad array of state and local agencies. Pension plans can generally be characterized as either defined benefit or defined contribution plans. In a defined benefit plan, the amount of the benefit payment is determined by a formula typically based on the retiree’s years of service and final average salary, and is most often provided as a lifetime annuity. In defined benefit plans for state and local government retirees, postretirement cost-of-living adjustments (COLA) are frequently provided. 
But benefit payments are generally reduced for early retirement, and in some cases, payments may be offset for receipt of Social Security. State and local government employees are generally required to contribute a percentage of their salaries to their defined benefit plans, unlike private sector employees, who generally make no contribution when they participate in defined benefit plans. According to a 50-state survey conducted by Workplace Economics, Inc., 43 of 48 states with defined benefit plans reported that general state employees were required to make contributions ranging from 1.25 to 10.5 percent of their salaries. Nevertheless, these contributions have no influence on the amount of benefits paid because benefits are based solely on the formula. In a defined contribution plan, the key determinants of the benefit amount are the employee’s and employer’s contribution rates, and the rate of return achieved on the amounts contributed to an individual’s account over time. The employee assumes the investment risk: The account balance at the time of retirement is the total amount of funds available, and unlike with defined benefit plans, there are generally no COLAs. Until depleted, however, a defined contribution account balance may continue to earn investment returns after retirement, and a retiree could use the balance to purchase an inflation-protected annuity. Also, defined contribution plans are more portable than defined benefit plans, as employees own their accounts individually and can generally take their balances with them when they leave government employment. There are no reductions based on early retirement or for participation in Social Security. Accounting standards governing public sector pensions were established by the Governmental Accounting Standards Board (GASB) in 1994. 
Comprehensive accounting and financial reporting standards governing other postemployment benefits (OPEB) in the public sector, such as health care, were issued in 2004 (superseding the interim standards issued previously). Implementation of the new OPEB standards is currently being phased in (see app. IV). The purpose of these standards is to prescribe accounting and financial reporting requirements that apply broadly to state and local government employers’ benefit plans. Reporting by employers and plan administrators helps keep the municipal bond market, taxpayers, elected public officials, plan members, and other interested parties informed about employers’ OPEB costs and obligations, and the operation and funded status of the plans. As with the Financial Accounting Standards Board (FASB) in the private sector, it is not the GASB’s function to enforce compliance with the standards it promulgates. Rather, the GASB functions as an independent standard setter, and its statements and interpretations constitute the highest source of generally accepted accounting principles (GAAP) for state and local governments, as specified in the Code of Professional Conduct of the American Institute of Certified Public Accountants. State and local governmental entities issue annual financial reports prepared in conformity with GAAP for a variety of reasons—such as to comply with general or specific state laws requiring GAAP financial reporting, or to protect the highest possible credit rating on the government’s bonds in order to reduce the government’s cost of borrowing. Compliance with GASB standards is necessary in order to obtain an independent auditor’s report that the financial statements are fairly presented in conformity with GAAP, and a failure to do so would result in a modification of the auditor’s report if the effects were material. 
Although the Employee Retirement Income Security Act of 1974 (ERISA) imposes participation, vesting, and other requirements directly upon employee pension plans, governmental plans such as those provided by state and local governments to their employees are excepted from these requirements. In addition, ERISA established an insurance program for defined benefit plans under which promised benefits are paid (up to a statutorily set amount), if an employer cannot pay them—but this too does not apply to governmental plans. However, for participants in governmental pension plans to receive preferential tax treatment (that is, for plan contributions and investment earnings to be tax-deferred), plans must be deemed qualified by the Internal Revenue Service. State and local governments typically provide their employees with retirement benefits that include a defined benefit plan, a supplemental defined contribution plan for voluntary savings, and group health coverage. However, the way each of these components is structured and the level of benefits provided varies widely—both across states, and within states based on such things as date of hire, employee occupation, and local jurisdiction. Most state and local government workers still are provided traditional pension plans with defined benefits. In 1998, all states had defined benefit plans as their primary pension plans for their general state workers except for Michigan and Nebraska (and the District of Columbia), which had defined contribution plans as their primary plans, and Indiana, which had a combined plan with both defined benefit and defined contribution components as its primary plan. Almost a decade later, we found that as of 2007, only one additional state (Alaska) had adopted a defined contribution plan as its primary plan; one additional state (Oregon) had adopted a combined plan, and Nebraska had replaced its defined contribution plan with a cash balance defined benefit plan. (See fig. 2.) 
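The contrast between the two plan designs described above can be illustrated with a short sketch: a defined benefit set by a formula based on years of service and final average salary, versus a defined contribution account balance driven by contributions and investment returns. All figures here (the 2 percent multiplier, the salary, the contribution rates, and the assumed return) are hypothetical and do not describe any particular state's plan.

```python
# Hypothetical illustration of defined benefit vs. defined contribution
# mechanics. All parameter values are invented for the example.

def defined_benefit_annuity(years_of_service, final_average_salary,
                            multiplier=0.02):
    """Typical defined benefit formula: a fixed multiplier times years
    of service times final average salary, paid annually for life."""
    return multiplier * years_of_service * final_average_salary

def defined_contribution_balance(annual_salary, years, employee_rate=0.05,
                                 employer_rate=0.05, annual_return=0.06):
    """Defined contribution accumulation: contributions earn investment
    returns over time, and the employee bears the investment risk."""
    balance = 0.0
    for _ in range(years):
        balance *= 1 + annual_return              # growth on prior balance
        balance += annual_salary * (employee_rate + employer_rate)
    return balance

# A 30-year career ending with a $60,000 final average salary:
db = defined_benefit_annuity(30, 60_000)       # a lifetime annual annuity
dc = defined_contribution_balance(60_000, 30)  # a lump sum at retirement
print(f"DB annuity: ${db:,.0f}/year; DC balance: ${dc:,.0f}")
```

Note how the defined benefit amount depends only on service and salary, while the defined contribution outcome varies with the assumed return, which is precisely the investment risk shifted to the employee.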
Although still providing defined benefit plans as their primary plans for general state employees, some states also offer defined contribution plans (or combined plans) as optional alternatives to their primary plans. These states include Colorado, Florida, Montana, Ohio, South Carolina, and Washington. In the states that have adopted defined contribution plans as their primary plans, most employees continue to participate in defined benefit plans because employees are allowed to continue their participation in their previous plans (which is rare in the private sector). Thus, in contrast to the private sector, which has moved increasingly away from defined benefit plans over the past several decades, the overwhelming majority of states continue to provide defined benefit plans for their general state employees. Most states have multiple pension plans providing benefits to different groups of state and local government workers based on occupation (such as police officer or teacher) and/or local jurisdiction. According to the most recent Census data available, in fiscal year 2004-2005, there were a total of 2,656 state and local government pension plans. We found that defined benefit plans were still prevalent for most of these other state and local employees as well. For example, a nationwide study conducted by the National Education Association in 2006 found that of 99 large pension plans serving teachers and other school employees, 79 were defined benefit plans, 3 were defined contribution plans, and the remainder offered a range of alternative, optional, or combined plan designs with both defined benefit and defined contribution features. 
In addition to primary pension plans (whether defined benefit or defined contribution), data we gathered from various national organizations show that each of the 50 states has also established a defined contribution plan as a supplementary, voluntary option for tax-deferred retirement savings for its general state employees, and such plans appear to be common among other employee groups as well. These supplementary defined contribution plans are typically voluntary deferred compensation plans under section 457(b) of the federal tax code. (See table 1.) While these defined contribution plans are fairly universally available, state and local worker participation in the plans has been modest. In a 2006 nationwide survey conducted by the National Association of Government Defined Contribution Administrators, the average participation rate for all defined contribution plans was 21.6 percent. One reason cited for low participation rates in these supplementary plans is that, unlike in the private sector, it has been relatively rare for employers to match workers’ contributions to these plans, but the number of states offering a match has been increasing. According to a state employee benefit survey of all 50 states conducted by Workplace Economics, Inc., in 2006, 12 states match the employee’s contribution up to a specified percent or dollar amount. Among our site visit states, none made contributions to the supplementary savings plans for their general state employees, and employee participation rates generally ranged between 20 and 50 percent. In San Francisco, however, despite the lack of an employer match, 75 percent of employees had established 457(b) accounts. The executive director of the city’s retirement system attributed this success to several factors, including (1) that the plan had been in place for over 25 years, (2) that the plan offers good investment options for employees to choose from, and (3) that plan administrators have a strong outreach program. 
In the private sector, a growing number of employers are attempting to increase participation rates and retirement savings in defined contribution plans by automatically enrolling workers and offering new types of investment funds. State and local governments typically provide their active employees with health coverage, and they often pay the bulk of their premiums. According to the Workplace Economics, Inc., 2006 survey, on average, state employers paid over 90 percent of the cost for single employee coverage, and over 80 percent of the cost of family coverage, for active workers. Once workers retire, access to group coverage generally continues, but the extent of the employer contribution often declines, and different benefits are often provided depending on whether or not the retiree is eligible for Medicare. For virtually all state and local retirees age 65 or older, Medicare provides the primary coverage. Most state and large local government employers offer supplemental group health coverage, but do not always contribute to the cost of the premiums. According to the Workplace Economics, Inc., 2006 survey, all states but one provide access to such supplemental coverage. Only Nebraska provides no access to group coverage for retirees age 65 and over. In 12 states, retirees are provided access to coverage through a state health care program, but the state provides no support for the coverage. At the other end of the spectrum, in 16 states, employers pay the entire cost for at least one coverage plan under some circumstances. Of those states contributing to the premium costs, the maximum employer payments for employee-only coverage ranged from $40 per month (in Tennessee) to $850 per month (in Alaska). For state and local retirees who are under age 65 (that is, not yet Medicare-eligible), most state and large local employers provide the primary health care coverage. 
According to the Workplace Economics, Inc., 2006 survey, all states provide access to group health coverage for pre-Medicare retirees, but in 14 states, the plan participants pay the entire cost of the coverage (see fig. 3). In 14 other states, employers pay the entire cost for at least one coverage plan in some circumstances. Of those states providing an employer contribution, the maximum payments for retiree-only coverage ranged from $105 per month (in Oklahoma) to $850 per month (in Alaska). In most cases, states are continuing to provide retirees with prescription drug coverage following the rollout of the Medicare prescription drug program beginning in January 2006. In May 2006, the Segal Company, in cooperation with the Public Sector HealthCare Roundtable, conducted a survey of 109 state and local entities concerning retiree health care, and found that most of the public entities surveyed continued to provide prescription drug coverage to their retirees, and that only one entity planned to eliminate drug coverage entirely. Nationwide survey data indicate that while the vast majority of state and local government active workers participate in employer-sponsored health benefit plans, participation rates among retirees in these employer- sponsored health benefit programs are relatively low. According to data from the Department of Health and Human Services, in 2004, about 42 percent of state and local retirees participated in employer-sponsored health insurance programs. Among our site visit locations, we found that participation rates varied widely based on level of employer cost sharing. 
For example, in California, where the state may pay up to the full premium in some cases (depending on the retiree's date of hire, years of service, and choice of coverage plans); and in Michigan, where the state pays as much as 95 percent of the retirees’ premium for those under the defined benefit plan, we estimated participation rates to be approximately 70 percent and 90 percent of all state retirees, respectively. In contrast, in Oregon, where the state pays nothing toward retirees’ premiums for coverage under the pre-Medicare-eligible health care program administered by the Public Employees Benefit Board, it has been estimated that the participation rate among eligible retirees is about 30 percent. Beyond basic health care, other postemployment benefits (OPEB) that are sometimes offered to state and local government retirees include stand-alone supplemental dental or vision benefits, long-term care, or life insurance. When such benefits are made available, state and local government entities typically provide access to group rates, but the cost of the benefits is often paid primarily, if not entirely, by retirees. For example, among our site visit locations, postemployment benefits provided to retirees in addition to health care include the following: State employees in California generally have access to group term life insurance with a lump-sum benefit of $5,000, paid by the state. Retirees also are provided access to group dental benefits, which may be partially funded by the state in some cases, and a retiree vision program with premiums fully paid by retirees. Long-term care insurance is also available to all public employees in the state (active or retired), as well as their family members, generally as a fully member-paid program with no state contribution. In Michigan, dental and vision (as well as health) coverage is provided to general state employees at retirement. 
For those under the defined contribution plan (that is, hired on or after March 31, 1997), payments range from none for those with less than 10 years of service, to 30 percent of the premium cost for those with 10 years of service, plus 3 percent per year additional up to a maximum of 90 percent of the premium cost for those who have 30 or more years of service. The state also negotiated a group plan for long-term care insurance for active and retired workers, and their family members, but it is administered completely through a third party with no state support. Oregon’s other postemployment benefits for state retirees include group coverage for dental and vision benefits, but not life insurance. Long-term care insurance is also available, but only for some retirees. No employer contribution is provided for any of these benefits. How both pension plans and retiree health benefits are protected and managed is typically spelled out in statutes or in local ordinances, but these laws generally provide greater protections for pensions than for retiree health benefits. Laws protecting pensions are often anchored by provisions in state constitutions and local charters. Across the multiple plans providing benefits, state and local law typically requires that pensions be managed as trust funds and overseen by boards. In contrast, state and local law provides much less protection for retiree health benefits. Retiree health benefits are generally treated as an operating expense for that year’s costs on a pay-as-you-go basis and managed together with active employee benefits. State and local laws generally provide the most direct source of any specific legal protections for the pensions of state and local workers. Provisions in state constitutions often protect pensions from being eliminated or diminished. 
In addition, constitutional provisions often specify how pension funds are to be managed, such as by mandating certain funding requirements and/or requiring that the funds be overseen by boards of trustees. Moreover, we found that at the sites we visited, locally administered plans were generally governed by local laws. However, state employees, as well as the vast majority of local employees, are covered by state-administered plans. Protections for pensions in state constitutions are the strongest form of legal protection states can provide because constitutions—which set out the system of fundamental laws for the governance of each state— preempt state statutes and are difficult to change. Furthermore, changing a state constitution usually requires broad public support. For example, often a supermajority (such as three-fifths) of a state’s legislature may need to first approve changes to its constitution. If a change passes the legislature, voters typically must approve it before it becomes part of the state’s constitution. The majority of states have some form of constitutional protection for their pensions. According to AARP data compiled in 2000, 31 states have a total of 93 constitutional provisions explicitly protecting pensions. (The other 19 states all have pension protections in their statutes or recognize legal protections under common law.) These constitutional pension provisions prescribe some combination of how pension trusts are to be funded, protected, managed, or governed. (See table 2.) In nine states, constitutional provisions take the form of a specific guarantee of the right to a benefit. In two of the states we visited, the state constitution provided protection for pension benefits. In California, for example, the state constitution provides that public plan assets are trust funds to be used only for providing pension benefits to plan participants. 
In Michigan, the state constitution provides that public pension benefits are contractual obligations that cannot be diminished or impaired and must be funded annually. The basic features of pension plans—such as eligibility, contributions, and types of benefits—are often spelled out in state or local statute. State-administered plans are generally governed by state laws. For example, in California, the formulas used to calculate pension benefit levels for employees participating in the California Public Employees’ Retirement System (CalPERS) are provided in state law. Similarly, in Oregon, pension benefit formulas for state and local employees participating in the Oregon Public Employees Retirement System (OPERS) plans are provided in state statute. In addition, we found that at the sites we visited, locally administered plans were generally governed by local laws. For example, in San Francisco, contribution rates for employees participating in the San Francisco City and County Employees’ Retirement System are spelled out in the city charter. Legal protections usually apply to benefits for existing workers or benefits that have already accrued; thus, state and local governments generally can change the benefits for new hires by creating a series of new tiers or plans that apply to employees hired only after the date of the change. For example, the Oregon legislature changed the pension benefit for employees hired on or after January 1, 1996, and again for employees hired on or after August 29, 2003, each time increasing the retirement age for the new group of employees. For some state and local workers whose benefit provisions are not laid out in detail in state or local statutes, specific provisions are left to be negotiated between employers and unions. 
For example, in California, according to state officials, various benefit formula options for local employees are laid out in state statutes, but the specific provisions adopted are generally determined through collective bargaining between the more than 1,500 different local public employers and rank-and-file bargaining units. In all three states we visited, unions also lobby the state legislature on behalf of their members. For example, in Michigan, according to officials from the Department of Management and Budget, unions marshal support for or against a proposal by taking such actions as initiating letter-writing campaigns to support or oppose legislative measures. In accordance with state constitution and/or statute, the assets of state and local government pension plans are typically managed as trusts and overseen by boards of trustees to ensure that the assets are used for the sole purpose of meeting retirement system obligations and that the plans are in compliance with the federal tax code. Boards of trustees, of varying size and composition, often serve the purpose of establishing the overall policies for the operation and management of the pension plans, which can include adopting actuarial assumptions, establishing procedures for financial control and reporting, and setting investment strategy. On the basis of our analysis of data from the National Education Association, the National Association of State Retirement Administrators (NASRA), and reports and publications from selected states, we found that 46 states had boards overseeing the administration of their pension plans for general state employees. These boards ranged in size from 5 to 19 members, with various combinations of those elected by plan members, those appointed by a state official, and those who serve automatically based on their office in state government (known as ex officio members). (See fig. 4.) 
Different types of members bring different perspectives to bear, and can help to balance competing demands on retirement system resources. For example, board members who are elected by active and retired members of the retirement system, or who are union members, generally help to ensure that members’ benefits are protected. Board members who are appointed sometimes are required to have some type of technical knowledge, such as investment expertise. Finally, ex officio board members generally represent the financial concerns of the state government. Some pension boards do not have each of these perspectives represented. For example, boards governing the primary public employee pension plans in all three states we visited had various compositions and responsibilities. (See table 3.) At the local level, in Detroit, Michigan, a majority of the board of Detroit’s General Retirement System is composed of members of the system. According to officials from the General Retirement System, this is thought to protect pension plan assets from being used for purposes other than providing benefits to members of the retirement system. Regarding responsibilities, the board administers the General Retirement System and, as specified in local city ordinances, is responsible for the system’s proper operation and investment strategy. Pension boards of trustees typically serve as pension plan fiduciaries, and as fiduciaries, they usually have significant independence in terms of how they manage the funds. Boards make policy decisions within the framework of the plan’s enabling statutes, which may include adopting actuarial assumptions, establishing procedures for financial control and reporting, and setting investment policy. In the course of managing pension trusts, boards generally obtain the services of independent advisors, actuaries, or investment professionals. Also, some states’ pension plans have investment boards in addition to, or instead of, general oversight boards. 
For example, three of the four states without general oversight boards have investment boards responsible for setting investment policy. While public employees may have a broad mandate to serve all citizens, board members generally have a fiduciary duty to act solely in the interests of plan participants and beneficiaries. Perhaps partly because of this duty, one study of approximately 250 pension plans at the state and local level found that plans with boards overseeing them were associated with greater funding than those without boards. When state pension plans do not have a general oversight board, these responsibilities tend to be handled directly by legislators and/or senior executive officials. For example, in the state of Washington, the pension plan for general state employees is overseen by the Pension Funding Council—a six-member body whose membership, by statute, includes four state legislators. The council adopts changes to economic assumptions and contribution rates for state retirement systems by majority vote. In Florida, the Florida Retirement System is not overseen by a separate independent board; instead, the pension plan is the responsibility of the State Board of Administration, composed of the governor, the chief financial officer of the state, and the state attorney general. In New York, the state comptroller, an elected official, serves as the sole trustee and administrative head of the New York State and Local Employees Retirement System. In contrast with pensions, retiree health benefits are less likely to have statutory protections. To the extent that any such legal protections exist, they more frequently stem from the negotiated agreements between unions and government employers. 
In addition, the cost of annual retiree health benefits has typically been treated as an operating expense and managed together with active employee benefits, although the benefits offered to retirees may differ from those offered to active employees. Despite the general absence of a fund to manage, retiree health programs frequently still have boards that help to determine the terms of the health plans to be offered. Unlike the law governing pensions, the law governing retiree health benefits for state and local government workers generally does not include the same type of explicit protections. To the extent retiree health benefits are legally protected, it is generally because they have been collectively bargained and are subject to current labor contracts. In cases where reductions to retiree health benefits are challenged in court, the ultimate outcome depends on the specific facts and circumstances and the applicable state and/or local law in each jurisdiction. In Segal’s 2006 survey of over 100 state and local plans, 62 percent of respondents said that statutory or regulatory obligations affected their ability to change retiree health coverage; 25 percent said that retiree health coverage was subject to collective bargaining; and 17 percent said that other factors affected their ability to change retiree health coverage. In two recent cases, however, the courts have upheld the state’s right to modify retiree health benefits (see sidebar). In 2000, Michigan increased the co-payments and deductibles to be paid under its health plan for public school retirees, and retirees sued. The state supreme court held that retiree health benefits were not accrued financial benefits within the meaning of the Michigan Constitution and that the statute establishing the plan did not create a contractual right to such benefits. Studier v. Mich. Pub. Sch. Employees Ret. Bd., 472 Mich. 
642 (2005). Retiree health benefits generally have been treated by state and local governments as an operating expense for that year’s costs on a pay-as-you-go basis. State and local governments typically do not set aside funds while employees are working to pay their future retiree health benefits. Moreover, retiree health benefits are mostly managed together with active employee benefits, although the actual benefits offered to retirees and to active employees may be different. In most cases, retiree health benefits are administered under the state or local employee benefit system. Despite the general absence of a fund to manage, the administrators of retiree health benefits may still look to boards to help determine the health coverage to be offered. For example, in California, the same CalPERS board that oversees the pension fund also oversees a health care program. With respect to this health care program, the CalPERS board is responsible for selecting insurers through which participants can receive coverage. The CalPERS board negotiates, for example, the specific services covered, premiums, and participant co-payments. Although many local governments participate in the CalPERS program, the City and County of San Francisco has chosen to administer its own separate program. The Health Service System (a city department) is responsible for administering the benefits for both active and retired employees, with oversight from the Health Service Board (a city board). The Health Service Board is charged with establishing rules and regulations for the Health Service System and for conducting an annual review of the costs for medical and hospital care. In Oregon, the Public Employees Benefit Board, a separate entity from OPERS, is responsible for managing the health benefits of both active and pre-Medicare-eligible retired employees, with authority to negotiate the terms of their coverage. 
While state and local governments generally have strategies to manage future pension costs, they have not yet developed strategies to fund future health care costs for public sector retirees. We analyzed the state and local sector’s fiscal outlook with respect to the sector’s ability to maintain current retiree benefits—that is, the sector’s ability to fund its future liabilities—from two perspectives and came to similar conclusions. First, in our simulation of the fiscal outlook for the state and local sector, we developed projections of the likely cost of pensions and retiree health benefits that already have been and will continue to be earned by employees. Our simulation shows that the additional pension contributions that state and local governments will need to make in future years to fully fund their pensions on an ongoing basis are only slightly higher than the current contribution rate. Our simulation also shows that health care costs for retirees will likely rise considerably as a component of state and local budgets, if these costs continue to be funded on a pay-as-you-go basis. Second, we analyzed data on the funded status of 126 of the nation’s largest public sector retirement systems and found that with some notable exceptions, most are relatively well funded, but that long-term strategies to fund future health care costs for retirees are generally lacking. Our simulation indicates that state and local governments, in aggregate, will need to make contributions to pension systems at a somewhat higher rate than in recent years in order to fully fund their pension obligations on an ongoing basis. Assuming certain historical trends continue and that there is a steady level of pension contributions in the future, contribution rates would need to rise to 9.3 percent of salaries—less than a half percent more than the 9.0 percent contribution rate in 2006. 
Our model is based on a variety of assumptions regarding employee contributions, future employment, retirement, wages, rates of return, pension characteristics, and other factors. For example, our analyses relate to defined benefit plans only. (For details on our assumptions and our model, see app. II.) We assume that employee contribution rates to these pension funds will remain the same, relative to wages, as in the past. We also assume that in the future, the real rate of return on pension assets will be about 5 percent, which is based on the real returns on various investment instruments over the last 40 years. Our findings regarding the required contribution to pension funds on an ongoing basis were, however, extremely sensitive to assumptions about the future rate of return on invested pension funds. Some economists and financial analysts have expressed concern that returns in the future may not be quite as high as those in the past. Future investment returns may not match past returns because, for example, slower labor force growth may lead to slower economic growth, which may, in turn, reduce investment returns. Also, pension managers may choose to invest in less risky, lower-return investments in the future. If future rates of return are more or less in line with historic experience, then our simulation should provide a reasonable estimate of the contribution rates that will be needed in the future. But if future rates of return decline, then contribution rates would need to be higher than 9.3 percent of salaries, as indicated by our base case simulation results. (See table 4.) Moreover, the results for individual state and local governments may vary substantially. 
Our simulation indicates that projected costs for retiree health benefits, while not as large a component of state and local government budgets as pensions, will more than double as a percentage of salaries over the next several decades, if these costs continue to be funded on a pay-as-you-go basis. In 2006, these costs amounted to approximately 2.0 percent of salaries, but according to our simulation, by 2050, they will grow to 5.0 percent of salaries—a 150 percent increase. The key reason for this substantial increase is the more general rise in health care costs, which, if left unconstrained, will continue to cause costs to rise as a percentage of salaries. As with the projections of necessary pension contributions, our estimates of retiree health benefit costs are also dependent on certain assumptions, and are particularly sensitive to assumptions about the growth in health care costs. For example, on the basis of research and discussions with experts, we assumed that health care costs would grow at a higher rate than the growth in the nation’s gross domestic product (GDP). If health care costs were to rise only at the same rate as GDP, then by 2050, our projected costs would grow only from 2.0 percent to 2.9 percent of salaries, instead of 5.0 percent. Also, because our model is based on data that did not incorporate possible savings attributable to the Medicare Part D drug subsidy that began in 2006, the estimates may slightly overstate retiree health costs. However, if health care costs were to rise more rapidly than they have over the past 35 years, then the cost of retiree health benefits would exceed our projected costs of 5.0 percent of salaries. (See table 5.) State and local governments typically set aside funds to finance the cost of future pension obligations and use a variety of strategies to keep the funding status of their plans on track. 
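The mechanism behind this projection, health costs compounding faster than salaries, can be sketched with a stylized calculation. The growth rates below are illustrative assumptions chosen only to show the compounding effect; they are not the actual inputs of GAO's simulation model (described in app. II):

```python
def projected_cost_share(base_share, years, health_growth, salary_growth):
    """Retiree health costs as a share of salaries after `years` of
    compounding, assuming costs grow at `health_growth` per year and
    salaries at `salary_growth`. The share rises whenever health cost
    growth exceeds salary growth, and stays flat when they are equal."""
    return base_share * ((1 + health_growth) / (1 + salary_growth)) ** years

# Starting at 2.0 percent of salaries in 2006, an excess of roughly
# 2 percentage points per year in health cost growth compounds to
# about 5 percent of salaries by 2050 (44 years later).
share_2050 = projected_cost_share(0.020, 44, 0.061, 0.040)
```

A small annual gap between the two growth rates, sustained for decades, is enough to more than double the budget share, which is why the projection is so sensitive to the health cost growth assumption.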
Funding status is a measure that captures a government’s ongoing effort at one point in time to prefund its future pension liability, generally expressed as the ratio of assets to liabilities (also referred to as the funded ratio). Assessing the funding status of public sector pension plans provides a second perspective on the fiscal outlook of state and local government efforts to fund future pension benefits. According to NASRA’s Public Fund Survey as of 2007, the most recent reports from 126 of the largest state and local pension plans in the country indicate that over three-fifths of the plans were at least 80 percent funded—a level generally viewed as being acceptable to support future pension costs. However, funding levels across the different plans ranged from about 32 to 113 percent. (See fig. 5.) Those state and local governments with plans that are funded below acceptable levels may face tough choices in the future between the need to raise taxes, cut spending, or reduce benefits in order to meet their obligations. A primary way state and local governments keep the funding status of their pension funds on track is to make their actuarially required contributions. There are three sources of revenues for pension benefits: investment earnings, employee contributions, and employer contributions. Investment earnings provide the major source of funding (see fig. 6). The amount that employees are required to contribute is generally fixed by state statute as a percentage of salary, while state and local governments determine the level of employer contributions based on their plans’ funding status—that is, the extent to which liabilities already accrued are funded. Actuaries calculate the contribution amount needed to cover the liability that accrues each year and to pay an installment on any unfunded liability. 
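The funded ratio described above is a simple quotient of assets to liabilities; the sketch below uses hypothetical dollar amounts for illustration:

```python
def funded_ratio(actuarial_assets: float, actuarial_liability: float) -> float:
    """Funded ratio: the actuarial value of plan assets divided by the
    actuarial accrued liability. The report treats roughly 80 percent
    as a generally acceptable level; ratios above 100 percent indicate
    assets in excess of accrued liabilities."""
    return actuarial_assets / actuarial_liability

# A hypothetical plan holding $40 billion in assets against a
# $50 billion accrued liability is 80 percent funded.
ratio = funded_ratio(40e9, 50e9)
```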
If a plan sponsor (that is, a state or local government employer) is making these actuarially required contributions, the plan can have a funded ratio below 100 percent yet still be on track toward full actuarial funding. Governments use various strategies to help them make their actuarially required pension fund contributions. One strategy that governments use to lessen fluctuations in their actuarially required contributions is to average the value of plan assets over a number of years (referred to as “smoothing”). For example, between 1999 and 2005, California’s contribution rate for one of CalPERS’ pension plans ranged from 1.5 percent of salaries to 17 percent of salaries. In 2005, California began using smoothing techniques, and the contribution rate over the last 2 years changed only slightly—from 15.9 percent of salaries in 2006 to an estimated 15.7 percent in 2007. Another strategy government sponsors use to control their pension fund contribution rates is to implement new, less costly benefit levels for newly hired employees. Plan sponsors create a new “tier,” with different benefits, for all employees hired after the date the new tier goes into effect. For example, New York has four tiers in its State and Local Retirement System, based on an employee’s occupation and date of hire. General employees in tier 1 (hired before July 1, 1973) can retire at age 55 after 20 years of service with no reduction for early retirement. However, general employees in tier 3 (hired between July 26, 1973, and September 1, 1983) must be age 62 with 5 years of service or age 55 with 30 years of service to retire with no reduction in benefit. In addition to creating new tiers within the same pension plan, government sponsors can also lower costs by adopting entirely new plans for future hires. 
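The asset-smoothing strategy mentioned above can be illustrated with a minimal sketch: averaging market values over a fixed window so that a single year's market move shifts the actuarial value only modestly. Actual actuarial smoothing methods vary by plan; the five-year average and dollar figures here are assumptions:

```python
def smoothed_asset_value(market_values, window=5):
    """Average the plan's year-end market asset values over the last
    `window` years, damping the swings in the actuarially required
    contribution that a single year's gain or loss would otherwise cause."""
    recent = market_values[-window:]
    return sum(recent) / len(recent)

# Hypothetical year-end market values (in $ billions): the boom year (120)
# and the bust year (90) each move the smoothed value far less than they
# move the raw market value.
values = [100, 120, 90, 95, 110]
smoothed = smoothed_asset_value(values)  # 103.0, versus a market value of 110
```

Because contributions are set against the smoothed value rather than the market value, contribution rates change gradually even through a market cycle, which is the stabilizing effect described in the CalPERS example.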
For example, Alaska recently switched from its previous defined benefit plans to defined contribution plans for all general public employees and teachers hired on or after July 1, 2006. According to the state’s 2006 comprehensive annual financial report, the new pension system was adopted to help stabilize contribution rates for all public employers within the state. Also, in 2003, Oregon adopted a new program with both a defined benefit and a defined contribution component as its primary plan for public employees. Under the new program, Oregon continues to provide a defined benefit funded by employer contributions (with a lower benefit formula for new employees), while the employees’ contributions are now placed in individual accounts with no state matching (the defined contribution component). Oregon officials estimate that these pension reforms save public employers over $400 million per year. Yet another strategy plan sponsors use to manage their costs is to seek higher contribution rates from their employees. For example, in 2005, Louisiana enacted legislation to raise the employee contribution rate for general state employees hired on or after July 1, 2006, who participate in the State Employees’ Retirement System, from 7.5 to 8.0 percent. Finally, another strategy that some plan sponsors have used as part of an overall strategy for managing pension costs is to issue bonds to reduce their unfunded actuarial liabilities. If the interest rate on the bond is less than the rate of return earned on pension assets, sponsors can achieve some savings. For example, in 2005, Detroit issued $1.44 billion in bonds to pay down the unfunded accrued actuarial liabilities of its two retirement systems. Similarly, Oregon recently issued pension obligation bonds to help reduce its employer contribution rate for the Public Employees Retirement System. 
According to officials from Oregon’s Legislative Fiscal Office, by issuing the pension bonds, the state pays a lower interest rate on the debt service for the bonds (about 5.75 percent) than it currently earns on the bond proceeds. OPERS officials said that earnings on the bond proceeds have averaged over 15 percent over the last 4 years. However, it should be noted that issuing bonds to make the employer contribution increases the government’s overall exposure to financial risks to the extent that the bond proceeds are invested in equities or highly leveraged portfolios in pursuit of returns that exceed the borrowing costs. Also, if rates of return were to move lower than the bond rates, state and local governments would no longer realize an advantage to having issued the bonds, because the rate they could earn on the proceeds may no longer cover the debt service costs. Public pension plan funding levels are sensitive to a variety of external influences, such as the rate of return on the funds’ investments, the annual stream of contributions to the fund, and changes to the levels of benefits that ultimately affect future liabilities. Although strategies are being used to keep the funding of most plans on track, we found some notable exceptions where the failure to use such strategies caused the funding status to drop significantly. Over time, state and local governments could be faced with the need to raise taxes, cut spending, or reduce benefits in order to meet their obligations. As investment earnings are the major source of pension funding, timely payment of contributions is key to maximizing the compound interest earned. However, sometimes a combination of factors makes it difficult for state and local governments to make their actuarially required contributions, and funding levels can drop. 
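The pension obligation bond trade-off described above reduces to a spread calculation: earnings on the invested proceeds versus interest owed on the bonds. The sketch below uses a hypothetical $1 billion issue with the rates cited in the Oregon example:

```python
def annual_spread_gain(principal: float, bond_rate: float, return_rate: float) -> float:
    """One year's gain (or loss) from investing pension bond proceeds:
    investment earnings on the proceeds minus the interest owed on the
    bonds. The trade is profitable only while the earned return exceeds
    the borrowing rate, which is the risk noted in the text."""
    return principal * (return_rate - bond_rate)

# Borrowing at about 5.75 percent while the proceeds earn 15 percent
# produces a gain; if returns fall to 4 percent, the same hypothetical
# issue produces a loss.
gain = annual_spread_gain(1.0e9, 0.0575, 0.15)  # positive
loss = annual_spread_gain(1.0e9, 0.0575, 0.04)  # negative
```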
For instance, the sharp and prolonged decline in the stock market that occurred in the early 2000s reduced the value of many plans’ assets and increased the amount many states and local governments needed to contribute to remain on track toward full funding. Furthermore, to the extent state and local governments experience slower economic growth, revenues might not keep up with expenditures, making it difficult for the governments to meet their funding commitments for pensions. For example, from 2001 to 2007, Michigan’s contribution rate for the State Employees’ Retirement System (MSERS) dramatically increased—from 4.7 percent to 18.1 percent of payroll. During this period of slow revenue growth, Michigan used money transferred from a pension fund subaccount to supplement the amount it contributed to MSERS to make its full actuarially required contribution. Even so, from 2002 through 2005, MSERS’s funded ratio dropped steadily from 98.7 percent to 79.8 percent. In some cases, employers fell short of making their actuarially required contributions at the same time that they adopted significant increases in pension benefits for their employees, and did so for years. For example, a New Jersey state treasury department official told us that in 1997, the state viewed the status of its pension funds as “overfunded,” and began substituting “excess” pension assets for their actuarially required contributions. The state skipped payments to the retirement plans over a 7-year period, totaling $8 billion. While in this “overfunded” position, the state also approved costly benefit enhancements and early retirement packages. According to the official, as a result of these enhancements and less than prudent funding arrangements, compounded by the downturn in the market conditions beginning in 2001, the funded ratios of several New Jersey pension plans fell below acceptable levels. 
For instance, since 1999, the funded ratio of New Jersey’s Public Employees’ Retirement System declined from 113.5 to 79.1 percent as of June 30, 2005. Overall, the state now faces an $18.9 billion unfunded liability for all of its retirement plans combined. Similarly, in San Diego, the city fell short of making its actuarially required contributions to the San Diego City Retirement System by about $80 million from 1999 through 2004. At the same time, the city increased benefits to current employees, and in a litigation settlement, increased benefits to current retirees. As of June 30, 2006, the actuarial valuation report for the system stated that the funding status had dropped from 97.3 percent in 2000 to a low of 65.8 percent in 2004, with an unfunded liability of $1.37 billion. However, as of 2006, the system had recovered to 79.9 percent, with an unfunded liability of about $1.0 billion. Most state and local governments generally lack long-term strategies to address future health care costs. In addition, many of the governments are still in the process of responding to the new GASB statement calling for valuations of the liability for the future cost of other postemployment benefits (OPEB), including health care benefits for retirees, as the standard is being implemented in phases. Officials for the governments we visited said that once the valuations were completed, they would consider options for addressing these costs, if needed. Several funding vehicles are available under the federal tax code to help facilitate state and local government efforts to accumulate funds to meet their future health care liabilities. (See table 6.) As noted earlier, of the state and local governments that contribute to retiree health benefits, most treat the cost of the benefits as an operating expense and do not prefund the future obligation. 
Of the states that provide an explicit contribution to the premiums for retiree health coverage, it has been reported that 13 partially prefund their future health care costs. But these prefunding efforts have been slow to get started. For example, in 1989, the Connecticut Teachers’ Retirement System created a Health Insurance Premium Account, using a 1 percent of salary contribution from active teachers to fund health benefits for retirees. The fund was facing insolvency by 1999. To address the shortfall, in 2004, Connecticut increased active teachers’ contributions to the fund from 1 percent to 1.25 percent of salary. In Michigan, state budget officials said that they would like to prefund retiree health care benefits for state employees, but other state priorities have prevented them from doing so. However, a fund was recently set up for local employees in Michigan. In 2004, the Municipal Employees’ Retirement System of Michigan created a Retiree Health Funding Vehicle to allow municipalities to contribute to a trust fund for retiree health benefits. As of September 2007, system officials reported that 55 employers were participating in the program, and that the fund had over $95 million available for retiree health care costs. More recently, in March 2007, CalPERS launched the California Employers’ Retiree Benefit Trust Fund, an investment vehicle that allows public employers that contract with CalPERS for employee health benefits to prefund their future OPEB costs. At the sites we visited, state and local government officials we spoke with said that the rising cost of health care was one of the biggest fiscal challenges confronting them in the near term. They said the drivers of their health care costs mirror those of the nation as a whole: rapidly escalating costs for prescription drugs, medical care, and hospital care. 
Further, they noted that the health care industry’s practice of shifting costs not paid by the Medicare and Medicaid programs to employers is causing employers’ costs for health insurance premiums to rise even faster. In addition to the costs associated with providing health care benefits for their active and retired workers, states also must contend with rising costs for their uninsured residents and federal changes to Medicaid. Officials who administer health benefits for California state and local governments noted that much of the cost increase for the health care market is due to health care inflation and demographic factors that are outside of their control. At the same time, with respect to health care, there are also factors that are within their control to help manage these costs, such as their program’s benefit design and eligibility criteria. Aside from prefunding through establishment of a trust, several states have taken steps to address escalating costs of retiree health benefits by negotiating lower premium costs and/or reducing benefits. For example, as in the private sector, some public employers have negotiated lower premiums by increasing the deductibles, co-payments, and coinsurance that employees must pay out of pocket. In addition, several states have introduced requirements that employees must work a certain number of years before becoming eligible for various levels of retiree health benefits. California introduced such vesting requirements for partially paid retiree health benefits for workers hired in 1985 and thereafter; and Michigan introduced similar requirements in 1997. In 2006, North Carolina enacted legislation requiring that employees hired after February 1, 2007, must have 20 years of service to be eligible for retiree health benefits. Other states have reduced the benefits provided and/or instituted health savings accounts. Oregon has discontinued its retiree health care support for those hired since 2003. 
Also, to reduce state costs, Utah recently discontinued its policy of providing retirees a month of health insurance for every day of unused sick leave (a policy initiated when health insurance costs were substantially lower). Instead, Utah now deposits wage amounts equal to unused sick leave into health savings accounts that retirees can use to purchase their own health insurance. State and local governments indicated that they may take a range of actions in response to the new GASB standards. At the locations we visited, all the officials we spoke with said that their governments were planning to comply with the new standards and report their liability for retiree health benefits. However, while various options were being discussed, none of the officials we spoke with said that their governments had developed plans to address their unfunded liabilities. In California, for example, the governor had established a 12-member Public Employee Post-Employment Benefits Commission to propose ways to address the state's growing postemployment benefits and retiree health care obligations, with a recommended plan due by January 1, 2008. According to Oregon retirement system officials, their state had also formed a workgroup to study options related to GASB 45. In Detroit, the city budget director said that city officials would wait to find out if any practices emerge that gain wide support before deciding their next steps. San Francisco is also taking a wait-and-see approach with respect to devising a strategy for dealing with the unfunded liability. A senior city official said that the city wants to have several years’ experience estimating the unfunded liability to feel confident that the estimates are valid before negotiating any remedies with the unions. Otherwise, he noted, if the costs end up being greater than anticipated, it could be difficult to reopen negotiations with the unions and the city would then have to deal with the greater costs on its own. 
Across the state and local government sector, the ability to maintain current levels of public sector retiree benefits will depend, in large part, on the nature and extent of the fiscal challenges these governments face in the years ahead. While public sector workers have thus far been relatively shielded from many of the changes that have occurred in the private sector, provisions that lend stability for public sector pensions and retiree health benefits are subject to change. Pension benefits are often protected by state constitutions and city charters, but these protections can be amended if voters feel the need to rebalance priorities as fiscal pressures increase. In fact, our recent work on state and local government fiscal conditions indicated that persistent fiscal challenges will likely emerge within the next decade. Retiree health benefits are generally easier to change simply through the annual budget process. As we heard from some state officials, the impetus for changing retiree benefits often surfaces when the projected costs for these benefits start to grow faster than expected. When this occurs, governments may eventually have little choice but to reduce future benefits or raise taxes. One way state and local governments can address unexpected gaps in funding is to prefund the promised benefits. Even though our simulation suggests that the sector as a whole is generally on track with funding its pension obligations, continued diligence will be necessary to ensure that funding is adequate in the future. When state and local governments take breaks from their regular contribution schedules, such as when investment returns are high, they may be putting their ability to pay future retiree benefits at risk. According to our simulation for state and local governments, to ensure that they have the resources they need to meet future costs, they will have to maintain (and as a sector, increase slightly) their contributions to their pension funds.
Moreover, our long-term projections indicate that if future returns turn out lower than expected, governments may need to ratchet up their contributions substantially. The provision of retiree health benefits presents an entirely different scenario. Given that our simulations show that over the next several decades, the cost of providing health care benefits for public sector retirees will more than double as a share of salaries, state and local governments may find it difficult to maintain current benefit levels. It is clear from our model and from discussions with budget officials that health care inflation is driving these future costs. Budget officials with whom we spoke said that they will face challenges financing future health care benefits in general—including Medicaid benefits and health benefits for active government employees, not just for their retirees. As state and local governments begin to comply with GASB reporting standards, information about the future costs of the retiree health benefits will become more transparent. Policy makers, voters, and beneficiaries can use this new information to begin a debate on ways to control escalating health care costs, the appropriate level of future benefits to be provided to public sector retirees, and who should pay for them. We provided officials from the Internal Revenue Service with a draft of this report. These officials provided us with informal technical comments that we have incorporated in the report, where appropriate. In addition, we provided GASB officials and officials from the states and cities we visited with portions of the draft report that addressed aspects of the pension funds and retiree health benefit programs in their jurisdictions. They, too, provided us with comments that we incorporated in this report, where appropriate. Finally, we also benefited from comments provided by two external reviewers knowledgeable about the subject area.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to relevant congressional committees, the Acting Commissioner of Internal Revenue, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-7215. Key contributors are listed in appendix V. One of the primary costs of state and local governments is the salaries and benefits of employees, and part of those costs are the pensions and other postemployment benefits of retirees of state and local governments. This appendix provides information on the development of simulations of future pension and health care costs for retirees of state and local governments. These analyses are part of a larger GAO effort that examines the potential fiscal condition of the state and local sector for many years into the future, and are an aggregate analysis of the entire state and local sector—no individual governments are examined. This appendix provides information on (1) the development of several key demographic and economic factors such as future employment, retirement, and wages for the state and local workforce that are necessary for the simulations of future pension and retiree health care costs; (2) how we project the necessary contribution rate to pension funds of state and local governments; and (3) how we project the future yearly pay-as-you-go costs of retiree health benefits. Key underlying information for the pension and health care cost simulations relates to future levels of employment, retirees, and wages. 
In particular, to understand the postretirement promises that the sector has and will continue to make, we need to project the number of employees and retirees in each future year, as well as the dollar value of pension benefits that will be earned and the extent to which those benefits will be funded through employee contributions to pension funds. These analyses relate to defined benefit plans only. We project the following key factors for each year during the simulation time frame: (1) the number of state and local government employees, (2) the state and local government real wages, (3) the number of pension beneficiaries, (4) average real benefits per beneficiary, and (5) yearly employee contributions to state and local government pension plans.

1. Steps to Project Future Employment Levels

Future growth in the number of state and local government retirees—many of whom will be entitled to pension and health care benefits—is largely driven by the size of the workforce in earlier years. To project the level of employment in each future year, we assume that state and local employment grows at the same rate as total population under the intermediate assumptions of the Old-Age and Survivors Insurance and Disability Insurance (OASDI) Trustees. The implication of these assumptions is that the ratio of state and local employment to the total population remains constant. The Trustees assume that population growth gradually declines from 0.8 percent during the next decade to a steady rate of 0.3 percent per year beginning in 2044. Accordingly, state and local government employment displays the same pattern in our projections. The relationship used to project total state and local government employment (egslall) is shown in equation 1:

1) egslall = egslall(-1) * (np / np(-1))

where: np is population in the indicated year; egslall is the number of state and local employees in the indicated year.

2.
Steps to Project Future State and Local Government Real Wage

The pension benefits that employees become entitled to are a function of the wages they earned during their working years. As described below, we developed a rolling average real wage index for different cohorts of workers to estimate the average real pension benefit of the recipient pool in each future year. First, we assume that the real employment cost index for the state and local sector (jecistlcr) will grow at a rate equal to the difference between the Congressional Budget Office (CBO) assumptions for the growth in the employment cost index (ECI) for private sector wages and salaries and inflation as measured by the consumer price index for all urban consumers (CPIU), as published in the January 2007 CBO Budget and Economic Outlook. These data are available through 2017. For later years, we hold the growth rate constant at the rate that CBO assumes between 2016 and 2017. CBO's assumptions for growth in the ECI and the CPIU are 3.3 percent and 2.2 percent per year, respectively, implying real wage growth of 1.1 percent per year during the simulation time frame. Since the analysis is scaled to the real wage bill over the simulation time frame, we calculate that aggregate amount for each future year. As shown in equation 2, aggregate real wages are assumed to grow at the combined rate of growth in the real employment cost index (jecistlcr) and employment (egslall):

2) gsclwageallr = gsclwageallr(-1) * (jecistlcr / jecistlcr(-1)) * (egslall / egslall(-1))

where: jecistlcr is the real employment cost index in a given year; gsclwageallr is the real wage bill of the state and local sector.

As noted previously, population growth slows from 0.8 percent in the upcoming decade to a steady rate of 0.3 percent after 2044. Because population growth drives employment in our projections, this slowdown implies that aggregate real wage growth slows from 1.9 percent per year to a steady long-run rate of 1.4 percent.

3.
Steps to Project Growth in the Number of Pension Beneficiaries

While actuaries use detailed information and assumptions regarding the age, earnings, service records, and mortality rates applicable to the entities they evaluate, information in such detail is not available for the state and local government sector as a whole. This lack of detailed data necessitated the development of a method of projecting aggregate state and local beneficiary growth that is much simpler than the methods that actuaries employ. The method we developed reflects the logic that each year's growth in the number of beneficiaries is linked to past growth in the number of employees. Total state and local government employment from 1929 through 2005 was obtained from the National Income and Product Accounts (NIPA) tables 6.4a, b, c, and d. The Census Bureau provided a continuous series of data on the number of state and local pension beneficiaries for 1992 through 2005, the period for which continuous observations were available. Cyclical swings in the employment series were removed using a Hodrick-Prescott filter. Then, both the employment and beneficiary series were logged and first-differenced, transforming the data from levels to proportionate changes. We developed a routine that searched across 45 years of lagged employment growth to select a set of weights for the years in which past employment growth best explained a given year's growth in beneficiaries. The routine included the restrictions that the weights must be non-negative and sum to 1. The method produced the relationship shown in equation 3, where beneficiaries is equal to the state and local pension benefit recipients, egslall is state and local employees, and the coefficients are weights, derived from the estimation, that reflect the contribution of a particular past year's employment change in explaining a given year's change in retirees.
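The constrained weight search described above can be sketched as follows. This is an illustration on synthetic data, not GAO's routine or data: scipy's SLSQP solver stands in for the search, the lag window is shortened from 45 to 6 years, and all series names are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-ins for the logged, first-differenced series: in the
# report these come from HP-filtered NIPA employment and Census Bureau
# beneficiary counts. Here, "true" weights on lags 2 and 5 generate
# beneficiary growth from employment growth.
n_obs, n_lags = 200, 6
emp_growth = rng.normal(0.01, 0.005, size=n_obs + n_lags)
true_w = np.zeros(n_lags)
true_w[1], true_w[4] = 0.6, 0.4  # weight on lag 2 and lag 5

# X[t, i] holds employment growth (i + 1) years before observation t.
X = np.column_stack([emp_growth[n_lags - i:n_lags - i + n_obs]
                     for i in range(1, n_lags + 1)])
ben_growth = X @ true_w

# Least squares subject to the report's restrictions: weights must be
# non-negative and sum to 1.
res = minimize(lambda w: np.sum((ben_growth - X @ w) ** 2),
               x0=np.full(n_lags, 1.0 / n_lags),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * n_lags,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w_hat = res.x
```

With noise-free synthetic data the solver recovers the generating weights; on the actual series, the fitted weights identify which past years' hiring best explains current growth in retirees.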
In particular, the estimated relationship suggests that beneficiary growth in a given year is largely determined by employment growth 34, 21, 22, and 23 years prior to the given period. This pattern appears consistent with the categories of workers that the sector employs. Many fire and police positions, for example, offer faster pension accrual or early retirement due to the physical demands and risks of the work, while many other state and local workers have longer careers.

3) d.log(beneficiaries) = sum over lags i of w(i) * d.log(egslall(-i)), with each weight w(i) non-negative and the weights summing to 1

where: beneficiaries is the number of retirees receiving pensions in the indicated year.

4. Steps to Project Real Benefits per Beneficiary

While, in the long run, the average real benefit level should grow at the same rate as real wages—that is, at 1.1 percent per year—in the first decades of the projection the average real benefit will be affected by real wage changes that occurred before the projection period. Accordingly, we developed a relationship that reflects how the average real benefit level will change over time according to changes in the number and average real benefit level of three subsets of the retiree population: (1) new retirees entering the beneficiary pool, (2) new decedents leaving the pool, and (3) the majority of the previous year's retirees who continue to receive benefits during the given period. Each group's real benefit is linked to the real wage level in the average year of retirement for that group. Thus, to determine the average real benefit overall in any future year, we need weights and real wage indexes for the three groups that can be used to develop a rolling average real wage of the recipient pool in each future year. Equation 3 above projects the percentage change in the total number of beneficiaries between two successive years, but this difference is actually composed of two elements: the percentage change in new retirees minus the percentage change in decedents.
Therefore, to determine the weight for new retirees, we also need an estimate of the number of new decedents in each year. In order to estimate a "death rate," we utilize Social Security Administration data on terminated benefits and total OASDI recipients, which excludes disability recipients. Our estimate of the "death rate" for the forecast period is assumed to be equal to the number of terminated Social Security recipients divided by the total number of OASDI recipients in 2003 (3.67 percent). This analysis then enables a derivation of weights for each of the three groups as follows: the weight for new retirees is the number of beneficiaries this year, less the number of beneficiaries last year who are still alive, divided by the number of beneficiaries this year; the weight for continuing recipients is equal to last year's beneficiaries divided by this year's beneficiaries; and the weight for the deceased is the death rate (3.67 percent) multiplied by last year's beneficiaries divided by this year's beneficiaries. Mathematically, the weights are calculated as follows:

4a) weight(new) = (beneficiaries - (1 - 0.0367) * beneficiaries(-1)) / beneficiaries
    weight(continuing) = beneficiaries(-1) / beneficiaries
    weight(deceased) = 0.0367 * beneficiaries(-1) / beneficiaries

Next, we need to identify the real employment cost index that determines the real benefit level for each of these three groups. We do so by estimating the average retirement year applicable to each of the three groups. First, we assume the average retirement age is 60. We developed this estimate based on an analysis of the March Supplement to the Current Population Survey (CPS) for 2005-2006, which indicated that the average state and local government retiree had retired at 60 years of age. We also analyzed detailed data on the age distribution of OASDI recipients provided by the Office of the Actuary of the Social Security Administration. These data showed that the average age for new decedents is about 81 during the initial years of OASDI's simulations, and we thus used a 21-year lag—81 minus 60—to estimate the real wage applicable to this group.
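The weight construction in equation 4a, and the rolling average it feeds, can be sketched as below. The function names are ours, not GAO's; the 21-year lag reflects the average decedent age of about 81 just discussed, and the 8-year lag for continuing retirees reflects the average retiree age of 68 derived from the CPS.

```python
DEATH_RATE = 0.0367  # terminated OASDI recipients / total recipients, 2003

def recipient_weights(ben_now, ben_prev, death_rate=DEATH_RATE):
    """Equation 4a: weights for new retirees, continuing recipients,
    and decedents; by construction, new + continuing - deceased = 1."""
    w_new = (ben_now - (1.0 - death_rate) * ben_prev) / ben_now
    w_cont = ben_prev / ben_now
    w_dead = death_rate * ben_prev / ben_now
    return w_new, w_cont, w_dead

def rolling_avg_eci(eci, t, ben_now, ben_prev):
    """Rolling average employment cost index of the retiree pool:
    continuing retirees carry the index from 8 years before year t,
    new retirees the current index, and decedents (subtracted) the
    index from 21 years before year t."""
    w_new, w_cont, w_dead = recipient_weights(ben_now, ben_prev)
    return w_cont * eci[t - 8] + w_new * eci[t] - w_dead * eci[t - 21]

# Sanity check: with a flat cost index, the pool average is that index.
flat_index = [100.0] * 30
pool_avg = rolling_avg_eci(flat_index, 25, ben_now=1050.0, ben_prev=1000.0)
```

Because the three weights net to exactly 1, a constant index passes through unchanged; with a rising index, the pool average lags the current index, reflecting benefits fixed at each cohort's retirement-year wage level.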
For the newly retired group, we use the current year's employment cost index. For the remaining retirees—those already retired and remaining in the group—we use information from the CPS for 2005 that indicated that the average age of a retired state or local retiree was 68. Therefore, we apply an 8-year lag to the real employment cost index to determine real benefits of this group. Using the weights shown in equation 4a and the appropriate periods' values for the real employment cost index (jecistlcr), the rolling average jecistlcr is constructed as follows:

5) wjecistlcr = weight(continuing) * jecistlcr(-8) + weight(new) * jecistlcr - weight(deceased) * jecistlcr(-21)

where: wjecistlcr is the rolling average employment cost index for retirees in year t. This equation approximates the average employment cost index at retirement of the retiree pool in a given year. To do this we take the employment cost index 8 years prior to the given year and weight it by the portion of the total retirees in the given year who were already retired last year. We add to this a factor to account for new retirees, who have a higher employment cost index because they just retired. Finally, because some of last year's retirees have since died, the first factor overstates the number of retirees, and therefore we subtract a factor for those who have died, weighted by the cost index 21 years ago, when, on average, this group entered retirement.

The purpose of the pension simulations is to estimate the level of contribution that state and local governments would need to make each year going forward to ensure that their pension systems are fully funded on an ongoing basis. In the previous section we calculated a variety of critical demographic and economic factors that are necessary for this analysis. In the following section, we describe our basic formulation and sensitivity analysis for employer contributions to pension funds.
The necessary contribution rate can now be derived according to a simple concept: the present value of future pension benefits, minus the sum of 2006 pension fund financial assets and the present value of employee contributions, all divided by the present value of future wages. The starting value of pension assets for state and local government pension plans—approximately $2.979 trillion in 2006—is obtained from the Federal Reserve Flow of Funds Accounts. Future wages are simulated within our model. The logic of this estimation is that the benefits that are promised to employees (including promises already made and those that will be made in the future) must be paid from three sources: existing pension funds in 2006, contributions that employees will continue to make to those funds in the future, and contributions that employers will make to those funds in the future. Our analysis estimates the steady level of employer contribution, relative to wages, that would need to be made in every year between 2006 and 2050 to fully fund promised pension benefits. Although we are only interested in developing necessary contribution rates over the simulation time frame—that is, until 2050—we actually have to derive the contribution rate for a longer time frame in order to find the steady state level of necessary contributions. This longer time frame is required because the estimated contribution rate increases as the projection horizon increases and eventually converges to a steady state. If the projection period is of insufficient length, the steady level of contribution is not attained and the necessary contribution rate is understated. As such, all of the flows in the calculation extend 400 years into the future. We use a real rate of return on pension assets of 5.0 percent (rpenreal) to discount future flows when deriving present values. Equation 6 expresses this estimate of the employer contribution rate mathematically.
6) contribution rate = (PV(benefits) - pension assets in 2006 - PV(employee contributions)) / PV(wages)

where: PV denotes a present value discounted at rpenreal, and rpenreal is the real rate of return on pension assets.

Applying this analysis, we found that in aggregate, state and local government contributions to pension funds would need to increase by less than half a percentage point of wages to fund, on an ongoing basis, the pension liabilities they have accrued and will continue to accrue. In particular, the 2006 pension contributions for the sector amounted to 9 percent of wages, and our base case estimate is that the level would need to be 9.3 percent each year to fully fund pensions. We altered certain of our assumptions to examine the sensitivity of our model results. We found that the model results are highly sensitive to our assumptions regarding the expected real yield. For our primary simulations, we based the expected real yield on actual returns on various investment instruments over the last 40 years as well as the disposition of the portfolio of assets held by the sector over the last 10 years. This generated a real yield of 5 percent. But some pension experts have expressed concern that returns on equities in the future may not be quite as high as those in the past. In fact, some analysts believe that an analysis of this type should consider only "riskless returns." Under such an approach we would assume that all pension funds are invested in very safe financial instruments such as government bonds. We estimated the necessary steady level of employer contributions holding all elements in the model stable except the real expected yield. In particular, we analyzed a 4 percent real yield and a 3 percent real yield—the latter of which is a reasonable proxy for a riskless rate of return. We found that if returns were only 4 percent, the necessary contribution rate would rise to 13.9 percent, and if we used a risk-free return of roughly 3 percent, the necessary contribution rate would need to be much higher—nearly 18.6 percent of wages.
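A stylized version of equation 6's calculation, using invented cash flows rather than GAO's data and a shortened horizon, shows how the required contribution rate responds to the assumed real yield:

```python
def required_rate(benefits, wages, emp_contrib, assets, r):
    """Equation 6 in miniature: (PV of promised benefits - current fund
    assets - PV of employee contributions) / PV of future wages, with
    every flow discounted at the assumed real yield r."""
    pv = lambda flows: sum(f / (1.0 + r) ** t for t, f in enumerate(flows, 1))
    return (pv(benefits) - assets - pv(emp_contrib)) / pv(wages)

YEARS = 100                                    # report uses a 400-year horizon
wages = [100.0] * YEARS                        # flat real wage bill (made up)
emp_contrib = [0.04 * w for w in wages]        # employees contribute 4% of wages
benefits = [min(2.0 + 1.0 * t, 60.0) for t in range(YEARS)]  # payouts build late
assets = 100.0                                 # starting fund balance (made up)

# Required employer rate under three real-yield assumptions.
rates = {r: required_rate(benefits, wages, emp_contrib, assets, r)
         for r in (0.05, 0.04, 0.03)}
```

Because the benefit payments fall later, on average, than the wages that fund them, a lower assumed yield raises their present value disproportionately, so the required rate climbs as the yield falls, the same direction as the report's 9.3, 13.9, and 18.6 percent results.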
On the other hand, if real returns were higher than our base case level—perhaps 6 percent—the necessary contribution rate would be only 5.0 percent, much lower than their current contribution rate. Most state and local governments pay for retiree health benefits on a pay-as-you-go basis—that is, these benefits are generally not prefunded. We made projections of the pay-as-you-go cost of retiree health benefits for the sector, as a percentage of wages, in each year until 2050. To estimate the costs of retiree health benefits in future years, we made many of the same assumptions as for the pension analysis. In particular, we use the same method to develop projections of employment in the sector, the number of retirees, and the level of wages. An additional assumption for the health care analysis is that in future years, the same percentage of retirees of state and local governments will be enrolled in health insurance through their previous employer as we observe were enrolled in 2004—the most recent year for which data were available. To develop this measure, we use data from two sources. The Census Bureau's State and Local Government Employee-Retirement System survey provided data on the total number of state and local retirees, and the Health and Human Services Department's Medical Expenditure Panel Survey provided data on state and local government retirees who are covered by prior employer-provided health insurance. On the basis of these data sources, we found that the share of retirees with health insurance is 42 percent, and we hold this constant through the simulations. From the latter data source we also obtain state and local government spending on health care for retirees in the most recent year. One of the central assumptions we must make to estimate the pay-as-you-go health care costs for retirees in future years is the cost growth of health insurance.
The cost of health care has been growing faster than gross domestic product (GDP) for many years. As such, we developed assumptions about how much faster health care costs would grow, relative to the economy, in future years. The extent to which the per person cost of health care is expected to grow beyond GDP per capita is called the "excess cost factor." We developed these estimates based on our own research and discussions with experts. In particular, we assume that the excess cost factor averages 1.4 percentage points per year through 2035, and then begins to decline, reaching 0.6 percentage points by 2050. Using these assumptions, we develop a growth projection for the per capita costs of health care for retirees each year through 2050. The following equation shows that health care costs are assumed to grow with GDP per capita plus this excess cost factor:

7) (retgslchlth / rethlth) = (retgslchlth(-1) / rethlth(-1)) * (hlthnheexcgr) * ((gdp / np) / (gdp(-1) / np(-1)))

where: retgslchlth is the aggregate health care cost for the sector; rethlth is the number of retirees with health insurance; hlthnheexcgr is the excess cost factor for health insurance.

We found that the projected costs for retiree health benefits, while not a large component of state budgets, will more than double as a percentage of wages over the next several decades. In 2006, these costs amounted to approximately 2.0 percent of wages, and we project that by 2050, they will grow to nearly 5.0 percent of wages—a 150 percent increase. As with the projections of necessary pension contributions, our estimates of retiree health benefit costs are highly sensitive to certain of our assumptions. In particular, the assumptions regarding health care cost growth are critical. For example, if health costs were to rise only at the rate of GDP per capita, these costs would grow, as a percentage of wages, from 2 percent today to 2.9 percent by 2050.
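The growth mechanics of equation 7 can be sketched as below. The excess cost schedule follows the report's stated assumptions; the GDP-per-capita growth rate is an illustrative stand-in we chose, not a GAO figure.

```python
def excess_factor(year):
    """Report assumption: excess cost growth of 1.4 percentage points
    per year through 2035, declining linearly to 0.6 points by 2050."""
    if year <= 2035:
        return 0.014
    return 0.014 - (0.014 - 0.006) * (year - 2035) / (2050 - 2035)

GDP_PC_GROWTH = 0.018  # assumed annual real GDP-per-capita growth (illustrative)

# Equation 7: per-retiree health cost grows with GDP per capita plus the
# excess cost factor. Costs are normalized to 1.0 in 2006.
cost = {2006: 1.0}
for year in range(2007, 2051):
    cost[year] = (cost[year - 1]
                  * (1.0 + excess_factor(year))
                  * (1.0 + GDP_PC_GROWTH))

# Dividing out GDP-per-capita growth isolates the cumulative excess:
# under this schedule, per-retiree costs outgrow the economy by roughly
# 70-75 percent over 2006-2050.
rel_2050 = cost[2050] / (1.0 + GDP_PC_GROWTH) ** (2050 - 2006)
```

This isolated excess-growth component is what pushes the sector's retiree health costs up relative to wages, independent of any change in the number of covered retirees.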
Conversely, if health costs were to grow by twice the rate we assume in the base case, these costs, as a percentage of wages, would constitute 8.4 percent by 2050.

[Table fragments: approximately 81% of all state and local workers statewide; occupational categories include law enforcement, correctional officers, firefighters, and sworn correctional employees.]

By law, the CalPERS health care program provides coverage to state employees, retirees, and their families. In addition, most local public agencies and school employers can contract to have CalPERS provide these benefits to their employees (whether or not they contract for the CalPERS retirement program). As of 2006, 1,137 entities participated in the program. Health plans offered, covered benefits, monthly rates, and co-payments are determined by the CalPERS Board, which reviews health plan contracts annually. Employers make a contribution toward the member's monthly premiums, with members covering the difference between the employer's contribution and the actual premium amount. The employer contribution rate is normally established through collective bargaining agreements.

[Table fragments: number of employers; retiree health benefits through the CalPERS health care program (see above); a plan closed to legislators elected on or after 11/7/90; a plan not eligible for the CalPERS health care program.]

According to a report from the Legislative Analyst's Office, schools and community college districts vary widely in the health benefits they provide their retirees. For example, in 2004, 114 districts contracted with CalPERS for employee and retiree health coverage; about 265 purchased coverage through 11 benefit trusts, which allow multiple districts to join together to achieve economies of scale; 250 participate in the Self-Insured Schools of California joint powers agency, administered by Kern County; and the remaining districts either secure health benefits on their own or do not provide these benefits.
The University of California offers continuation of medical, dental, and legal insurance to eligible members who elect monthly retirement income. Health and welfare benefits are not accrued or vested benefit entitlements. The University of California's contribution toward the cost of medical and dental coverage is determined by the University of California and may change or stop altogether. (If a retiree elects a lump-sum cashout, all rights to continue retiree medical, dental, and legal benefits are waived.)

[Table fragment: approximately 19% of all state and local workers statewide.]

A September 2005 survey by the California State Association of Counties found that of the 49 counties responding (of 58 total), including 8 of the 10 largest counties, 48 reported that retired employees are eligible for some type of health benefits.

[Table fragment: general (all others).] Retirees are entitled to continue membership in the city's Health Services System. Any premiums payable for coverage may be deducted from the retirement payment.

[Table fragments: percentage participating in Social Security (2004); state and local government pension plans; pension plans, by level of administration; occupations covered; approximately 87% of all state and local workers statewide.]

Michigan's Department of Civil Service, Employee Benefits Division, administers health insurance contracts for both active and retired state employees. For those in the defined benefit retirement plan (i.e., those hired before 3/31/97), current health plan premiums are 95% state-paid for retirees under age 65, and 100% state-paid for Medicare-eligible retirees. Dental and vision premiums are 90% state-paid. For those in the defined contribution retirement plan, there is a 10-year vesting requirement with an employer contribution of 3% for each year of service, capped at 90%.

[Table fragments: pension plans, by level of administration; occupations covered; 716.] Retirees have the option of health coverage, which is funded on a cash disbursement basis by the employers.
The system has contracted to provide the comprehensive group medical, hearing, dental, and vision coverage for retirees and beneficiaries. A significant portion of the premium is paid by the system, with the balance deducted from the monthly pension. (Pension recipients generally are eligible for fully paid master health plan coverage and 90% paid dental, vision, and hearing plan coverage.)

Under the Michigan State Police Retirement Act, all retirees have the option of continuing health, dental, and vision coverage. Retirees with this coverage contribute 5%, 10%, and 10% of the monthly premium amount for the health, dental, and vision coverage, respectively. The state funds 95% of the health and 90% of the dental and vision insurance.

Under state law, all retirees and their dependents and survivors receive health, dental, and vision insurance coverage.

[Table fragment: 159.] Supreme Court justices, Court of Appeals judges, and elected officials may enroll in the state health plan when they retire, and their premium rate is subsidized. All other judges may enroll in the state health plan if they wish to, but they must pay the entire premium cost.

[Table fragment: 685.] MERS Premier Health provides group health coverage for public employers, including employee and retiree medical, prescription drug, dental, and vision benefits. (MERS also offers a Group Life and Disability Insurance Program.)

The city will continue to pay the cost of hospitalization insurance, in accordance with collective bargaining agreements and city council resolutions in effect at the time of retirement. After age 65, if you are eligible for Medicare, the city will provide a supplement to your Medicare benefits. According to city officials, in the early 1980s, the city instituted a cost-sharing formula with general city employees and retirees for the cost of hospitalization insurance. The formula included multiple tiers reflecting various collective bargaining agreements and city council resolutions.
In the last round of contract negotiations, however, the cost-sharing formula for general city employees was modified to an 80% city, 20% employee/retiree split. Members and their dependents are eligible for OPERS health care coverage if the member is receiving a retirement allowance or benefit under the system. Sources: U.S. Bureau of Labor Statistics Data. State and Area Employment, Hours, and Earnings. Not seasonally adjusted. State and Local Governments, 2006. Barry T. Hirsch and David A. Macpherson. Union Membership and Earnings Data Book. The Bureau of National Affairs, Inc., Washington, D.C.: 2006, Table 5a. U.S. Census Bureau Data. State and Local Governments Employee-Retirement Systems. 2005 data file. Employee-Retirement Systems of State and Local Governments: 2002 (2002 Census of Governments, Volume 4, Number 6, GC02(4)-6) U.S. Government Printing Office, Washington, D.C.: December 2004, Table 10. As we went to press, the most recent annual report available online for the Detroit General Retirement System was for 2005. The Governmental Accounting Standards Board (GASB) is an independent, private sector, not-for-profit organization that establishes standards of financial accounting and reporting for U.S. state and local governments. Governments and the accounting industry recognize the GASB as the official source of generally accepted accounting principles (GAAP) for state and local governments. GASB standards are intended to result in useful information for users of financial reports, and to guide and educate the public—including issuers, auditors, and users—about the implications of those financial reports.
Standards relevant to state and local government retiree benefits are listed below. In addition to the contact named above, Bill J. Keller, Assistant Director; Amy D. Abramowitz; Joseph A. Applebaum; Susan C. Bernstein; Gregory J. Giusto; Richard S. Krashevski; Bryan G. Rogowski; Jeremy S. Schwartz; Margie K. Shields; Jacquelyn D. Stewart; Craig H. Winslow; and Walter K. Vance made important contributions to this report. CanagaRetna, Sujit M. America’s Public Retirement System [Stresses in the System]. The Council of State Governments: October 2004. Employee Benefit Research Institute. Fundamentals of Employee Benefit Programs, Part Five: Public-Sector Benefits. Chapters 39 through 46. EBRI, Washington, D.C.: 2005. Legislative Analyst’s Office. Retiree Health Care: A Growing Cost for Government. LAO, Sacramento, California: February 17, 2006. Mattoon, Richard H. “Issues Facing State and Local Government Pensions.” Federal Reserve Bank of Chicago: Economic Perspectives, third quarter, 2007. National Conference on Public Employee Retirement Systems. Public Pensions & You. NCPERS, Washington, D.C.: 2006. Rajnes, David. State and Local Retirement Plans: Innovation and Renovation. (EBRI Issue Brief Number 235) EBRI, Washington, D.C.: July 2001. Ruppel, Warren. Wiley GAAP for Governments 2007: Interpretation and Application of Generally Accepted Accounting Principles for State and Local Governments. John Wiley and Sons, Inc., Hoboken, New Jersey: February 2007. Schneider, Marguerite. “The Status of U.S. Public Pension Plans: A Review with Policy Considerations.” Review of Public Personnel Administration, Vol. 25, No. 2, June 2005: 107-137. Schneider, Marguerite, and Fariborz Damanpour. “Public Choice Economics and Public Pension Plan Funding: An Empirical Test,” Administration & Society, Vol. 34, No. 1 (March 2002).
The article bases its findings on an analysis of data from the PENDAT (pension data) survey data of several hundred pension plans (see Harris listing below). Brainard, Keith. Public Fund Survey Summary of Findings for FY 2005. NASRA, Georgetown, Texas: September 2006. The source of data for this survey is primarily public retirement system annual financial reports, and also includes actuarial valuations, benefits guides, system Web sites, and input from system representatives. The survey is updated continuously as new data, particularly annual financial reports, become available. Harris, Jennifer D. 2001 Survey of State and Local Government Employee Retirement Systems Survey Report. Public Pension Coordinating Council: March 2002. This report presents summary statistical analysis of state and local government employee retirement systems surveyed by the Public Pension Coordinating Council in the summer of 2001. The purpose of the survey was to obtain in-depth information about the current practices of public retirement systems regarding their administration, membership, benefits, contributions, funding, and investments. In 2001, 152 public employee retirement systems responded to the council’s survey, representing 263 retirement plans. The data set from this survey is referred to as PENDAT. Hirsch, Barry T., and David A. Macpherson. Union Membership and Earnings Data Book. The Bureau of National Affairs, Inc., Washington, D.C.: 2006. The Data Book has been published annually since 1994. Each year’s edition includes current earnings and unionization figures based on compilations from the Current Population Survey (CPS), the survey of U.S. households conducted monthly by the U.S. Census Bureau. 
While data on earnings and unionization at the national level and highly aggregated groups of workers are provided by the Bureau of Labor Statistics, the purpose of the Data Book is to provide these data for states and metropolitan areas, and for workers within narrowly defined industries and occupations. Kaiser Family Foundation and Health Research and Educational Trust. Employer Health Benefits: 2006 Annual Survey. Kaiser/HRET, Washington, D.C.: 2006. For this survey, telephone interviews were conducted with human resource and benefits managers from January to May 2006, based on a sample of 2,122 employers drawn from a Dun & Bradstreet list of the nation’s private and public employers with three or more workers. The sample included 227 state and local governments. Each employer was asked as many as 400 questions about its largest health plans, including questions on the cost of health insurance, offer rates, coverage, eligibility, health plan choice, enrollment patterns, premiums, employee cost sharing, covered benefits, prescription drug benefits, retiree health benefits, health management programs, and employer opinions. Mercer. Results of Mercer’s Survey of Governmental Employers on GASB 45. Mercer Health & Benefits LLC: 2006. These results were based on 58 responses received from a survey, sent in May 2006, to state, county and city governments, and to public school boards, colleges, and universities. The survey was a follow-up sent to the state and local employers with at least 500 employees that had participated in the 2005 National Survey of Employer-Sponsored Health Plans. Moore, Cynthia L., Nancy H. Aronson, and Annette S. Norsman. Is Your Pension Protected? A Compilation of Constitutional Pension Protections for Public Educators. AARP, Washington, D.C.: 2000. This publication provides a compilation of constitutional pension protections in 50 states, specifically concentrating on retirement systems that serve retired educators.
The descriptions were reviewed by AARP and National Retired Teachers Association staffs, including the AARP Office of General Counsel. The constitutional context is current as of July 1998. According to one of the authors, however, although the report was done several years ago, there have been few changes in constitutional pension protections in recent years. National Association of Government Defined Contribution Administrators, Inc. 2006 Biennial State and Local Government Defined Contribution Plan Survey. NAGDCA, Lexington, Kentucky: 2006. This survey is conducted every 2 years, to obtain specific information on state and local governments’ 457 and 401(k) plans, and beginning with the 2006 survey, on their public 401(a) and 403(b) plans as well. The survey includes defined contribution plans that are the governments’ primary pensions plans, as well as those that are supplemental voluntary plans. In 2006, responses were received with information on a total of 105 state and local defined contribution plans, including 40 state 457 plans, 33 local government 457 plans, 10 state 401(k) plans, 3 local 401(k) plans, 11 state 401(a) plans, 4 local government 401(a) plans, 2 higher education 401(a) plans, 1 state 403(b) plan, and 1 higher education 403(b) plan. According to respondents, these plans held $87.9 billion in assets, received $6.2 billion in annual deferrals, and had approximately 1.6 million active participants in 2005. National Education Association. Characteristics of Large Public Education Pension Plans. NEA, Washington, D.C.: December 2006.
Information for this publication was gathered between July and September 2006, and was based on consolidated annual financial reports, state treasurers’ reports, actuarial valuations, system audits, legislative or plan-related review commissions, plan handbooks and newsletters, departments of human resources’ guidelines for electing trustees, state legislators’ and governors’ Web sites containing information on legislative changes, state or local statutes, and publicly available communications between government officials and plan participants. Ranade, Neela K. Employer-Sponsored Retiree Health Insurance: An Endangered Benefit? Congressional Research Service, Domestic Social Policy Division, Washington, D.C.: April 13, 2006. This report summarizes the current coverage levels for retiree health insurance for public and private sector retirees. It outlines the provisions that govern employer accounting for postretirement health insurance plans in both the public and private sectors, and describes the public policy options that may be considered by Congress to address the problems created by the erosion of employer-sponsored retiree health insurance plans. Segal. Results of the Segal Medicare Part D Survey of Public Sector Plans. The Segal Group, Inc., New York, New York: Summer 2006. In May 2006, the Segal Company, in cooperation with the Public Sector HealthCare Roundtable, asked public entities about the actions they were considering for their retiree health care programs as the Medicare Part D program was being implemented. Responses were received from 109 state and local plans. U.S. Bureau of Labor Statistics. Employee Benefits in State and Local Governments, 1998. (Bulletin 2531) U.S. Department of Labor, Bureau of Labor Statistics, Washington, D.C.: December 2000. This bulletin presents the results of the 1998 Bureau of Labor Statistics Employee Benefits Survey, conducted jointly with the bureau’s Employment Cost Index.
It is a survey of the incidence and detailed provisions of selected employee benefit plans in state and local governments. The 1998 survey provided representative data for 16.5 million employees, and its estimates cover all state and local government establishments in the United States. Data were collected from June 1998 to November 1998, from a sample of 1,011 government establishments chosen from unemployment insurance reports. The survey is to be updated again in 2008. U.S. Census Bureau. Census of Governments. A census of governments is taken at 5-year intervals as required by law under title 13, United States Code, Section 161. The 2002 census, similar to those taken since 1957, covers three major subject fields: government organization, public employment, and government finances. The unique and important nature of public employee retirement system data in the world of government finance requires the Census Bureau to conduct a universe survey each year (see next listing below). Thus, the starting point for this census of governments was the 2001 survey listing, which generated a final universe mail file of approximately 2,670 retirement systems. Results of the 2002 census are summarized in Employee-Retirement Systems of State and Local Governments: 2002 (2002 Census of Governments, Volume 4, Number 6, GC02(4)-6) U.S. Government Printing Office, Washington, D.C.: December 2004. U.S. Census Bureau. State and Local Governments Employee-Retirement Systems. An annual survey of public employee retirement systems administered by state and local governments throughout the nation. The 2005 State and Local Government Public Employee-Retirement Systems survey covered 2,656 public employee retirement systems for the fiscal years that ended between July 1, 2004, and June 30, 2005. U.S.
Department of Health and Human Services, Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey (MEPS). The Medical Expenditure Panel Survey, which began in 1996, is a set of large-scale surveys of families and individuals, their medical providers (doctors, hospitals, pharmacies, etc.), and employers across the United States. MEPS collects data on the specific health services that Americans use, how frequently they use them, the cost of these services, and how they are paid for, as well as data on the cost, scope, and breadth of health insurance held by and available to U.S. workers. The insurance component of the survey is conducted annually. A nationwide sample of employers, including state and local governments, is specifically designed so that national and state estimates of health insurance offerings can be made each year. Wisniewski, Stan, and Lorel Wisniewski. State Government Retiree Health Benefits: Current Status and Potential Impact of New Accounting Standards. (Workplace Economics, Inc., #2004-08) AARP, Washington, D.C.: July 2004. This publication is based on information from the Workplace Economics, Inc., proprietary database, developed over 15 years, on benefits provided to state government employees in all 50 states. The database is the product of an annual survey of state governments on their employee benefits as well as an analysis of state employee health insurance plan documents. In addition, data were gathered and analyzed from state governments’ annual financial reports. Workplace Economics, Inc. 2006 State Employee Benefits Survey. Workplace Economics, Inc., Washington, D.C.: 2006. The information in this report was collected by means of a written survey sent to all 50 states, followed by telephone and e-mail contacts to clarify information, and in some cases by confirmation with official documents or contacts with employee organizations.
Because most states offer multiple sets of benefits to different groups or categories of employees, survey respondents were instructed to provide information on benefits that cover the largest number of employees or that were otherwise deemed representative. The information reported reflects benefits in effect January 1, 2006. Zion, David, and Amit Varshney. “You Dropped a Bomb on Me, GASB.” (Americas/United States Equity Research, Accounting & Tax) Credit Suisse, New York: March 22, 2007. This report focuses on the OPEB obligations for each of the 50 states, along with the 25 largest cities in the United States, based on a review of each state’s comprehensive annual financial report, as well as other documents such as actuarial studies, bond offering documents, and U.S. Census data, and phone calls with state officials. Information was obtained on unfunded OPEB liabilities for 31 states. Among the other 19 states, it was determined that 3 states— Mississippi, Nebraska, and Wisconsin—had no OPEB plans. For the remaining 16 states, estimates were made by multiplying the number of full-time equivalent employees for each state (based on 2004 Census data) by $100,000, a rough estimate based on the data gathered on the 31 states. State and Local Governments: Persistent Fiscal Challenges Will Likely Emerge within the Next Decade. GAO-07-1080SP. Washington, D.C.: July 18, 2007. Retiree Health Benefits: Majority of Sponsors Continued to Offer Prescription Drug Coverage and Chose the Retiree Drug Subsidy. GAO-07-572. Washington, D.C.: May 31, 2007. Employer-Sponsored Health and Retirement Benefits: Efforts to Control Employer Costs and the Implications for Workers. GAO-07-355. Washington, D.C.: March 30, 2007. Private Pensions: Information on Cash Balance Pension Plans. GAO-06-42. Washington, D.C.: November 3, 2005. State Pension Plans: Similarities and Differences Between Federal and State Designs. GAO/GGD-99-45. Washington, D.C.: March 19, 1999. 
Public Pensions: Section 457 Plans Pose Greater Risk than Other Supplemental Plans. GAO/HEHS-96-38. Washington, D.C.: April 30, 1996. Public Pensions: State and Local Government Contributions to Underfunded Plans. GAO/HEHS-96-56. Washington, D.C.: March 14, 1996. An Actuarial and Economic Analysis of State and Local Government Pension Plans. PAD-80-1. Washington, D.C.: February 26, 1980. Funding of State and Local Government Pension Plans: A National Problem. HRD-79-66. Washington, D.C.: August 30, 1979.
State and local retiree benefits are not subject, for the most part, to federal laws governing private sector retiree benefits. Nevertheless, there is a federal interest in ensuring that all Americans have a secure retirement, as reflected in the special tax treatment provided for both private and public pension funds. In 2004, new government accounting standards were issued, calling for the reporting of liabilities for future retiree health costs. As these standards are implemented and the extent of the related liabilities becomes known, questions have been raised about whether the public sector can continue to provide the current level of benefits to its retirees. GAO was asked to provide an overview of state and local government retiree benefits, including the following: (1) the types of benefits provided and how they are structured, (2) how retiree benefits are protected and managed, and (3) the fiscal outlook for retiree benefits and what governments are doing to ensure they can meet their future commitments. For this overview, GAO obtained data from various organizations, used our model that simulates the fiscal outlook for the state and local sector, and conducted site visits to three states that illustrate a range of benefit structures, protections, and fiscal outlooks. Cognizant agency officials provided technical comments, which were incorporated as appropriate. The systems for providing retiree benefits to state and local workers--who account for about 12 percent of the nation's workforce--are composed of two main components: pensions and retiree health care. These two components are often structured quite differently. Importantly, state and local governments generally have established protections and routinely set aside monies to fund their retirees' future pension costs, but this typically has not been the practice for retiree health benefits.
A model GAO developed to simulate the fiscal outlook for state and local governments indicates that, for the sector as a whole, (1) estimated future pension costs (currently about 9 percent of employee pay) would require an increase in annual government contribution rates of less than a half percent, and (2) estimated future retiree health care costs (currently about 2 percent of employee pay) would more than double by the year 2050 if they continue to be funded on a pay-as-you-go basis. Because the estimates are very sensitive to the assumed rates of return and projected rates of health care inflation, the model also indicates that if rates were to fall below historical averages, the funding requirements necessary to meet future pension and health care costs could become much higher. Nevertheless, state and local governments generally have strategies to manage future pension costs. In contrast, many are just beginning to respond to the newly issued standards calling for the reporting of retiree health liabilities, and they generally have not yet developed strategies to manage escalating retiree health care costs. Across the state and local government sector, the ability to maintain current levels of retiree benefits will depend, in large part, on the nature and extent of the fiscal challenges that lie ahead--challenges driven primarily by the growth in health-related costs for Medicaid, and for active employees as well as retirees. In future debates on retiree benefits, policy makers, voters, and beneficiaries will need to decide how to control costs, the appropriate level of benefits, and who should pay the cost--especially for health care.
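The pay-as-you-go dynamic described above can be illustrated with a back-of-the-envelope projection. The 1.7-percentage-point excess of health cost growth over payroll growth below is an assumption chosen purely for illustration, not a parameter of GAO's model:

```python
# Illustrative sketch only: the excess growth rate is an assumed figure,
# not one taken from the report or GAO's simulation model.
cost_share = 0.02      # retiree health costs as a share of payroll, circa 2007
excess_growth = 0.017  # assumed annual growth of health costs beyond payroll

for _ in range(2007, 2051):   # compound through 2050
    cost_share *= 1 + excess_growth

print(f"Projected 2050 share of payroll: {cost_share:.1%}")
```

Even a modest excess growth rate compounds to more than double the payroll share by 2050, which is why the estimates are so sensitive to the assumed rate of health care inflation.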
Military personnel wounded since 2001—the exact number is not known— may qualify for financial benefits from several federal agencies. While SSA, VA, and DOD each provide disability benefits that are available to wounded warriors, these programs have different eligibility criteria and serve different purposes. SSA has two programs, Disability Insurance (DI) and Supplemental Security Income (SSI), to provide benefits for those whose disabilities prevent them from working. VA’s disability compensation program and DOD’s disability retirement program provide benefits to those with service-connected disabilities. (See app. II for a table summarizing the three agencies’ disability programs.) Estimates of the number of military personnel wounded, ill, or injured since 2001 vary widely depending on what conditions are counted and the source of data used. DOD has reported that about 34,000 servicemembers have been wounded in action in the OEF/OIF campaigns through the beginning of June 2009. On the other hand, our analysis of VA data shows that almost 250,000 OEF/OIF veterans were receiving VA disability compensation benefits as of July 2008. This figure may be larger than DOD’s estimate because, for example, some wounded warriors’ service- related medical conditions may have only appeared or been recognized after they separated from the military. Yet another estimate comes from a 2008 study by the RAND Corporation. Based on a survey of individuals deployed in support of OEF and OIF, this study estimated that more than 500,000 OEF/OIF veterans and servicemembers likely suffer from at least one of three mental health or cognitive conditions—PTSD, major depression, and probable TBI—that are likely linked to combat experience. The RAND estimate, unlike the VA estimate, includes wounded warriors whose medical conditions may not even have been medically diagnosed. 
To be eligible for either DI or SSI, an adult must be unable to engage in “substantial gainful activity”—typically work that results in earnings above a monthly threshold established each year by SSA—because of a medically determinable physical or mental impairment that is expected to last at least 12 months or result in death. Established in 1956, the DI program provides monthly benefits to individuals (and sometimes their dependents) whose work history qualifies them for disability benefits and whose impairment is disabling. To qualify for DI, individuals must have worked a certain minimum amount of time in employment covered by Social Security; the monthly amount of DI benefits is based on the worker’s past average monthly earnings. In fiscal year 2008, the average monthly DI benefit payment was $997. SSI is a means-tested income assistance program created in 1972 that provides a financial safety net for people who are aged, blind, or disabled, and have low incomes and limited assets. Unlike the DI program, SSI has no prior work requirements. The basic federal SSI benefit is the same for all individual beneficiaries. This basic, monthly SSI benefit may be reduced if an individual has other income or receives in-kind (noncash) support or maintenance. In fiscal year 2008, the average monthly SSI benefit payment was $476. Some individuals with disabilities can receive both DI and SSI benefits if they meet both DI’s work history requirements and SSI’s income and asset limits. The process to determine a claimant’s eligibility for SSA disability benefits is complex, involving several state and federal offices. A claimant first completes an application, or claim, for DI or SSI benefits, which includes information regarding illnesses, injuries, or conditions and a signature giving SSA permission to request medical records from medical care providers. 
Once the SSA field office staff verify that nonmedical eligibility requirements are met, the claim is sent to the state’s Disability Determination Services (DDS) office for determination of medical disability. If the claim is approved, a claimant will be notified and will receive benefits, including limited retroactive benefits for some DI claimants. If the claim is rejected, a claimant has 60 days to request that the DDS reconsider its decision. If the DDS reconsideration determination concurs with the initial denial of benefits, the claimant has 60 days to appeal and request a hearing before an SSA administrative law judge (ALJ). A claimant may appeal an unfavorable ALJ decision to SSA’s Appeals Council—which includes administrative appeals judges (AAJ) and appeals officers—and, finally, to the federal courts. SSA and DDS officials (for example, disability examiners, ALJs, and AAJs) determine disability based on evidence, such as medical findings and statements of functional capacity, obtained during the initial determination process and updated as necessary at each appeal level. VA’s disability compensation program compensates veterans for the average loss in civilian earning capacity that results from injuries or diseases incurred or aggravated during military service, regardless of current employment status or income. VA uses the Department of Veterans Affairs Schedule for Rating Disabilities (VASRD) as criteria to determine the disability percentage rating. Disability ratings range from 0 (least severe) to 100 percent (most severe) in increments of 10 percent. For example, an amputation of a thumb could result in a 40 percent rating, while an amputation of a ring finger could result in a 20 percent rating. If the veteran is found to have one or more service-connected disabilities with a combined rating of at least 10 percent, VA will pay monthly compensation, with amounts higher for those with higher ratings.
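When a veteran has multiple rated conditions, VA does not simply add the percentages; under its combined ratings method (38 CFR 4.25), each successive rating applies only to the capacity left by the others, and the result is rounded to the nearest multiple of 10. A minimal sketch of that arithmetic (it omits refinements such as the bilateral factor):

```python
def combined_rating(ratings):
    """Combine VA disability percentage ratings: apply each rating,
    most severe first, to the remaining 'whole person' capacity,
    then round the combined value to the nearest multiple of 10."""
    remaining = 100.0
    for r in sorted(ratings, reverse=True):
        remaining -= remaining * r / 100.0
    combined = 100.0 - remaining
    return int((combined + 5) // 10) * 10

# The text's example ratings of 40% (thumb) and 20% (ring finger)
# yield a combined value of 52, which rounds to 50 -- not 60.
```

The design point is that two 50 percent ratings combine to 75 (rounded to 80), not 100: a veteran is never rated more than 100 percent disabled.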
As of December 2008, a veteran without dependents rated at 20 percent disabled receives monthly compensation of $243, whereas a veteran without dependents rated at 100 percent disabled receives $2,673 monthly. The veteran can be re-evaluated if the extent of the veteran’s disability changes or if new or newly recognized medical conditions or illnesses occur and are determined to have been caused or aggravated by military service. The veteran may, upon re-evaluation, receive a different percentage rating than initially received. DOD’s disability system awards compensation to those servicemembers who are found to be no longer medically fit for duty, and, if they do not recover, separates them from the military. Like VA, DOD uses the VASRD as criteria to determine the disability percentage rating. The amount of DOD disability payment is based on whether the disability is service-related, the years of service, and the disability rating percentage. Servicemembers who receive a disability rating of 30 percent or higher—regardless of their years of service—generally will be retired and may be eligible for lifelong benefits, including retirement pay and health insurance for the servicemember and their family. These servicemembers are placed on the Permanent Disability Retired List (PDRL). Servicemembers with fewer than 20 years of service who are separated with a disability rating of 20 percent or less receive a single lump-sum severance payment; those with at least 20 years of service who receive a disability rating under 30 percent are placed on the PDRL and receive ongoing monthly benefits. Servicemembers may also be placed on the Temporary Disability Retired List (TDRL) if they are found to be medically unfit for duty by military examiners, but their service-related illnesses or injuries are not stable enough to assign them a permanent disability rating.
Once a permanent disability rating can be assigned, depending on the rating and the servicemember’s years of military service, DOD may move those on the TDRL to the PDRL, grant them a one-time severance payment, or find them fit to return to military service. Military servicemembers pay Social Security payroll taxes and may qualify for DI benefits on the basis of their service if they have a sufficient number of work quarters. Military service, for Social Security purposes, includes service in the Army, Navy, Air Force, Marines, and Coast Guard, including service in the Reserves and National Guard. Those who served in the military during certain periods of time are given credits that increase the earnings that SSA looks at in determining DI eligibility and benefit levels. Individuals are considered to have earned an additional $300 per quarter for military service between 1957 and 1977, and an additional $100 for every $300 actually earned—up to a maximum of $1,200—per calendar year between 1978 and 2001. In addition to DI benefits, wounded warriors may qualify for SSI benefits if they have low income and assets. Furthermore, active duty servicemembers may receive DI benefits while also receiving military pay. Social Security law gives SSA the discretion to determine when an individual is actually performing work at the substantial gainful activity level, regardless of the individual’s income. Accordingly, in determining eligibility for benefits in cases when an applicant is receiving military pay, but is also receiving medical treatment or is on limited duty status, SSA assesses not the servicemember’s actual earnings, but rather what the servicemember would be paid in the civilian labor market for their work. If the wage that the servicemember could theoretically receive in the civilian workforce is less than SSA’s substantial gainful activity level, then SSA will not automatically disqualify the servicemember from receiving disability benefits.
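The deemed military wage credits described above reduce to simple arithmetic. The sketch below encodes only the two rules quoted in the text and is a simplification that ignores other special cases in SSA's rules:

```python
def deemed_military_credit(year, quarters_served=0, military_earnings=0.0):
    """Extra earnings SSA deems for a year of military service,
    per the two rules described in the text (a simplification)."""
    if 1957 <= year <= 1977:
        # $300 for each quarter with military service
        return 300 * quarters_served
    if 1978 <= year <= 2001:
        # $100 for every full $300 actually earned, capped at $1,200
        return min(100 * int(military_earnings // 300), 1200)
    return 0

# Four quarters of 1970 service are credited $1,200; a servicemember
# earning $20,000 in 1990 is also credited the $1,200 annual maximum.
```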
Given that the SSA, VA, and DOD disability programs have different purposes and eligibility criteria, not all wounded warriors found to be disabled by VA or DOD—even those found 100 percent disabled—will necessarily qualify for SSA disability benefits. SSA’s DI program, for example, is an insurance program designed to replace workers’ income if they become unable to work due to disability. If an individual can perform any kind of work in the U.S. economy, then this person is not eligible for DI benefits. By contrast, VA’s disability compensation program is designed to compensate veterans for the average loss of earnings resulting from a particular disability—regardless of the actual effect on an individual veteran’s capacity to work. A veteran could be rated 100 percent disabled under the VASRD by VA and receive VA disability benefits, but also be working and earning more than SSA’s substantial gainful activity level, and therefore be ineligible for SSA benefits. While SSA programs have different eligibility criteria than VA or DOD programs, and SSA makes its own disability decisions separate from those of VA or DOD, SSA regulations state that SSA eligibility determinations are to take disability decisions by other agencies into account. SSA regulations specify that the agency must evaluate all the evidence in a case record that may have a bearing on its disability decision, including decisions by other governmental and nongovernmental agencies. SSA has issued guidance to its adjudicative staff—including DDS claims examiners and SSA ALJs and AAJs—that when making a disability decision they must consider all the evidence in the case file, including a finding of disability by another agency such as VA. Adjudicators should explain the consideration given to a finding of disability by another agency in the case record for initial and reconsideration claims or in the notice of decision for hearing claims. 
However, the SSA regulations also note that a finding of disability by another government agency, related to a different benefit program, is not binding on SSA's own disability determination. Depending on their circumstances, wounded warriors may receive cash benefits simultaneously from SSA and from DOD or VA. Receipt of DOD or VA benefits sometimes affects the amount of SSA benefits that a wounded warrior may receive, however. Specifically, SSI benefits are reduced if the beneficiary also receives benefits from certain other government programs, such as DOD or VA benefits. On the other hand, DI benefits are not offset by VA or DOD disability benefits, so wounded warriors may receive their full DI benefits along with benefits from these other agencies. Similarly, because DOD disability retirement benefits and VA disability compensation benefits are awarded on the basis of disabilities incurred during military service rather than on the basis of income, wounded warriors may receive their full DOD or VA disability benefits along with any SSA benefits. More than 16,000 wounded warriors have applied for SSA disability benefits and their approval rate has been about 60 percent; in addition, about 4 percent of wounded warriors receiving DOD or VA disability benefits are also receiving SSA disability benefits. At least 16,000 wounded warriors have applied for SSA disability benefits since 2001, with close to 90 percent of applications submitted in 2007 and 2008. A sizable minority of wounded warriors submitted their applications more than a year after injury, often foregoing some retroactive benefits because of this delay in applying. About 60 percent of the applicants with no pending claims have been approved for benefits by SSA, with a majority of the approved claimants having a mental health disorder as their primary impairment. 
Among wounded warriors who were receiving disability benefits from DOD or VA, about 10 percent had applied for SSA disability benefits, and about 4 percent of the total cohort of wounded warriors receiving DOD or VA benefits were also receiving SSA benefits. Wounded warriors with higher disability ratings were more likely to have applied and to be receiving SSA disability benefits. Since 2001, at least 16,000 wounded warriors have applied for SSA disability benefits. This figure represents the number of applicants whom SSA has identified in its systems as wounded warriors, but because of limitations with SSA’s data sources the total number of applicants is likely higher. In its databases, SSA identifies wounded warriors as those applicants who were disabled while in active military service on or after October 1, 2001, regardless of where the disability occurred. SSA uses two sources of information to identify wounded warrior applicants. First, DOD provides SSA with a list every week of military personnel who have been wounded, injured, or become ill while in support of the OEF/OIF campaigns. According to DOD, though, while this list is the only data they have available, it is not necessarily a comprehensive list of all military personnel wounded, ill, or injured in OEF/OIF. In addition, applicants may self-identify as wounded warriors when they submit their applications. However, SSA only started collecting this information from applicants in 2005, so wounded warriors who applied before 2005 and are not in DOD’s data would have been missed. Meanwhile, we found that of the more than 16,000 wounded warriors appearing in SSA’s data as having applied for SSA benefits, virtually all had submitted claims for DI benefits, with many also submitting a claim for SSI. (Applicants may file claims for DI benefits on the basis of their past work history or SSI benefits on the basis of low income and assets.) 
Among these 16,000 applicants, about 53 percent had applied for both DI and SSI benefits concurrently, another 47 percent applied for DI benefits only, and less than 1 percent applied for SSI benefits only. We also found that the vast majority of the applicants submitted their claims within the last two calendar years. Although wounded warriors who have served since 2001 have applied for SSA disability benefits in every year dating back to 2002, the number of claims submitted has increased each year—almost 90 percent of the applicants submitted their claims in 2007 or 2008 (see fig. 1). While most of the wounded warriors for whom SSA has determined a disability onset date submitted their applications to SSA within a year of sustaining their wound, injury, or illness, a substantial number submitted applications a year or more after injury. Among wounded warriors whose claims were approved at the initial stage of adjudication and for whom SSA has established a disability onset date, more than half submitted their applications within a year of this onset date. But about 40 percent submitted claims at least 12 months after disability onset, and almost a quarter submitted claims at least 18 months after onset. (See fig. 2.) DI claimants who file an application more than 17 months after disability onset will forego some retroactive benefits, because their 12-month retroactive benefit period will not extend all the way back to the end of the 5-month waiting period (between the disability onset date and the date a claimant can start receiving DI benefits). SSA has for some time been considering a legislative proposal to change or waive the statutory retroactive benefit period specifically for wounded warriors to help those who may not apply soon enough after disability onset. 
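The retroactivity arithmetic described above can be made concrete with a short sketch. The 5-month waiting period and 12-month maximum retroactive period come from the discussion above; the function name and the month-based simplification are ours, and this is an illustration rather than SSA's actual payment logic.

```python
def months_of_di_benefits_foregone(months_since_onset: int) -> int:
    """Sketch of the DI retroactivity arithmetic described above.

    DI entitlement can begin 5 full months after disability onset, and
    retroactive payments reach back at most 12 months before the application
    date. Applying more than 5 + 12 = 17 months after onset therefore
    forfeits the earlier entitled months. Simplified illustration only.
    """
    WAITING_PERIOD = 5    # months between onset and first possible benefit
    MAX_RETROACTIVE = 12  # months of retroactive benefits from application
    entitled_before_application = max(0, months_since_onset - WAITING_PERIOD)
    payable_retroactively = min(entitled_before_application, MAX_RETROACTIVE)
    return entitled_before_application - payable_retroactively

print(months_of_di_benefits_foregone(12))  # 0 -- within the 17-month window
print(months_of_di_benefits_foregone(24))  # 7 -- months beyond the window
```

This is why the report flags applications filed more than 17 months after onset: every additional month of delay past that point is a month of entitled benefits that cannot be recovered.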
An SSA official reported that the agency’s consideration of such a proposal is based in part on anecdotal evidence from local offices about wounded warriors not applying for benefits in a timely manner. SSA has not provided us with any further details about their potential proposal. Turning to the claim outcomes for wounded warrior applicants identified in SSA’s data, we found that the approval rate for these wounded warriors was 60 percent. By comparison, among all individuals who filed a claim for DI benefits in 2007, 32 percent had a claim approved by August 2008. Specifically, about 7,600 of the wounded warrior applicants had at least one claim allowed for either DI or SSI. The vast majority of these claim allowances have come at the initial stage of adjudication. Of the approximately 7,600 claimants with allowed claims, 96 percent had claims allowed at the initial stage, and 4 percent at the hearings level. Of the remaining wounded warriors with no allowed claim, the majority had their claims denied, though many still had at least one claim pending a final decision. (See fig. 3.) Breaking out DI and SSI claims, we found that wounded warriors were much more likely to have claims allowed for DI than for SSI, and a major reason was that many wounded warriors exceeded SSI income and asset requirements. Of the almost 16,000 wounded warriors who submitted a claim for DI, about 7,500 had a DI claim allowed. Of the roughly 8,500 who submitted a claim for SSI—often concurrently with their DI claim—fewer than 900 had an SSI claim allowed. Wounded warriors’ SSI claims were much more likely to be denied for technical—nonmedical—reasons. Of those who applied for SSI, 64 percent were denied for technical reasons, particularly for having income or resources above the program’s limits. In contrast, of those who applied for DI, less than 1 percent were denied for technical reasons, such as insufficient work history. 
Finally, looking at the medical conditions of wounded warriors whose claims were allowed, we found that the majority were found eligible on the basis of mental health disorders. That is, the primary medical impairment for about 60 percent of wounded warriors allowed at the initial or reconsideration stages fell into the overall category of mental disorders (see fig. 4). By contrast, among all disabled workers who had DI claims allowed in 2008, 23 percent had a mental disorder. Among wounded warriors with mental disorders, the three most common specific conditions—accounting for more than half of all the wounded warriors with allowed claims—were anxiety disorders, mood or affective disorders, and chronic brain syndrome. According to SSA officials, wounded warriors with PTSD are included in either the anxiety disorder category or the mood or affective disorder category. Also, those with TBI may be classified as having chronic brain syndrome, or as having any of a number of neurological conditions depending on their symptoms. Aside from mental disorders, most of the remaining wounded warriors with allowed claims had a musculoskeletal condition as their primary impairment. The most common musculoskeletal conditions—accounting for about 13 percent of all wounded warriors with allowed claims—were back disorders, amputations, and lower limb fractures. Among almost 251,000 wounded warriors receiving DOD or VA disability benefits, we found that about 26,000 (10 percent) of this cohort had applied for SSA disability benefits and about 9,000 (4 percent) were receiving SSA benefits as of July 2008. It is likely that some applicants still had claims pending as of July 2008—the cut-off date for our analysis—and could have eventually been approved for benefits. It is also possible that some of the applicants had been approved for SSA benefits in the past, but were no longer receiving them in July 2008. 
Our analysis focused on wounded warriors receiving DOD disability retirement benefits, VA disability compensation benefits, or both, in July 2008 who had served on active duty at any point since September 2001. The vast majority (92 percent) of the cohort of 251,000 wounded warriors received benefits only from VA. Another 2 percent received benefits only from DOD. The remaining wounded warriors received benefits from more than one agency. (See fig. 5.) We also found that among wounded warriors receiving DOD or VA disability benefits, those with higher disability ratings were more likely to have applied for and been receiving SSA disability benefits. Among the approximately 7,400 wounded warriors rated as 100 percent disabled by DOD or VA, 63 percent had applied for SSA benefits and 41 percent—of the entire group of 7,400—were receiving them in July 2008. By comparison, among those rated as less than 50 percent disabled, fewer than 10 percent had applied for and fewer than 2 percent were receiving SSA disability benefits. (See fig. 6.) Importantly, not all wounded warriors rated as 100 percent disabled by DOD or VA will necessarily qualify for SSA disability benefits, given differences in the purposes and eligibility criteria between the agencies' programs. Similarly, some wounded warriors rated as less severely disabled by DOD or VA could qualify for SSA benefits. DOD disability retirement and VA disability compensation benefits—unlike SSA benefits—are awarded on the basis of service-connected disabilities. According to SSA officials, some veterans found by DOD or VA to have only moderate service-connected disabilities could have experienced an additional injury or illness after separating from the military, which later qualified them for SSA benefits. 
Outreach to help wounded warriors learn about and apply for disability benefits under SSA has increased in recent years as a result of both agency actions at the national level and local initiative taken at major medical facilities for the wounded. The agencies involved—SSA, DOD, and VA—have taken individual and joint steps to specifically target this population, tailor information to military needs, and, in effect, reinforce the message that wounded warriors may qualify for SSA benefits. At the local level, we found many instances of DOD staff and local SSA representatives working in concert at DOD treatment facilities to convey this information to the wounded. However, collaboration between VA and SSA was less prevalent at VA medical centers. A number of challenges can affect agency efforts to reach out to and assist this population, particularly those diagnosed with PTSD and TBI. SSA, DOD, and VA have taken individual and collaborative steps at the national level to establish and maintain outreach to wounded warriors who may qualify for SSA disability benefits. Since 2007, SSA has increased its outreach efforts to wounded warriors by initiating contact with DOD and VA medical facilities and tailoring its information about disability benefits to meet their needs. SSA officials reported that they had begun to specifically target the wounded warrior population partly as a result of their participation in a 1-year pilot project with the U.S. Navy. Conducted from 2006 to 2007, the pilot was designed to help reduce the time it was taking for information to be disseminated and decisions to be made on SSA benefits for servicemembers. Following this project, SSA headquarters directed its field offices to increase their efforts to work with DOD and VA medical facilities to conduct outreach. 
SSA officials reported that their field offices have subsequently been in contact with wounded warriors or staff at almost 30 DOD facilities and more than 50 VA facilities across the country. In addition, the agency has developed publications and established a Web site with SSA disability benefits information that has been customized for wounded military personnel and veterans. These resources explain, for example, how military pay affects eligibility for SSA's programs and how to apply. (See fig. 7.) SSA officials told us that although SSA does not have any formal practices in place to determine whether its outreach has been effective, an internal workgroup—consisting of staff from headquarters, regional, district, and DDS offices—meets quarterly to discuss and share successful local efforts in serving wounded warriors, as well as best practices for reaching them. DOD has taken a number of steps to help wounded warriors learn about and apply for SSA disability benefits, particularly since 2007. The department has incorporated information about SSA benefits into the many case management programs run by the service branches—programs already providing a range of advice and assistance to wounded warriors and their families. We found that these programs—the Army Wounded Warrior Program, the Warrior Transition Brigade, the Marine Wounded Warrior Regiment, and the Navy Safe Harbor Program—covered SSA benefits in their case management review materials or listed SSA as a resource on their Web sites. (See fig. 8 for an example of one of the assessment tools used in the case management programs to help ensure SSA disability benefits are discussed.) Additionally, DOD case managers for these programs received briefings from SSA as part of their training regimen, according to DOD officials. 
A DOD official told us the agency has also included SSA disability programs in the portfolio of information to be disseminated through its Recovery Coordination Program, which was included in the comprehensive recovery plan for wounded warriors developed in response to the National Defense Authorization Act for Fiscal Year 2008. This recovery plan does not formally spell out how DOD case managers should conduct outreach on SSA benefits or work with SSA. DOD has also added SSA benefit information to a variety of resources available to wounded warriors—for example, DOD’s Compensation and Benefits Handbook dedicates a section to the subject. DOD also has Web sites that contain some information about SSA and it has added links to SSA’s own wounded warrior Web site. Meanwhile, information about SSA disability benefits has also been included in joint briefings that DOD, VA, and other agencies conduct at all DOD bases and military treatment facilities for servicemembers who are being discharged from the military. VA has also taken some steps to inform and help wounded warriors apply for SSA disability benefits. In 2007, the agency, working with DOD, established the Federal Recovery Coordination Program, which is a counterpart to DOD’s Recovery Coordination Program, to provide a case management framework for ensuring that the most severely wounded warriors are engaged by SSA’s benefit programs, as well as by other support services. Additionally, VA case managers for the Federal Recovery Coordination Program have received information from SSA specialists regarding SSA disability benefits. Meanwhile, staff from other VA programs in existence prior to 2007, such as the Office of Seamless Transition, told us they sometimes include the topic of SSA benefits when case managers interview veterans about their clinical and nonclinical needs. 
VA has also included SSA disability information in other resources that are available to wounded warriors, such as VA’s Federal Benefits for Veterans and Dependents and VA’s Web site. This Web site also has links to SSA’s Web site. In addition, the agency’s Veterans Benefits Administration (VBA), which administers VA disability compensation benefits, includes on its application forms some limited information about SSA benefits and SSA contact information. VBA officials told us, however, that their disability compensation claims representatives do not have any particular guidance or training for how or whether to refer wounded warriors to SSA when they apply for VA benefits. According to a VA official, while VA began a phone outreach campaign in 2006 to OEF/OIF veterans to remind them about VA’s health and benefit services, this campaign has not included information about SSA’s programs and benefits. Turning to the sites we contacted, we often found that DOD medical treatment facilities were working with local SSA offices to better inform wounded warriors about SSA disability benefits and help them apply. This level of interaction was less common at VA sites. At all of the DOD military treatment facilities we visited, including the four major facilities that treat the most severely wounded warriors, SSA representatives were coming onto the base to conduct briefings and field questions about SSA disability benefits from wounded warriors and staff. At many of these locations, SSA representatives had also taken benefit applications. For example, at Walter Reed Army Medical Center, SSA representatives were holding office hours twice a week to take applications, and provide regular briefings to wounded warriors and case managers. At Brooke Army Medical Center, SSA claims representatives were conducting presentations at weekly briefings for recovering servicemembers and also taking benefit applications. 
Moreover, according to our survey of wounded warriors who had applied for SSA disability benefits, they faced few obstacles to finding SSA representatives if and when they did seek help. Based on the survey, we estimate that almost three-quarters—73 percent—of wounded warrior applicants found it easy to make initial contact with an SSA representative. Most of the local SSA staff we spoke with told us that in recent years they have reached out to local DOD personnel to obtain access to DOD facilities where potentially eligible wounded warriors can be found. Some of the local SSA offices we spoke with noted that it had taken some time to establish working relationships at some of the DOD bases they contacted. For example, they said certain DOD personnel at Brooke Army Medical Center had not been routinely cooperative until the base commander issued an order. In addition, several local SSA officials told us that staff turnover at DOD facilities creates a challenge. For example, officials from the Colorado Springs SSA office told us that because of frequent staff turnover at Evans U.S. Army Hospital (Fort Carson), new DOD staff are often unaware of SSA benefits and as a result do not refer servicemembers to SSA who might be eligible for SSA disability benefits. In the course of our site visits, we also learned that in serving DOD facilities and the wounded warrior population, most of the SSA offices we interviewed had representatives who had developed some expertise in working with military personnel. Some local SSA officials told us these representatives know what questions to ask and what documents to collect to support wounded warriors' claims. Wounded warriors also reported that they found SSA officials helpful. Based on our survey, we estimate that about 63 percent found the information they received from SSA representatives to be helpful. 
In one location where SSA did not employ specialized claims representatives, DOD case managers had become more familiar with SSA disability benefits and had developed a customized walk-through presentation to help wounded warriors better navigate SSA's application process. See figure 9 for an excerpt from this presentation. On the other hand, at the medical centers operated by VA through its Veterans Health Administration, we found less contact underway between local SSA and VA staff. According to VA officials at the medical centers we contacted, none of them had an SSA representative onsite to answer questions or take applications, though some of the VA officials we spoke with thought it would be a good idea. Case managers at three of the five sites we contacted said they refer veterans to a specific point of contact at a local SSA office who may specialize in wounded warrior claims. Officials at the other two sites did not always have a point of contact at all of the local SSA offices in the area. Case managers at one of these sites said that when they refer veterans to SSA without a specific point of contact, the veterans have challenges navigating SSA's automated phone system or reaching a live representative. In addition, case managers at two sites told us that they had received some training from SSA on SSA's disability benefits. However, case managers at the other sites told us they had not received training from SSA on SSA disability benefits and believed having such training would be beneficial. Table 1 summarizes the outreach activities at DOD and VA medical facilities we contacted. As for the regional VBA offices, officials at the offices we contacted said their claims staff sometimes refer wounded warriors to SSA when they apply for VA benefits. For example, officials at one regional office we contacted said veterans applying for certain types of benefits are also referred to SSA. 
However, according to officials at two of the sites we contacted, they generally do not provide any SSA literature or work with SSA personnel in local outreach efforts such as VA's Stand Down event and its Welcome Home event. When wounded warriors ask about SSA at these outreach events, regional office officials said they will generally respond by providing SSA's phone number. Agency efforts to help wounded warriors learn about and apply for SSA disability benefits early in the recovery process can be affected by challenges, including some servicemembers not being ready to apply and misinformation that may be passed among wounded warriors. According to local officials from all three agencies we spoke with, wounded warriors may not be ready or able to learn about SSA disability benefits early in their recovery. Some may not have yet accepted the severity of their injuries, while others have physical or psychological conditions that have not stabilized. Case managers told us the timing of when they inform wounded warriors about SSA disability benefits is therefore determined on a case-by-case basis. Based on our survey, we estimate that the timing of when wounded warriors first learned about SSA disability benefits varied widely, with 44 percent reporting they first heard about the benefit 12 months or longer after injury (see fig. 10). Case managers and some recovering servicemembers also suggested that wounded warriors may be hesitant to apply for SSA benefits because of misinformation about SSA disability benefits given by their colleagues. Wounded warriors were just as likely to first hear about SSA disability benefits from their peers as from their DOD or VA case managers, according to our survey estimates (see fig. 11). Moreover, both wounded warriors and agency officials told us that the information about SSA benefits that gets passed among military personnel is not always accurate. 
For example, some recovering servicemembers said their peers had told them they would not qualify for SSA disability benefits, and others said they had heard that in order to avoid paying retroactive benefits, SSA initially denies benefits to servicemembers who have been hospitalized for long periods of time. Another reported challenge with regard to wounded warriors applying for SSA disability benefits is the benefit application itself, which reflects civilian more than military careers. At some of the sites we contacted, case managers and wounded warriors themselves said it was difficult for wounded warriors to complete the employment section of the application because it does not take into account, for example, the difference between a civilian 8-hour day and military duty, which requires a 24-hour obligation. Of all the aspects of the application process we asked about in our survey, wounded warriors most often reported understanding the SSA application to be difficult (see fig. 12). Finally, outreach efforts face an elusive target at times, given that TBI and PTSD are conditions that are not necessarily diagnosed early and that can develop months or years later. According to some agency officials, servicemembers who have yet to be diagnosed with TBI and PTSD are not likely to receive information on all of the services, programs, or benefits to which they could be entitled. Several agency officials reported they have challenges in diagnosing TBI and PTSD, because these conditions are often not as clearly apparent as some other physical disabilities. Wounded warriors also may not realize they have these conditions and, as such, may not have sought out medical attention. A RAND study reported that 57 percent of wounded warriors who probably have undiagnosed TBI had not been evaluated by a physician for brain injury. 
SSA has expedited the processing of wounded warrior benefit claims, with assistance from VA and DOD; however, weaknesses in the transfer of DOD medical records to SSA can prolong decision-making for some cases. SSA has established a nationwide policy requiring its district offices, the state DDS offices, hearing offices, and the Appeals Council to give priority to wounded warrior claims. For its part, DOD has worked with SSA by sharing wounded warriors' key identification information that SSA can use to target their claims for expedited processing. VA has also worked with SSA to accelerate information sharing to DDS offices. However, wounded warrior claim decisions can still be prolonged in some instances because of challenges in receiving DOD medical records. DOD medical records are transferred to SSA as paper documents, a process which can take weeks or months, according to DDS officials. Although DOD stores some medical records electronically, DOD and SSA have not developed the capacity for DOD to transfer its records electronically to SSA. Since 2005, SSA has identified wounded warrior applicants for the purpose of expediting their claims. SSA attempts to identify applicants who were wounded, injured, or taken ill during military service since October 1, 2001, regardless of whether the disabling event occurred domestically or overseas. In order to give priority to these cases, SSA identifies wounded warriors in two ways. First, starting in 2005, applicants can self-identify as a wounded warrior when submitting the disability application. SSA added questions on its disability claims application to enable servicemembers or veterans to identify themselves as having served in the military and to indicate their dates of service. 
Second, in a 2008 memorandum of understanding, DOD agreed to send weekly electronic updates to SSA with the key identification information of servicemembers who were wounded, injured, or became ill in the OEF/OIF theaters, to further assure that military applicants are identified. Any applicant is automatically identified as a wounded warrior if their name appears on the DOD list. The agency requires that its district offices, DDS offices, and hearing offices give priority to wounded warrior claims. SSA uses a process originally developed to expedite terminally ill (critical) cases for all wounded warrior disability claims. For such cases, SSA staff who receive a disability claim request via SSA's toll-free phone number are required to schedule an applicant interview at an SSA field office within 3 working days, if possible, to take the full application. Then the SSA field office refers the application to a state DDS office for review, and follows up within 7 days to ensure receipt by the DDS system. DDS staff, in turn, are required to prioritize wounded warrior cases by considering them as early as possible. Moreover, DDS staff are instructed to comprehensively consider these cases by exploring all potential physical and mental impairments, including those that may be suggested by any of the medical evidence, such as signs of PTSD. Lastly, as with critical cases, SSA staff at the hearing stage are required to schedule wounded warrior cases in the first available open hearing slots. Additionally, SSA and DDS offices track and monitor wounded warrior cases to ensure that they receive expedited handling. SSA's regional offices track these claims through the different stages of the disability determination process, including appeals. One SSA regional office, for example, generates reports listing all wounded warrior cases, including the level of adjudication for each claim and its status. 
Also, officials in some DDS offices told us that they generate reports on processing times specifically for wounded warrior cases, such as reports that list the pending wounded warrior cases, which may include other details of the claim. Officials in one DDS office stated that they monitor all wounded warrior cases at specific intervals after they have been received from SSA—30 days, 45 days, and 90 days—to ensure cases are not unnecessarily delayed. To support agency processing of wounded warrior cases, SSA officials reported that they have created specific training and briefing materials for SSA and DDS staff, including information about TBI and PTSD. According to an SSA official, the agency provided on-site training at local offices and through videoconferences, and instructed staff to be alert to reported symptoms that may be related to TBI and PTSD. The SSA official told us that within DDS offices, all disability examiners received some training on TBI and PTSD. In addition to training, SSA reported that it provides training materials and guidance for staff to use when handling wounded warrior cases. For example, SSA issued guidance in August 2007 that cites indicators of possible TBI conditions, such as exposure to an improvised explosive device blast, a motor vehicle accident, or a fall. This policy also specifies clinical markers, such as the loss of consciousness for more than 6 hours following traumatic brain injury. At several DDS offices, DDS staff reported that they received training material from SSA on TBI and PTSD. Finally, SSA issued guidance and reminders to staff on the proper treatment of wages for active duty soldiers. In spite of their wounded warrior training, all of the nine DDS offices we contacted reported difficulties, for several reasons, in making determinations on TBI and PTSD cases. 
First, though officials at almost all the DDS offices we spoke with reported that DOD and VA records generally provide enough information regarding physical disabilities, this is not necessarily the case regarding psychological problems. The DDS officials told us that, consequently, they generally need to order more consultative exams to assess potential cases of TBI or PTSD. Second, some DDS officials also reported that servicemembers sometimes ask them not to include a diagnosis of TBI or PTSD in their records. According to officials in two of these DDS offices, servicemembers have either been reluctant to discuss symptoms of PTSD with DDS staff or asked DDS staff not to pursue a mental health disability claim. Finally, because PTSD and, in some cases, TBI may take several months to manifest, and because these conditions may improve over time, officials in some DDS offices said it can be difficult to determine whether these conditions meet the 12-month duration requirement. To address this challenge, SSA has a specific TBI policy: if a TBI disability determination is not possible within 3 months of injury, SSA and DDS staff are required to defer adjudication until at least 3 months after injury, and may defer again until at least 6 months after injury, in order to observe and evaluate the claimant's condition. Given these challenges, eight of the nine DDS offices we visited had assigned specific, experienced examiners to process wounded warrior claims. An SSA official stated that those DDS examiners who focus on wounded warrior cases—ranging between three and five specialists at each of several DDS offices we visited—received more extensive training. These officials reported employing special techniques for working with wounded warriors. 
For example, some officials told us they can draw conclusions from pieces of evidence in DOD and VA medical records that indicate possible PTSD or TBI, and then collect additional information to make a determination regarding the presence of those conditions, even if such conditions have not been officially diagnosed. Also, an official in one DDS office reported that she looks at the VA disability evaluation decision, when available, for useful information about an applicant's medical condition, though officials in that office and several others said VA's actual disability decisions do not affect their own determinations. Further, officials told us they work aggressively to obtain medical records from VA, DOD, or private medical facilities. Officials in one DDS office said they conduct a special review of any wounded warrior claim that is going to be denied to ensure that all the evidence was considered appropriately. Lastly, because wounded warrior claimants may take more time to respond to information requests, one DDS official typically gives them more time, and will do more to track down the information and records they need. To further expedite claims processing for wounded warriors, SSA has worked with VA to improve the quality of and speed with which VA forwards veterans' medical records to the DDS offices. According to an SSA official, due to DDS concerns about the timeliness and condition of VA medical records, SSA and VA began to work together in 1999 to improve data sharing. A period of pilot testing resulted in the creation of the Standard Summary, which is an electronic extraction of a standard set of pertinent medical records from a patient's overall records that is transmitted electronically to DDS offices upon request. In 2006, VA issued a directive to formalize use of the Standard Summary by VA medical facilities. Generally, the Standard Summary includes 2 years of patient health information and 4 years of major exams and patient discharge summaries.
Specifically, the Standard Summary includes information such as the onset dates of all known health problems for a patient, the future clinic appointments scheduled for a patient with VA providers, and all outpatient medical visits. (See app. IV for a list of elements included in the Standard Summary.) In order to establish usage of the Standard Summary, the appropriate local VA staff are encouraged to work with local DDS staff to enter into an agreement and ensure that their computer systems are compatible. Individual DDS offices may also work with VA hospitals to tailor the data extracted by the Standard Summary for their needs. An SSA official told us that, as of June 2009, roughly 75 percent of VA hospitals nationwide were using the Standard Summary to send information to DDS offices. Most DDS officials reported that wounded warrior claim decisions are prolonged in some instances because of difficulties in obtaining DOD medical records. Generally, DOD military treatment facilities (MTF) provide paper-based records to DDS offices. Many of the DDS offices we visited reported lengthy wait times—ranging from a few weeks to a few months—to receive paper records from MTFs. Corroborating this point, DOD case managers at several MTFs also reported delays in obtaining wounded warrior medical records. Several DDS officials we spoke with said such delays can slow down the disability determination process. DDS staff in some locations reported that local MTFs had made efforts to reduce these delays. For example, an official in the Texas DDS office noted that the Carl R. Darnall Army Medical Center and Brooke Army Medical Center (the local MTFs) agreed to allow a DDS staffer onsite full time in the records office to respond to DDS medical record requests.
At several other DDS offices, staff reported that local MTFs responded to their requests for faster records turnaround by designating a point of contact to handle all DDS requests, which has improved response times. Decision making can also be affected by the cumbersome nature of DOD records. Many DDS staff said DOD records are lengthy, and can number hundreds, if not thousands, of pages. They also noted that some medical documents that are referenced in the records are missing when received by DDS offices, or can sometimes be redundant. DOD has, in recent years, begun to computerize military medical records and make them transferable to VA, in part due to a recent statutory requirement. According to DOD officials, DOD currently stores certain types of information—such as patient consultations and evaluation notes—in electronic medical records at 21 MTFs, representing more than half of DOD's inpatients. The agency is working toward storing more records electronically. Furthermore, after a 10-year joint effort, DOD is transferring some medical records electronically to VA. For example, since January 2009, DOD and VA have been electronically exchanging drug allergy information on more than 27,000 shared patients. Ultimately, better exchange of electronic medical records could allow servicemembers to transition seamlessly between the two departments with a single, comprehensive medical record. In part, DOD and VA's electronic records exchange is spurred by recent legislation. Congress has mandated that DOD and VA jointly develop and implement, by September 30, 2009, electronic health record systems or capabilities that are fully interoperable and comply with applicable federal interoperability standards. By comparison, DOD and SSA's electronic exchange efforts are far less developed. An SSA official told us the agency would like to work with DOD to develop some mechanism for obtaining medical records electronically.
For their part, DOD officials also expressed interest in working with SSA to resolve the issue of electronic records transfer. Nevertheless, the agencies have no formal plans to do so. SSA and DOD have told us that one approach for sharing records electronically could be through the Nationwide Health Information Network (NHIN), an emerging federal technology initiative. Although SSA, DOD, and VA are all participating in this initiative, the NHIN is still under development, in a pilot testing phase. Wounded warriors with severe disabilities, who have made significant sacrifices in the line of duty, may face challenges in supporting themselves financially. Disability benefits available through SSA can be a critical part of the financial assistance they receive. Congress has required that DOD and VA help wounded warriors gain access to the benefits and services they need, including SSA disability benefits, and consult with other relevant federal agencies in doing so. Outreach to wounded warriors about SSA benefits has, in fact, been stepped up since 2007, particularly by SSA and DOD in several key sites where there has been a well-coordinated message reinforced by each agency. These efforts may well have contributed to the substantial increase in wounded warrior applications for SSA disability benefits since 2007. Still, significant challenges remain in reaching those warriors who are currently being discharged, as evidenced by the numbers who have reported to us difficulties with the SSA benefit application, or the fact that nearly one in four of those with approved claims has foregone retroactive benefits because they did not apply soon enough after their injury. The sooner SSA completes its analysis of whether a legislative fix to the retroactive benefits formula is warranted, the sooner this information can be brought to the Congress for consideration.
While VA has also taken steps to help wounded warriors who have become veterans, collaboration between VA and SSA at VA medical centers has not been consistent. Furthermore, VA does not provide its claims processing staff with any guidance on referring veterans to SSA when they apply for veterans’ benefits. Yet the risk of not reaching wounded warriors is greater once they are discharged and re-enter civilian life. Absent a commensurate level of outreach on the part of VA and SSA, it seems likely that some veterans who are, in fact, severely disabled, will not receive all the financial support to which they are entitled. Of particular concern are veterans discharged prior to 2007—when the focus on outreach increased—and those with impairments such as PTSD and TBI, which may emerge after discharge from the military. While not all veterans rated as 100 percent disabled by DOD or VA will necessarily be eligible for SSA benefits, the fact that more than one-third of these severely disabled veterans have not even applied for SSA disability benefits suggests there is more to be done in this area. Finally, SSA policies for accelerating the processing of wounded warrior applications appear to have had a positive effect. Yet some wounded warriors will still experience a long wait for an eligibility determination if their DOD medical records are slow to be transferred. The inability of DOD and SSA to share records electronically undermines SSA’s ability to fully expedite the process. Yet SSA and DOD lack a strategy for integrating their systems to enable such transfers. Fortunately, electronic records exchange is an issue that has attention across a number of federal agencies, and current technology initiatives may present models for DOD and SSA to consider using. To improve wounded warriors’ access to SSA disability benefits, we are making the following recommendations: 1. 
The Commissioner of Social Security should move ahead with his consideration of the need for a legislative proposal to amend the DI program’s retroactive benefit period for wounded warriors, given the unique challenges faced by this population in applying for benefits in a timely manner. 2. The Secretary of Veterans Affairs and the Commissioner of Social Security should work together to improve outreach to veterans on SSA disability benefits. In doing so, the VA and SSA should, in particular, seek to reach veterans who either were discharged between 2001 and 2007; have disabilities that manifest after service such as PTSD; or were assigned a 100 percent disability rating. Specific actions that VA could take include issuing guidance to VA medical centers and regional offices for referring veterans to SSA and including information about SSA disability benefits in VA’s phone outreach campaign to OEF/OIF veterans. In addition, SSA could work with VA to ensure stronger coordination between local SSA offices and VA medical facilities, for example by making sure that VA medical centers have a point of contact at a local SSA office or receive training from SSA staff on SSA benefits. 3. The Secretary of Defense and the Commissioner of Social Security should work together to better meet SSA’s need for obtaining military medical records in a timely manner for processing DI and SSI applications from wounded warriors. This effort should consider how to ensure records that are stored electronically are also electronically transferable. We provided a draft of this report to the Secretary of Defense, the Secretary of Veterans Affairs, and the Commissioner of Social Security for review and comment. In their comments, SSA and VA agreed with our findings and recommendations and noted actions they plan to take to address our recommendations. 
For example, SSA indicated that it plans to move ahead with a legislative proposal to amend the disability benefit retroactive period for wounded warriors, and VA indicated that VBA officials plan to meet with SSA officials to discuss how the two agencies can ensure that veterans receive information about SSA disability benefits. DOD provided no formal comments. All three agencies also provided technical comments, which were incorporated as appropriate. SSA's and VA's comments are reproduced in appendices V and VI. As agreed with your office, unless you publicly announce the content of this report early, we plan no further distribution of it until 30 days from the report date. We are sending copies of this report to the Secretary of Defense, the Secretary of Veterans Affairs, the Commissioner of Social Security, relevant congressional committees, and others who are interested. We will also provide copies to others on request. The report is also available at no charge on GAO's Web site at http://www.gao.gov/. Please contact me at (202) 512-7215 or bertonid@gao.gov if you or your staff have any questions about this report. Contact points for the Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix VII. We were asked to examine (1) the number of wounded warriors who have applied and been approved for Social Security Administration (SSA) disability benefits, and the extent to which wounded warriors who receive Department of Defense (DOD) or Department of Veterans Affairs (VA) disability benefits also receive SSA benefits; (2) the extent to which SSA, DOD, and VA have worked to inform wounded warriors about and help them apply for SSA disability benefits, and the challenges that confront this outreach effort; and (3) whether the agencies have taken any steps to facilitate the processing of wounded warriors' SSA disability benefit claims.
In addressing these questions, we focused our research on wounded warriors, that is, servicemembers who were wounded, injured, or became ill while on active duty since 2001. More specifically, to answer the questions, we reviewed policy and other documents from SSA, DOD, and VA and interviewed officials responsible for outreach or case management policies at each agency. We also interviewed officials from several organizations that represent veterans, including Disabled American Veterans, Paralyzed Veterans of America, Vietnam Veterans of America, and the United Spinal Association. To learn about agencies' efforts at the local level, we interviewed staff—and sometimes recovering servicemembers—at a number of DOD and VA medical facilities, SSA field offices, Disability Determination Services (DDS) offices, and VA regional offices. We obtained and analyzed administrative data from SSA, DOD, and VA on wounded warriors' utilization of financial benefits from all three agencies. Finally, we conducted a mail survey of wounded warriors who had applied for SSA disability benefits during fiscal year 2008. We conducted this performance audit from March 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To learn about local efforts by SSA, DOD, and VA to conduct outreach to wounded warriors and facilitate claims processing, we selected 12 sites where wounded warriors receive medical care—7 DOD medical treatment facilities (MTF) and 5 VA medical centers (see fig. 14). We selected these locations based on several factors.
First, we selected the four military treatment facilities that serve most of the severely wounded servicemembers: Walter Reed Army Medical Center, National Naval Medical Center (Bethesda), Naval Medical Center San Diego, and Brooke Army Medical Center. We also selected two of the four VA polytrauma centers that serve severely wounded servicemembers and veterans: VA Palo Alto Health Care System and Hunter Holmes McGuire VA Medical Center. We then selected additional sites (1) that are in regions of the country where varying numbers of wounded warriors had SSA disability benefit claims pending between July 2007 and March 2008, (2) that have varying numbers of recovering servicemembers in affiliated Army warrior transition brigades, and (3) that according to SSA have varying levels of collaboration with local SSA offices. At the selected locations, we conducted on-site or telephone interviews with a range of local staff from different agencies. We spoke with DOD and VA case managers, hospital management staff, VA liaisons, and medical records personnel about their practices for informing wounded warriors about SSA benefits, helping them apply, or both. At almost every site we also contacted the SSA district offices and DDS offices that serve the medical facility. At the SSA and DDS offices, we spoke with office managers and other staff, including public affairs staff, claims examiners, and claims representatives about their practices for informing wounded warriors about SSA benefits, helping them apply, and processing wounded warrior claims. At five of the MTFs we also held discussion groups with recovering servicemembers to learn about their experiences in hearing about and applying for SSA disability benefits. We interviewed a total of 45 servicemembers and 1 veteran at these five sites. 
At each of these sites, we asked MTF personnel to identify servicemembers who were wounded, ill, or injured since October 2001 and had applied, or were in the process of applying, for SSA benefits. See table 2 for the list of offices we contacted. We also contacted three Veterans Benefits Administration (VBA) regional offices that administer VA disability compensation benefits—Atlanta, Georgia; Waco, Texas; and San Diego, California. We selected these locations based primarily on their proximity to DOD and VA sites we contacted. At these locations we spoke with VBA public liaisons or regional office management staff about their practices for informing discharged veterans about SSA benefits and helping them apply. To examine wounded warriors’ utilization of SSA disability benefits, we obtained and analyzed administrative data from several SSA databases. We analyzed data from the electronic folder’s Structured Data Repository (SDR) on wounded warriors’ claims at the initial, reconsideration, and hearing stages of adjudication, including claimant demographic information, date of disability onset, date of application, primary diagnosis, and the decision on the claim. We analyzed data from the Master Beneficiary Record (MBR) on Disability Insurance (DI) claims that were denied for technical—nonmedical—reasons, including the application date and the reason for denial. We analyzed data from the Supplemental Security Record (SSR) on Supplemental Security Income (SSI) claims that were denied for technical reasons, again including application date and reason for denial. Finally, we used data from SSA’s 831 file to analyze the elapsed time from application date to DDS decision date, both for wounded warriors and for SSA’s overall DI and SSI caseloads. SSA provided us with data on all applicants who have been flagged in its systems as wounded warriors. 
Applicants have been flagged as wounded warriors when they self-identify (since 2005) or when they appear on a list of wounded warriors provided weekly by DOD (since 2008). In addition, SSA identified wounded warriors who appear on the DOD list—which is cumulative and includes those wounded in Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF)—but who applied prior to 2008, when SSA started using this list to flag applicants. To assess the reliability of these SSA data, we reviewed agency documents, interviewed agency officials, and performed electronic testing of certain data. We found several limitations with the data, including the lack of complete data on claims at the Appeals Council and federal court stages of adjudication, and unreliable application date data for claims beyond the initial stage. However, we found the data we used to be sufficiently reliable for our reporting objectives. In addition, to determine the extent to which wounded warriors receiving DOD or VA disability benefits also receive SSA disability benefits, we obtained data from DOD and VA and asked SSA to match these data against its databases. DOD provided us with monthly data from the Retired Pay File for the period January 2000 to September 2008, on individuals who were first on the Temporary Disability Retired List (TDRL) or Permanent Disability Retired List (PDRL) in January 2000 or later. These data included the individuals' social security numbers, DOD disability rating, and monthly benefit amount received. We created a customized file of DOD data by extracting the data only for those individuals who were on the TDRL or PDRL in July 2008 and first appeared on the TDRL or PDRL in November 2001 or later, i.e., those who were on active duty in October 2001 or later. (The Retired Pay File does not include an indicator of whether an individual is disabled as a result of service in the OEF/OIF campaigns.)
VA provided us with data from its VETSNET system on veterans receiving VA disability compensation benefits as of July 2008, who were identified in VETSNET as being OEF/OIF veterans. OEF/OIF veterans are identified by VA in two ways: (1) through a regular match of VA data with a Defense Manpower Data Center (DMDC) file containing the social security numbers of OEF/OIF veterans, and (2) by VA staff when they take disability benefit applications from veterans. The data file provided by VA included veterans’ social security number, disability ratings, and benefit amounts received in July 2008. We then asked SSA to match the social security numbers contained in the DOD and VA files against its MBR and SSR databases, to determine whether these individuals had ever applied for SSA disability benefits and if they were receiving SSA benefits in July 2008. To assess the reliability of the VA and DOD data, we interviewed agency officials, reviewed agency documents, and performed electronic testing of certain data. We found several limitations with the data, including the fact that those identified as OEF/OIF veterans in VA’s data may not include all veterans who served since 2001 and are receiving VA disability compensation benefits. However, we found the data we used to be sufficiently reliable for our reporting objectives. To learn about the experiences and opinions of wounded warriors who have applied for SSA disability benefits, we conducted a survey of a random sample of wounded warriors who had applied for SSA disability benefits during fiscal year 2008. We conducted this mail survey from December 2008 to March 2009. The survey included questions on topics such as how wounded warriors learned about SSA disability benefits, when they learned about SSA benefits, and what challenges, if any, they faced in completing the SSA application. 
To identify our sample of wounded warriors, we used SSA's SDR, MBR, and SSR databases, which contain data on SSA applicants who have been identified as wounded warriors. We drew a random sample of 350 wounded warriors out of a total universe of 10,438 wounded warriors who SSA's databases indicated had applied for disability benefits in fiscal year 2008. We considered respondents to be in scope for our survey if they had applied for disability benefits in fiscal year 2008 and had not separated from the military prior to 2001. Survey results based on probability samples are subject to sampling error. The sample is only one of a large number of samples we might have drawn from the population. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as 95 percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the respective study populations. Unless otherwise noted, the margin of error for percentage estimates from this survey is plus or minus 8 percentage points or less at the 95 percent level of confidence. The overall response rate for our survey was 53 percent. To assess the potential for bias in our estimates, we used SSA administrative data to examine differences between the respondents and nonrespondents to our survey. Our analysis showed that respondents to our survey were typically older and more educated than individuals in our sample who did not respond to the survey; these differences were significant at the 95 percent confidence level. In light of these results, we conducted three separate sets of tests to compare individuals in different age and education categories.
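As a rough, illustrative check (not a calculation from the report itself), the worst-case margin of error for a proportion under a simple random sample design can be computed with the standard formula plus a finite population correction; the 182 respondents and the universe of 10,438 applicants are the figures cited in this appendix, and 1.96 is the usual multiplier for the 95 percent confidence level.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error for an estimated proportion,
    with a finite population correction for sampling without replacement."""
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Figures cited in this appendix: 182 respondents, universe of 10,438 applicants
moe = margin_of_error(n=182, N=10438)
print(f"{moe:.3f}")  # about 0.072, i.e., roughly plus or minus 7 percentage points
```

This simple sketch ignores any nonresponse weighting or design effects GAO may have applied, so it only approximates, and is consistent with, the reported bound of plus or minus 8 percentage points or less.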
The first set of tests examined demographic and usage variables from the survey—that is, whether or not wounded warriors had received information or application assistance, or had taken certain steps in the course of completing the SSA disability application. The second set of tests examined wounded warriors' opinions about sources of information or assistance and steps taken in the course of applying for benefits. The third set of tests examined item nonresponse for each item in the survey. Only two of the initial differences we observed persisted after adjusting for multiple comparisons. First, we found that older respondents were more likely to have served in the Reserves or National Guard than their younger counterparts. However, we did not find any significant differences in demographics, usage of specific resources, opinions, or item nonresponse between respondents who served in the Reserves or National Guard and those who did not. Second, we found that college-educated individuals were more likely than those without a college degree to have found information from VA personnel helpful—of individuals with an opinion, 100 percent of college-educated individuals found VA personnel to be very or somewhat helpful, compared to 76 percent of those without a college degree. Our failure to detect systematic differences does not guarantee that our results are free from potential nonresponse bias; as with any survey, to the extent respondents differ from nonrespondents in undetected ways, our results should be interpreted with caution. However, with the exception of the two items noted above, we were unable to detect large or systematic differences in the reported experiences and opinions of individuals in different age or education categories, and we believe it is not misleading to generalize the results to the population of wounded warrior SSA disability applicants in fiscal year 2008.
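The appendix does not specify which multiple-comparison adjustment was used; a Bonferroni correction is one common, conservative choice, and a minimal sketch of the idea follows. The p-values below are hypothetical, purely for illustration.

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag which tests remain significant after a Bonferroni correction:
    each raw p-value is compared against alpha divided by the number of tests."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Hypothetical raw p-values from a batch of 10 respondent-vs-nonrespondent tests
pvals = [0.001, 0.004, 0.020, 0.030, 0.040, 0.200, 0.350, 0.600, 0.800, 0.950]
print(bonferroni_significant(pvals))  # only p-values below 0.05/10 = 0.005 survive
```

Under such an adjustment, many differences that look significant in isolation fail the stricter per-test threshold, which mirrors how only two of the initially observed differences persisted here.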
The practical difficulties of conducting any survey may also introduce other types of errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data were entered into a database or were analyzed can introduce unwanted variability into the survey results. With this survey, we took steps to minimize these nonsampling errors. For example, GAO staff with subject matter expertise designed the questionnaires in collaboration with GAO survey specialists. Then the draft questionnaires were pretested with wounded warriors to ensure that the questions were relevant and clearly stated—with the purpose of reducing nonsampling error. When the data were analyzed, a second GAO analyst independently verified all analyses. We took several steps to enhance our response rate. We resent the survey to nonrespondents after about 5 weeks, and conducted follow-up phone calls to nonrespondents encouraging them to complete the survey, with the option to complete the survey over the phone with GAO staff, after about 7 weeks. If in the course of our phone calling we learned a correct address, we sent another copy of the survey to the current address. These comprehensive nonrespondent follow-up efforts, as well as our nonresponse bias analysis, give us confidence that our overall survey results can be generalized to the population of wounded warriors applying for SSA disability benefits in fiscal year 2008.
We used a set of six important outreach practices as criteria for assessing the three agencies' outreach to wounded warriors regarding SSA disability benefits: outlining of strategic goals for the campaign; identification and researching of the target audience to understand some of its key characteristics, such as size of the population, ethnic and racial composition, linguistic groups, geographic location, and awareness or knowledge of the outreach subject; establishment of strategic partnerships with other entities that are stakeholders in the issue to get help with planning and implementing the outreach campaign; targeting of the outreach message with audience-specific, culturally sensitive content and use of mediums and languages that are the most appropriate for the audience; reinforcement of the message with repetition and different mediums, especially when targeting people who may be challenging to serve; and development and implementation of measures for evaluating the effectiveness of the outreach campaign. We identified these practices through a review of prior GAO and external reports that addressed outreach campaigns primarily for social and employment programs. We also contacted several external organizations that have some expertise in outreach for social programs to obtain their feedback on the identified practices. Comments were received from the National Association of Social Workers and the National League of Cities, and these comments were incorporated into our final set of practices. Disability compensation. Compensate veterans for average reduction in earnings capacity due to their disabilities. Disability retirement. Provide financial support to servicemembers whose disabling conditions render them unable to perform their military duties. SSI. Provide cash payments assuring a minimum income for aged, blind, or disabled individuals who have very limited income and assets. DI and SSI (adult disability).
Applicant is unable to perform substantial gainful activity due to a condition that is expected to last at least a year or to result in death. Applicant must be a veteran with a diagnosis of an injury or disease that is found to be disabling. Also, there must be evidence of an in-service occurrence or aggravation of the injury or disease. Servicemembers must have a permanent disability that renders them unfit to perform their military duties. Also, the injury must be connected to service in the military. DI. Must have worked a minimum amount of time in employment covered by Social Security. SSI. Total earned and unearned countable income must be below the federal benefit rate. The first $20 of unearned income and various types of public assistance are not counted. Also, an individual must have less than $2,000 in countable resources. Individuals are deemed disabled or not disabled; there are no percentage ratings or partial disability determinations. Claimant is assigned a rating for each service-connected disability based on VA's Schedule for Rating Disabilities (VASRD). Ratings range from 0 to 100 percent. Servicemembers are assigned a rating for each unfitting condition based on the VASRD, with ratings ranging from 0 to 100 percent. Individual unemployability (IU) benefits provide certain veterans with compensation at the 100-percent level if their disabilities prevent them from working, even though their disability was rated below 100 percent under the VASRD. If the disabilities are deemed noncompensable, because they were not incurred in the line of duty, servicemembers are separated without benefits. If servicemembers receive a disability rating of less than 30 percent, they receive a lump sum payment upon separation. (If servicemembers have 20 or more years of service and a disability rating below 30 percent, they could still be eligible for disability retirement.)
If servicemembers have a disability rating of 30 percent or greater, they will be separated from the military and receive monthly cash benefit payments (PDRL), unless their conditions are not stable, in which case the servicemembers are placed on the TDRL. DI. Based on past average monthly earnings. SSI. The basic monthly payment is the same for all beneficiaries. This basic amount is reduced when a beneficiary receives certain types of earned and unearned income. If the veteran is found to have one or more service-connected disabilities with a combined rating of at least 10 percent, VA will pay monthly compensation. The benefits vary by rating, with a higher rating resulting in greater compensation. Veterans with severe disabilities may be entitled to special monthly compensation, which provides payments greater than the compensation payable under the VASRD for the disability. If the unfitting disabilities are determined to be service-connected, DOD takes into account the years of service and the disability rating percentage. One of about 1,300 SSA field offices assesses a claimant's nonmedical eligibility. If the claimant meets these criteria, a state DDS office evaluates the claimant's medical eligibility. Claimants may appeal an initial DDS decision back to the DDS, then to an administrative law judge (ALJ), the SSA Appeals Council, and ultimately the federal courts. One of VA's 57 regional offices assesses eligibility and assists in obtaining relevant evidence to substantiate the claim. Such evidence includes veterans' military service records (including medical records), medical examinations, and treatment records from VA medical facilities and private medical service providers. Claimants may appeal an initial VA decision to the Board of Veterans' Appeals and then, ultimately, to different levels of federal courts. The servicemember goes through a medical evaluation board (MEB) proceeding, where medical evidence is evaluated and potentially unfit conditions are identified.
The member then goes through an informal physical evaluation board (PEB) process, where a determination of fitness or unfitness for duty is made and, if the member is found unfit for duty, a combined percentage rating is assigned for all unfit conditions and the servicemember is discharged from duty. If servicemembers disagree with the informal PEB's findings and recommendations, they can, under certain conditions, appeal to the formal PEB's reviewing authority. The services differ in how many opportunities they offer servicemembers to appeal.

Below are the questions on our survey of wounded warriors, followed by the breakdown of answers we received. We received a total of 182 responses from wounded warriors. All answers are generalizable to the overall population of wounded warriors who applied for SSA disability benefits during fiscal year 2008, with a margin of error of plus or minus 8 percent or less, except questions 6g and 7f. A nonresponse bias analysis revealed that older and college-educated individuals were more likely to respond to our survey than their younger and less-educated counterparts, but did not reveal systematic differences in wounded warriors' overall experiences and opinions in applying for SSA benefits. For a complete description of our survey methods and nonresponse bias analysis, see appendix I.

1. In what branch of the military do/did you serve?

2. Are you now or were you at any point a National Guard or Reserve member?

3. As of today, have you separated from military service? If so, in what year did you separate from military service?

4. After you were wounded, injured, or became ill, how did you first learn about Social Security disability benefits? (Check only one answer)
Another servicemember or separated servicemember
DOD medical center or DOD personnel (for example, nurse case manager, social worker, AW2 Advocate, WTU or WWR squad leader)
VA medical center or other VA personnel (for example, nurse case manager, social worker, benefits counselor)

5.
How long after you were wounded, injured, or became ill did you first learn about Social Security disability benefits? (Check one answer)

6. After you first learned about Social Security disability benefits, how helpful, if at all, was any of the additional information you may have received about Social Security disability benefits from any of the following sources? (Check one answer in each row. If you did not receive any information from a source, check the first column, "Did not receive information from this source.")
6b. Another servicemember(s) or separated servicemember(s)
6c. Family member(s)
6d. Social Security Administration representative(s)
6e. DOD medical center or other DOD personnel (for example, nurse case manager, social worker, AW2 Advocate, WTU or WWR squad leader)
6f. VA medical center or other VA personnel (for example, nurse case manager, social worker, benefits counselor)
6g. Someone else (please specify)

7. How helpful, if at all, was any of the assistance you may have received from any of the following people in helping you complete your Social Security disability benefits application? (Check one answer in each row. If you did not receive any assistance from a person, check the first column.)
(Number of wounded warrior respondents)
7a. Another servicemember(s) or separated servicemember(s)
7b. Family member(s)
7c. Social Security Administration representative
7d. DOD medical center or other DOD personnel (for example, nurse case manager, social worker, AW2 Advocate, WTU or WWR squad leader)
7e. VA medical center or other VA personnel (for example, nurse case manager, social worker, benefits counselor)
7f. Someone else (please specify)

8. How long after you were wounded, injured, or became ill did you first apply for Social Security benefits?

9. When completing the Social Security disability benefits application, how easy or difficult were the following? (Check one answer in each row. If you did not do this, check the first column.)

10.
Were you aware that if an application for Social Security disability benefits is denied, the decision can be appealed?

We have reproduced the Standard Summary template as it appears in VA's 2006 directive, Veterans Health Administration Directive 2006-024, issued to formalize use of the Standard Summary. The VA SSA-DDS Standard Summary is an electronic extraction of a standardized set of pertinent medical records from a patient's overall VA records.

In addition to the contact named above, Brett Fallavollita, Lorin Obler, David Forgosh, Rebecca Makar, and Joy Myers made major contributions to this report; Bonnie Anderson, Rebecca Beale, Elizabeth Curda, Patricia Owens, and Kelly Shaw provided guidance; Stuart Kaufman, Anna Maria Ortiz, Minette Richardson, Beverly Ross, Vanessa Taylor, and Walter Vance provided methodological support; Susan Bernstein helped draft the report; Jessica Botsford and Daniel Schwimer provided legal advice; and Matthew Goldstein provided research assistance.

Recovering Servicemembers: DOD and VA Have Made Progress to Jointly Develop Required Policies but Additional Challenges Remain. GAO-09-540T. Washington, D.C.: April 29, 2009.
Army Health Care: Progress Made in Staffing and Monitoring Units that Provide Outpatient Case Management, but Additional Steps Needed. GAO-09-357. Washington, D.C.: April 20, 2009.
Military Disability Retirement: Closer Monitoring Would Improve the Temporary Retirement Process. GAO-09-289. Washington, D.C.: April 13, 2009.
Social Security Administration: Further Actions Needed to Address Disability Claims and Service Delivery Challenges. GAO-09-511T. Washington, D.C.: March 24, 2009.
Information Technology: Challenges Remain for VA's Sharing of Electronic Health Records with DOD. GAO-09-427T. Washington, D.C.: March 12, 2009.
Electronic Health Records: DOD's and VA's Sharing of Information Could Benefit from Improved Management. GAO-09-268. Washington, D.C.: January 28, 2009.
Social Security Disability: Collection of Medical Evidence Could Be Improved with Evaluations to Identify Promising Collection Practices. GAO-09-149. Washington, D.C.: December 17, 2008.
Military Disability System: Increased Supports for Servicemembers and Better Pilot Planning Could Improve the Disability Evaluation Process. GAO-08-1137. Washington, D.C.: September 24, 2008.
Electronic Health Records: DOD and VA Have Increased Their Sharing of Health Information, but More Work Remains. GAO-08-954. Washington, D.C.: July 28, 2008.
Federal Disability Programs: More Strategic Coordination Could Help Overcome Challenges to Needed Transformation. GAO-08-635. Washington, D.C.: May 20, 2008.
DOD and VA: Preliminary Observations on Efforts to Improve Care Management and Disability Evaluations for Servicemembers. GAO-08-514T. Washington, D.C.: February 27, 2008.
VA Health Care: Mild Traumatic Brain Injury Screening and Evaluation Implemented for OEF/OIF Veterans, but Challenges Remain. GAO-08-276. Washington, D.C.: February 8, 2008.
Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
Health Information Technology: HHS Is Taking Steps to Develop a National Strategy. GAO-05-628. Washington, D.C.: May 27, 2005.
Military and Veterans' Benefits: Enhanced Services Could Improve Transition Assistance for Reserves and National Guard. GAO-05-544. Washington, D.C.: May 20, 2005.
Disability benefits available through the Social Security Administration (SSA) can be an important source of financial support for some wounded warriors, and Congress has mandated that the Departments of Defense (DOD) and Veterans Affairs (VA) help them learn about and apply for such benefits. GAO was asked to determine: (1) how many wounded warriors have applied and been approved for SSA benefits and the extent to which they are receiving benefits from across the three agencies; (2) what steps DOD, VA, and SSA have taken to inform wounded warriors about SSA benefits, and the challenges that confront this process; and (3) steps taken by all three agencies to facilitate the processing of wounded warrior disability claims. Focusing on those wounded since 2001, GAO reviewed policy documents, contacted DOD and VA medical facilities, surveyed wounded warriors, and analyzed administrative data. As of December 2008, about 7,600 of the 16,000 wounded warriors who have applied for SSA disability benefits since 2001 have been approved. The majority filed their applications since 2007. Also, a sizable minority of approved claimants filed long enough after injury that they lost some retroactive benefits; SSA is considering a legislative proposal to change the retroactive period for wounded warriors. Among wounded warriors receiving DOD or VA disability benefits, 4 percent were receiving SSA benefits. In addition, more than 6 percent had applied but were not receiving SSA benefits; some still had claims pending. Those with higher disability ratings from DOD or VA were more likely to receive SSA benefits. To varying degrees, SSA, DOD, and VA have increased outreach to help wounded warriors learn about and apply for SSA disability benefits. Since 2007, SSA has increased its outreach to DOD and VA medical facilities and has tailored benefit information for wounded warriors. DOD--and to some extent VA--have incorporated SSA information into their case management practices as well.
Locally, DOD and SSA staff have worked together to reach servicemembers, but collaboration has been less common at VA hospitals. Meanwhile, there are challenges to reaching and working with this population. Many of the wounded warriors may not be ready or able to hear about SSA benefits early in their recovery. Also, brain injuries and mental health disorders can impede many wounded warriors' ability to absorb outreach information and complete the benefit application. With help from DOD and VA, SSA has been able to expedite processing of wounded warrior claims. SSA has established a nationwide policy requiring its offices to give priority to wounded warrior claims. For their part, DOD helps SSA identify claimants who are wounded warriors, and VA has expedited the transfer of its medical records and histories to SSA. However, DOD's paper-based transfer of medical records to SSA is slow, which can prolong the process by weeks or months, according to claims processing staff.
One of the key provisions of the President's Management Agenda, released in 2001, is the expansion of electronic government. To implement this provision, OMB sought to identify potential projects that could be implemented to address the issue of multiple federal agencies' performing similar tasks that could be consolidated through e–government processes and technology. To accomplish this, OMB established a team called the E–Government Task Force, which analyzed the federal bureaucracy and identified areas of significant overlap and redundancy in how federal agencies provide services to the public. The task force noted that multiple agencies were conducting redundant operations within 30 major functions and business lines in the executive branch. For example, the task force found that 10 of the 30 federal agencies it studied had ongoing activities in the National Security and Defense line of business, while 13 of the 30 agencies had ongoing activities related to Disaster Preparation and Response Management. To address such redundancies, the task force evaluated a variety of potential projects, focusing on collaborative opportunities to integrate IT operations and simplify processes within lines of business across agencies and around citizen needs. Twenty-five projects were selected to lead the federal government's drive toward e–government transformation and enhanced service delivery. In its e–government strategy, published in February 2002, OMB established a portfolio management structure to help oversee and guide the selected initiatives. The five portfolios in this structure are "government to citizen," "government to business," "government to government," "internal efficiency and effectiveness," and "cross-cutting." For each initiative, OMB designated a specific agency as the managing partner responsible for leading the initiative, and also assigned other federal agencies as partners in carrying out the initiative.
OMB initially approved Project SAFECOM as an e–government initiative in October 2001. SAFECOM falls within the government-to-government portfolio, due to its focus on accelerating the implementation of interoperable public safety communications at all levels of government. As described in its 2002 e–government report, OMB planned for SAFECOM to address critical shortcomings in efforts by public safety agencies to achieve interoperability and eliminate redundant wireless communications networks. OMB also stated that the project was expected to save lives and lead to better-managed disaster response, as well as result in billions of dollars in budget savings from "right-sized" federal communications networks and links to state networks. In order to effectively carry out their normal duties and respond to extraordinary events such as natural disasters and domestic terrorism, public safety agencies need the ability to communicate with those from other disciplines and jurisdictions. However, the wireless communications used today by many police officers, firefighters, emergency medical personnel, and other public safety agencies do not provide such capability, which hinders their ability to respond. For example, emergency agencies responding to events such as the bombing of the federal building in Oklahoma City and the attacks of September 11, 2001, experienced difficulties while trying to communicate with each other. Historically, the ability of first responders to communicate with those from other disciplines and jurisdictions has been significantly hampered because they often use different and incompatible radio systems operating on different frequencies of the radio spectrum.
In February 2003, the National Task Force on Interoperability estimated the number of emergency response officials in the United States—also called first-responders—at about 2.5 million, working for 50,000 different agencies, such as law enforcement organizations, fire departments, and emergency medical services. Response to an emergency may involve any or all of these disciplines, as may additional personnel from the transportation, natural resources, or public utility sectors. A complex array of challenges affects the government's ability to address the emergency communications interoperability problem. In addition to the vast number of distinct governmental entities involved, the National Task Force on Interoperability identified a variety of additional barriers, including the fragmentation and limited availability of radio communications spectrum for dedicated use by emergency personnel, incompatible and aging communications equipment, limited equipment standards within the public safety community, and the lack of appropriate life-cycle funding strategies. These barriers have been long-standing, and fully overcoming them will not be accomplished easily or quickly. Figure 1 summarizes the challenge of achieving seamlessly interoperable communications among the many personnel and organizations responding to an emergency. In some cases, first responders have resorted to stopgap measures to overcome communications problems. For example, some may swap radios with another agency at the scene of an emergency, others may relay messages through a common communications center, and still others may employ messengers to physically carry information from one group of responders to another. However, these measures have not always been adequate. The National Task Force on Interoperability identified several cases where the inability to communicate across agencies and jurisdictions in emergency situations was a factor in the loss of lives or delayed emergency response.
Over the last decade, several federal programs have been established to address various aspects of public safety communications and interoperability. Among these was the Public Safety Wireless Network (PSWN) program—originally developed as a joint undertaking of the departments of Justice and the Treasury. PSWN’s focus was to promote state and local interoperability by establishing a technical resource center, collecting and analyzing data related to the operational environment of public safety communications, and initiating pilot projects to test and refine interoperable technology. Another similar initiative is the Advanced Generation of Interoperability for Law Enforcement (AGILE) program, which is run by the Department of Justice’s National Institute of Justice. AGILE was created to coordinate interoperability research within the Department of Justice and with other agencies and levels of government. AGILE has four main activities: (1) supporting research and development, (2) testing and evaluating pilot technologies, (3) developing standards, and (4) educating end users and policymakers. With roughly 100 agencies that use radio communications in law enforcement activities, the federal government also has a need for interoperable communications, both internally among its own departments and agencies and with state and local entities. This need has grown since the attacks of September 11, 2001, which blurred the distinctions between public safety and national security, and has placed federal entities such as the Federal Bureau of Investigation, the U.S. Secret Service, and the U.S. Coast Guard into broader public safety roles. As a result, federal public safety personnel have an increased need to be able to communicate directly with one another and with their state and local counterparts. 
After more than 2 years, Project SAFECOM has made very limited progress in addressing its overall objective of achieving communications interoperability among entities at all levels of government. SAFECOM's lack of progress has prevented it from achieving the benefits that were expected of it as one of the 25 OMB-sponsored e–government initiatives, including improving government efficiency and realizing budgetary savings. Two factors have contributed significantly to the project's limited results. First, there has been a lack of sustained executive leadership, as evidenced by multiple shifts in program responsibility and management staff. Second, the project has not achieved the level of collaboration necessary for a complex cross-government initiative of this type. In recent months, the current project team has pursued various near-term activities that are intended to lay the groundwork for future interoperability, including establishing a governance structure that emphasizes collaboration with stakeholders and developing grant guidance, for use with awards to public safety agencies, that encourages planning for interoperability. However, it has not yet reached written agreements with several of its major stakeholders on their roles in the project or established a stable funding mechanism. Until these weaknesses are addressed, SAFECOM's ability to achieve its ultimate goal of improving interoperable communications will remain in doubt. When the e–government initiative was launched in 2002, OMB identified achieving public safety interoperability and reducing redundant wireless communications infrastructures as the goal for Project SAFECOM. Specifically, SAFECOM was to achieve federal-to-federal interoperability throughout the nation, achieve federal-to-state/local interoperability throughout the nation, and achieve state/local interoperability throughout the nation.
As of March 2004, Project SAFECOM has made very limited progress in addressing its overall objective of achieving communications interoperability among entities at all levels of government. Specifically, project officials could provide no specific examples of cases where interoperability had been achieved as a direct result of SAFECOM activities. Furthermore, program officials now estimate that a minimum level of interoperability will not occur until 2008, and full interoperability will not occur until 15 years later, in 2023. OMB expected SAFECOM’s value to citizens to include saved lives and better managed disaster response; however, because of the program’s limited progress, these benefits have not yet been achieved. OMB also forecasted that a reduction in the number of communications devices and their associated maintenance and training would result in cost savings, including “billions” in federal savings. Project officials are currently conducting a study to estimate potential federal savings, such as savings from reducing equipment purchases. However, according to the program manager, federal savings in the billions of dollars are not likely. He added, however, that state and local agencies could realize significant savings if they could rely on Project SAFECOM to conduct consolidated testing of equipment for compliance with interoperability standards. Finally, on the issue of federal agency efficiency, the project has achieved mixed results. Although SAFECOM absorbed the projects and functions of PSWN, it has not consolidated the functions of Project AGILE, despite the similarities between the two programs’ activities. According to SAFECOM’s manager, the project lacks the authority to consolidate additional programs. As we have identified in previous work, successful organizations foster a committed leadership team and plan for smooth staff transitions. 
The transition to modern management requires sustained, committed leadership on the part of agency executives and managers. As is the case with well-run commercial entities, strong leadership and sound management are central to the effective implementation of public-sector policies or programs, especially transformational programs such as the OMB-sponsored e–government initiatives. Instead of sustained management attention, SAFECOM has experienced frequent changes in management, which have hampered its progress. OMB originally designated the Department of the Treasury, which was already involved in overseeing PSWN, as the project's managing partner. As originally conceived, SAFECOM would build on PSWN's efforts to achieve interoperability among state and local agencies by building an interoperable federal communications network. However, in May 2002, the Federal Emergency Management Agency (FEMA), which had an emergency-response mission more closely aligned with SAFECOM's goals, was designated managing partner. At that time, project staff focused their efforts on securing funding and beginning outreach to stakeholders such as the AGILE program and associations representing local emergency agencies. By September 2002, FEMA had replaced its SAFECOM management team and shifted its implementation strategy to focus on helping first responders make short-term improvements in interoperability using vehicles such as demonstration projects and research. At that time, development of an interoperable federal first-responder communications system was seen as a long-term goal. Following the establishment of DHS, in May 2003, the project was taken out of FEMA and assigned to the department's new Science and Technology Directorate because of a perceived need to incorporate more technical expertise. At that time, the project was assigned to a fourth management team.
Figure 2 summarizes the major management changes that have occurred throughout Project SAFECOM's history. This lack of sustained, committed executive leadership hampered SAFECOM's ability to produce results tied to its overall objective. The changing of project teams approximately every 6 to 9 months has meant that much of the effort spent on the project has been made repeatedly to establish administrative structures, develop program plans, and obtain stakeholder input and support. Additionally, according to the project manager of PSWN, the changes in leadership have led to skepticism among some of the project's stakeholders that the project's goals can be met. The ability of Project SAFECOM to meet its overall objective has also been hampered by inadequate collaboration with the project's stakeholders. As an umbrella program meant to coordinate efforts by various federal, state, and local agencies to achieve interoperability, SAFECOM's success relies on cross-agency collaboration. As we have previously reported, cross-organizational initiatives such as this require several conditions to be successful, including: (1) a collaborative management structure; (2) clear agreements among participants on purpose, outcomes, and performance measures; (3) shared contribution of resources; and (4) a common set of operating standards. While the project's current management team has made progress in developing a collaborative management structure, SAFECOM does not yet have other necessary structures or agreements in place. Its previous management teams worked on creating a collaborative management structure by, for example, seeking input from stakeholders and drafting a memorandum of understanding among the departments of Homeland Security, Justice, and the Treasury, but these activities were not completed at the time of the transition to DHS.
Since taking control of the project in May 2003, DHS has pursued a number of Project SAFECOM activities that stress collaboration and are intended to lay the groundwork for future interoperability, according to the project's current manager. Specifically, DHS established a governance structure for the project in November 2003 that includes executive and advisory committees to formalize collaboration with stakeholders and provides a forum for significant input on goals and priorities by federal agencies and state and local representatives. The department has also conducted several planning conferences meant to identify project stakeholders and to reach agreements with them on the program's purpose and intended outcomes. One such conference, in December 2003, provided an opportunity for stakeholders to modify program goals and the tasks planned to address them. The program manager also cited a statement of support by several organizations representing local first responders as evidence that the current structure is achieving effective collaboration. In addition, project officials are working with the Commerce Department to catalog all existing federal agencies that use public safety communications systems and networks. Further, program officials noted that the SAFECOM project developed grant guidance that promotes interoperability by requiring public safety agencies to describe specific plans for achieving improved interoperability when applying for grants that fund communications equipment. This guidance represents a positive step, but it does not provide public safety agencies with complete specifications for achieving interoperability. Specifically, the guidance strongly encourages applicants to ensure that purchased equipment complies with a technical standard for interoperable communications equipment that has not yet been finalized and that, according to program officials, addresses only part of the interoperability problem.
This guidance has already been incorporated into grants awarded by the Department of Justice’s Office of Community Oriented Policing Services and the Federal Emergency Management Agency. However, Project SAFECOM has not yet fulfilled other conditions necessary for successful cross-government collaboration. First, project officials have not signed memorandums of agreement with all of the project’s stakeholders. As shown in table 1, agreements were completed on funding or program participation with five agencies in fiscal year 2003. However, DHS did not reach a 2003 agreement with the Department of the Interior or the Department of Justice, both agencies designated as funding partners. According to the SAFECOM program manager, the Department of the Interior has not fully determined the extent of its expected participation in the program, and the Department of Justice had to delay its agreement until it received approval to reprogram the necessary funds. Justice has reached an agreement with DHS for fiscal year 2004, but as of March 2004, none of the other funding partners have signed agreements covering the current year. In addition, although other federal agencies and the organizations representing state and local stakeholders are represented in SAFECOM’s governing structure and some have expressed support for the program, none has reached an agreement with DHS that commits it to provide nonfinancial assistance to the project. Finally, those agreements that were in place did not address key program parameters, such as specific program outcomes or performance measures. While the program’s stakeholders agreed to a broad set of goals and expected outcomes at the December planning meeting, as of March 2004, there was no agreement on performance measures for them. According to the program manager, new performance measures were under development. 
Second, while effective collaboration requires the sharing of resources, DHS had not received all of the funding it planned to receive from its federal partners. During fiscal year 2003, SAFECOM received only about $17 million of the $34.9 million in funding OMB allocated to it from these funding partners. About $1.4 million of that $17 million was not received until late September 2003, when only a week remained in the fiscal year. According to program officials, these funding shortfalls and delays resulted in the program’s having to delay some of the tasks it had intended to complete, such as identifying the project’s major milestones. Finally, although DHS has not yet developed a common set of operating standards for SAFECOM, efforts to identify technical standards are underway, according to program officials. For example, program officials from SAFECOM and AGILE plan to accelerate the development of an incomplete standard for interoperable communications equipment that is cited in SAFECOM's grants guidance. Program officials are also developing a document describing the requirements for public safety communications interoperability, which is intended to form the basis for future technical development efforts. SAFECOM also is supporting several demonstration projects and vendor presentations to publicize currently available interoperable systems. The absence of many aspects of successful collaboration could hamper SAFECOM officials’ ability to achieve the program’s goals. For example, the lack of written agreements with some stakeholders raises concerns about the extent to which those agencies are willing to contribute to the program’s success. Also, until performance measures and technical standards are finalized and implemented, it will be difficult to determine the extent of any progress. 
Should such difficulties continue to hamper the program’s progress in fulfilling its overall goals, solutions to the problems of public safety interoperability will be further delayed. While the lack of rapid progress in improving interoperable communications among first responders may be understandable, considering the complexity of the issues and the number of entities involved, federal efforts to address the issue as an e–government initiative have been unnecessarily delayed by management instability and weaknesses in collaboration. Since taking over management of the project in May 2003, DHS has shown greater executive commitment to the project than had previously been demonstrated. The agency has determined that a long-term, intergovernmental effort will be needed to achieve the program’s overall goal of improving emergency response through broadly interoperable first-responder communications systems, and it has taken steps to lay the groundwork for this by creating a governance structure allowing for significant stakeholder input on program management. However, DHS has made less progress in establishing written agreements with other government agencies on responsibilities and resource commitments. The DHS effort could experience difficulties if it does not reach such agreements, which have proven essential to the success of other similarly complex, cross-agency programs. To enhance the ability of Project SAFECOM to improve communications among emergency personnel from federal, state, local, and tribal agencies, we recommend that the Secretary of Homeland Security direct the Under Secretary for Science and Technology to complete written agreements with the project’s identified stakeholders, including federal agencies and organizations representing state and local governments. These agreements should define the responsibilities and resource commitments that each of those organizations will assume and include specific provisions that measure program performance. 
In written comments on a draft of this report, which are reprinted in appendix I, the Department of Homeland Security’s GAO liaison agreed that the lack of interoperable communications hampers emergency response. The official also provided additional information about activities undertaken by the current program management team since May 2003, including the implementation of a management structure that includes state and local stakeholders, the ongoing development of technical standards, and development of a database to track federal interoperability efforts. We discuss these activities in our report. Regarding our draft recommendation, this official indicated that DHS has provided draft agreements to SAFECOM’s federal funding partners, and added that DHS supports the need for further delineation of responsibilities and funding in future MOUs. Until DHS reaches specific agreements with all of SAFECOM’s stakeholders, including nonfunding federal partners and state and local partners, its ability to achieve its objectives will continue to be hindered. The official also stated that DHS agrees that performance measures are essential for adequate program management, and added that SAFECOM had developed a strategic performance management tool. However, DHS did not provide any evidence that SAFECOM had determined the specific performance measures that will be used to assess progress against its goals, or the process for applying them. Until such measures are implemented, program managers will be unable to determine the impact of their efforts. We also made technical corrections, as appropriate, in response to DHS’s comments.

We plan to send copies of this report to the Ranking Minority Member, House Committee on Government Reform; the Ranking Minority Member, Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census; and the Ranking Minority Member, Subcommittee on National Security, Emerging Threats and International Relations.
In addition, we will provide copies to the Secretary of Homeland Security and the Director of OMB. Copies will also be available without charge on GAO’s Web site at www.gao.gov. Should you have any questions concerning this report, please call me at (202) 512-6240 or John de Ferrari, Assistant Director, at (202) 512-6335. We can also be reached by e-mail at koontzl@gao.gov and deferrarij@gao.gov, respectively. Other key contributors to this report were Felipe Colón, Jr., Neil Doherty, Michael P. Fruitman, Jamie Pressman, and James R. Sweetman, Jr.
One of the five priorities in the President's Management Agenda is the expansion of electronic government (e-government)--the use of Internet applications to enhance access to and delivery of government information and services. Project SAFECOM is one of the 25 initiatives sponsored by the Office of Management and Budget (OMB) to implement this agenda. Managed by the Department of Homeland Security, the project aims to achieve interoperability among emergency response communications at all levels of government, while at the same time realizing cost savings. GAO assessed the government's progress in implementing Project SAFECOM. While its overall objective of achieving communications interoperability among emergency response entities at all levels of government is a challenging task that will take many years to fully accomplish, Project SAFECOM, in its 2-year history, has made very limited progress in addressing this objective. OMB's e-government objectives of improving operating efficiency and achieving budgetary savings within federal programs have also been largely stymied. Two major factors have contributed to the project's limited progress: (1) lack of consistent executive commitment and support, and (2) an inadequate level of interagency collaboration. In its 2 1/2-year history, Project SAFECOM has had four different management teams in three different agencies. In recent months, the current project team has pursued various near-term activities that are intended to lay the groundwork for future interoperability, including establishing a governance structure that emphasizes collaboration with stakeholders and developing guidance for making grants that can be used to encourage public safety agencies to plan for interoperability. However, it has not yet reached written agreements with several of its major stakeholders on their roles in the project or established a stable funding mechanism. 
Until these shortcomings are addressed, the ability of Project SAFECOM to deliver on its promise of improved interoperability and better response to emergencies will remain in doubt.
As the primary federal agency responsible for protecting and securing GSA facilities and federal employees across the country, FPS has the authority to enforce federal laws and regulations aimed at protecting federally owned and leased properties and the persons on such property and, among other things, to conduct investigations related to offenses against the property and persons on the property. To protect the over 1 million federal employees and about 9,000 GSA facilities from the risk of terrorist and criminal attacks, in fiscal year 2007, FPS had about 1,100 employees, of which 541, or almost 50 percent, were inspectors, as shown in figure 1. FPS also has about 15,000 contract guards. FPS inspectors are primarily responsible for responding to incidents and demonstrations, completing BSAs for numerous buildings, serving as contracting officer technical representatives (COTR) who collect and review time cards for guards, and participating in tenant agencies’ BSC meetings. FPS police officers are primarily responsible for responding to criminal incidents, assisting in the monitoring of contract guards, responding to demonstrations at GSA facilities, and conducting basic criminal investigations. FPS physical security specialists, who do not have law enforcement authority, are responsible for participating in tenant agencies’ BSC meetings, and assisting in the monitoring of contract guard services. Special agents are the lead entity within FPS for gathering intelligence for criminal and antiterrorist activities, and planning and conducting investigations relating to alleged or suspected violations of criminal laws against GSA facilities and their occupants. According to FPS, its 15,000 contract guards are used primarily to monitor facilities through fixed post assignments and access control. 
According to FPS policy documents, contract guards may detain individuals who are being seriously disruptive or violent or who are suspected of committing a crime at a GSA facility, but they do not have arrest authority. FPS provides law enforcement and physical security services to its customers. Law enforcement services provided by FPS include proactive patrol and responding to incidents in or around GSA facilities. Physical security services provided by FPS include the completion of BSAs, oversight of contract guards, participation in BSC meetings, and the recommendation of security countermeasures. The level of physical protection services FPS provides at each of the approximately 9,000 facilities varies depending on the facility’s security level. To determine a facility’s security level, FPS uses the Department of Justice’s (DOJ) Vulnerability Assessment Guidelines, which are summarized below. A level I facility has 10 or fewer federal employees, 2,500 or fewer square feet of office space, and a low volume of public contact or contact with only a small segment of the population. A typical level I facility is a small storefront-type operation, such as a military recruiting office. A level II facility has between 11 and 150 federal employees; more than 2,500 to 80,000 square feet; a moderate volume of public contact; and federal activities that are routine in nature, similar to commercial activities. A level III facility has between 151 and 450 federal employees; more than 80,000 to 150,000 square feet; and a moderate to high volume of public contact. A level IV facility has over 450 federal employees; more than 150,000 square feet; a high volume of public contact; and tenant agencies that may include high-risk law enforcement and intelligence agencies, courts, judicial offices, and highly sensitive government records. 
A level V facility is similar to a level IV facility in terms of the number of employees and square footage, but contains mission functions critical to national security. FPS does not have responsibility for protecting any level V buildings. On the basis of the DOJ Vulnerability Assessment Guidelines, FPS categorized the approximately 9,000 GSA facilities in its portfolio into five security levels, as shown in figure 2. FPS also follows DOJ guidance for completing BSAs. DOJ guidance states that BSAs are required to be completed every 2 to 4 years, depending on the security level of the building. For example, a BSA is completed every 2 years for a level IV building and every 4 years for a level I building. As part of each assessment, the inspector is required to conduct an on-site physical security analysis using FPS’s Federal Security Risk Manager (FSRM) methodology and interview the Chairman and each member of the BSC, GSA realty specialists, designated officials of tenant agencies, site security supervisors, and building managers. After completing their assessments, inspectors make recommendations to the BSC for building security countermeasures. The BSC is responsible for approving the recommended countermeasures. In some cases, FPS has delegated the protection of facilities to tenant agencies, which may have their own law enforcement authority or may contract separately for guard services. FPS is a reimbursable organization and is fully funded by collecting security fees from tenant agencies. To fund its operations, FPS charges each tenant agency a basic security fee per square foot of space occupied in a GSA facility. In 2008, the basic security fee is 62 cents per square foot and covers services such as patrol, monitoring of building perimeter alarms, dispatching of law enforcement response through its control centers, criminal investigations, and BSAs. 
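The security-level criteria and BSA review intervals described above amount to a simple classification rule. The sketch below is purely illustrative: it encodes only the size thresholds, while an actual determination also weighs public contact volume and tenant mission, and the review intervals for level II and III buildings are an assumed midpoint, since only the level I and level IV intervals are stated.

```python
# Illustrative sketch of the DOJ facility security-level criteria and the
# BSA review schedule. Simplified: real determinations also consider public
# contact volume and tenant mission, not just employee count and floor area.

def security_level(employees: int, square_feet: int) -> int:
    """Approximate a GSA facility's security level (1 = level I ... 4 = level IV)."""
    if employees <= 10 and square_feet <= 2500:
        return 1
    if employees <= 150 and square_feet <= 80000:
        return 2
    if employees <= 450 and square_feet <= 150000:
        return 3
    return 4  # level V facilities (national-security missions) fall outside FPS's portfolio

def bsa_interval_years(level: int) -> int:
    """Years between building security assessments for a given level.

    Only the level IV (2-year) and level I (4-year) intervals are stated;
    3 years for levels II and III is an assumed midpoint.
    """
    return {1: 4, 2: 3, 3: 3, 4: 2}[level]

# A storefront-sized facility such as a recruiting office is level I,
# assessed every 4 years
assert security_level(8, 2000) == 1
assert bsa_interval_years(security_level(8, 2000)) == 4
```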
FPS also collects a building-specific security fee it charges tenant agencies for services such as access control at facilities’ entrances and exits; employee and visitor checks; and the purchase, installation, and maintenance of security equipment, including cameras, alarms, magnetometers, and X-ray machines. In addition to these security services, FPS provides agencies with additional services upon request, which are funded through reimbursable Security Work Authorizations (SWA), for which FPS charges an administrative fee. For example, agencies may request additional magnetometers or more advanced perimeter surveillance capabilities. While FPS’s fiscal year 2008 annual budget totals $1 billion, for the purposes of this report we are focusing on the fees FPS estimates it will collect for the security services it provides to tenant agencies. For example, in fiscal year 2008, FPS estimates collections will total about $230 million, of which $187 million will be from its basic security services, $23 million from building-specific services, and $20 million from SWAs, as shown in figure 3. FPS currently faces several operational challenges, such as a decrease in staff, that make it difficult to accomplish its facility protection mission. This decrease in staff has affected FPS’s ability to provide mission-critical services such as proactive patrol, contract guard oversight, and quality BSAs in a timely manner. FPS is taking steps to address these challenges. For example, FPS is moving to an inspector-based workforce, hiring 150 additional inspectors, and developing a new system to improve the quality and timeliness of BSAs. However, these actions may not fully resolve FPS’s operational challenges. Providing law enforcement and physical security services to GSA facilities is inherently labor intensive and requires effective management of available staffing resources. 
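The fee structure described above can be tied together with simple arithmetic. The following sketch applies the 2008 basic security fee rate and totals the fiscal year 2008 collection estimates; the 100,000-square-foot tenant is a hypothetical example, not a figure from this report.

```python
# Illustrative arithmetic for FPS's fiscal year 2008 security fees.
# The tenant footprint below is hypothetical; the dollar figures are the
# report's estimates.

BASIC_FEE_PER_SQ_FT = 0.62  # 2008 basic security fee, dollars per square foot

def basic_security_fee(square_feet: int) -> float:
    """Annual basic security fee for the space a tenant agency occupies."""
    return round(square_feet * BASIC_FEE_PER_SQ_FT, 2)

# Estimated fiscal year 2008 collections, in millions of dollars (figure 3)
collections = {
    "basic security services": 187,
    "building-specific services": 23,
    "security work authorizations": 20,
}
total = sum(collections.values())  # about $230 million in all

# A hypothetical tenant occupying 100,000 square feet pays $62,000 per year
assert basic_security_fee(100_000) == 62000.0
```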
However, since transferring from GSA to DHS, FPS’s staff has declined, and the agency has managed its staffing resources in a manner that has diminished security and increased the risk of crime or terrorist attacks at many GSA facilities. Specifically, FPS’s staff has decreased by about 20 percent, from almost 1,400 employees at the end of fiscal year 2004 to about 1,100 employees at the end of fiscal year 2007, as shown in figure 4, while the number of buildings FPS has responsibility for has increased from approximately 8,800 to about 9,000. Over 60 percent of the decrease in staffing occurred in fiscal year 2007, when FPS’s staff decreased by about 170 employees because FPS offered voluntary early retirements, detailed staff to other ICE and DHS components, and did not replace positions lost to attrition. In fiscal year 2008, FPS initially planned to reduce its staff further. However, a provision in the 2008 Consolidated Appropriations Act requires FPS to increase its staff to 1,200 by July 31, 2008. According to FPS’s Director, the agency expects to meet this requirement and plans to increase its staff to 1,450 in fiscal year 2010. Between fiscal years 2004 and 2007, the number of employees in each position also decreased, with the largest decrease occurring in the police officer position. For example, the number of police officers decreased from 359 in fiscal year 2004 to 215 in fiscal year 2007, and the number of inspectors decreased from 600 in fiscal year 2004 to 541 at the end of fiscal year 2007, as shown in figure 5. At many facilities, FPS has eliminated proactive patrol of GSA facilities to prevent or detect criminal violations. The FPS Policy Handbook states that patrol should be used to prevent crime and terrorist actions and delegates responsibility for determining the frequency and location of patrols to FPS’s Regional Directors. 
The elimination of proactive patrol has a negative effect on security at GSA facilities because law enforcement personnel cannot effectively monitor individuals surveilling federal buildings, inspect suspicious vehicles (including vehicles that could be used to bomb federal buildings), and detect and deter criminal activity in and around federal buildings. While the number of contract guards employed in GSA facilities will not be decreased, most are stationed at fixed posts, which they are not permitted to leave, and do not have arrest authority. According to an FPS policy document, contract guards are authorized to detain individuals. However, according to some regional officials, some contract guards do not exercise their detention authority because of liability concerns. According to some FPS officials at regions we visited, not providing proactive patrol has reduced FPS’s law enforcement personnel to a reactive force. In addition, FPS officials at several regions we visited said that proactive patrol has, in the past, allowed its police officers and inspectors to identify and apprehend individuals who were surveilling GSA facilities. In contrast, when FPS is not able to patrol federal buildings, there is increased potential for illegal entry and other criminal activity. For example, in one city we visited, a deceased individual had been found in a vacant GSA facility that was not regularly patrolled by FPS. FPS officials stated that the deceased individual had been inside the building for approximately 3 months. Reports issued by multiple government entities acknowledge the importance of proactive patrol in detecting and deterring terrorist surveillance teams, which frequently use information such as the placement of armed guards and proximity to law enforcement agency stations when choosing targets and planning attacks. 
These sophisticated surveillance and research techniques can be derailed by active law enforcement patrols in and around federal facilities. According to several inspectors and police officers in one FPS region, proactive patrol is important in their region because, in the span of 1 year, there were 72 homicides within three blocks of a major federal office building and because most of the crime in their area takes place after hours, when there are no FPS personnel on duty. Tenant representatives in some regions we visited have noticed a decline in FPS’s law enforcement presence in recent years and believe this has negatively affected security. For example, one tenant stated that FPS used to provide proactive patrols in the area at night and on weekends but stopped in early 2006. Most tenant representatives we interviewed believe that FPS’s law enforcement function is highly valued and would like to see more police officers patrolling their facilities. In addition to eliminating proactive patrol, many FPS regions have reduced their hours of operation for providing law enforcement services in multiple locations, which has resulted in a lack of coverage when most federal employees are either entering or leaving federal buildings or on weekends when some facilities remain open to the public. Moreover, FPS police officers and inspectors in two cities explained that this lack of coverage has left some federal day care facilities vulnerable to loitering by homeless individuals and drug users. Some FPS police officers and inspectors also said that reducing hours has increased response time in some locations by as much as a few hours to a couple of days, depending on the location of the incident. For example, one consequence of reduced hours is that some police officers often have to travel from locations in another state in order to respond to incidents in both major metropolitan and rural locations. 
The decrease in FPS’s duty hours has jeopardized police officer and inspector safety, as well as building security. Some FPS police officers and inspectors said that they are frequently in dangerous situations without any FPS backup because many FPS regions have reduced their hours of operation and overtime. In one region, FPS officials said that a public demonstration in a large metropolitan area required that all eight police officers and inspectors scheduled to work during the shift be deployed to the demonstration for crowd control. During the demonstration, however, two inspectors had to leave the demonstration to arrest a suspect at another facility; two more also left to respond to a building alarm. Four FPS personnel remained to cover the demonstration, a fact that contributed to an unsafe environment for FPS staff. Additionally, FPS has a decreased capacity to handle situations in which a large FPS presence is needed while maintaining day-to-day operations. For example, during a high-profile criminal trial, approximately 75 percent of one region’s workforce was detailed to coordinate with local law enforcement agencies and other federal law enforcement agencies to provide perimeter security for a courthouse, leaving few FPS police officers and inspectors to respond to criminal incidents and other tenant needs in the rest of the region. This problem was also reported by inspectors in several other regions in the context of providing law enforcement at public demonstrations and criminal trials, which can occur frequently at some GSA facilities. According to FPS, in September 2007, it drafted a policy that created Crisis Response Teams, which will handle situations in which a large FPS presence is needed. 
Contract guard inspections are important for several reasons: they help ensure that guards are complying with contract requirements, that guards have up-to-date certifications for required training such as firearms and cardiopulmonary resuscitation, and that guards are completing their assigned duties. FPS policy states that guard posts should be inspected frequently, and some FPS officials have stated that guard posts should be inspected once per month. However, some posts are inspected less than once per year, in part because contract guards are often posted in buildings hours or days away from the nearest FPS inspector. For example, one area supervisor reported guard posts that had not been inspected in 18 months, while another reported posts that had not been inspected in over 1 year. In another region, FPS inspectors and police officers reported that managers told them to complete “2820” guard inspections over the telephone, instead of in person. In addition, when inspectors do perform guard inspections, they do not visit the post during each shift; consequently, some guard shifts may never be inspected by an FPS official. As a result, some guards may be supervised exclusively by a representative of the contract guard company. Moreover, in one area we visited with a large FPS presence, officials reported difficulty in getting to every post within the required 1-month period. We obtained a copy of a contract guard inspection schedule in one metropolitan city that showed 20 of 68 post inspections were completed for the month. Some tenant agencies have also noticed a decline in the level of guard oversight in recent years and believe this has led to poor performance on the part of some contract guards. 
For example, in one city, tenant representatives in a major federal building stated that many of the tenants complain about the quality of contract guard services because they do not have enough guidance from FPS inspectors, and as a result, there have been several security breaches, such as stolen property. According to Federal Bureau of Investigation (FBI) and GSA officials in one of the regions we visited, contract guards failed to report the theft of an FBI surveillance trailer worth over $500,000, even though security cameras captured the trailer being stolen while guards were on duty. The FBI did not realize it was missing until 3 days later. Only after the FBI started making inquiries did the guards report the theft to FPS and the FBI. During another incident, FPS officials reported contract guards—who were armed—taking no action as a shirtless suspect wearing handcuffs on one wrist ran through the lobby of a major federal building while being chased by an FPS inspector. In addition, one official reported that during an off-hours alarm call to a federal building, the official arrived to find the front guard post empty, while the guard’s loaded firearm was left unattended in the unlocked post. We also personally witnessed an incident in which an individual attempted to enter a level IV facility with illegal weapons. According to FPS policies, contract guards are required to confiscate illegal weapons, detain and question the individual, and notify FPS. In this instance, the weapons were not confiscated, the individual was not detained or questioned, FPS was not notified, and the individual was allowed to leave with the weapons. Building security assessments, which are completed by both inspectors and physical security specialists, are the core component of FPS’s physical security mission. However, ensuring the quality and timeliness of them is an area in which FPS continues to face challenges. 
Many inspectors in the seven regions we visited stated that they are not provided sufficient time to complete BSAs. For example, while FPS officials have stated that BSAs for level IV facilities should be completed in 2 to 4 weeks, several inspectors reported having only 1 or 2 days to complete assessments for their buildings because of pressure from supervisors to complete BSAs as quickly as possible. In one instance, a region is attempting to complete more than 100 BSAs by June 30, 2008, 3 months earlier than required, because staff will be needed to assist with a large political event in the region. In addition, one inspector in this region reported having one day to complete site work for six BSAs in a predominately rural state in the region. Some regional supervisors have also found problems with the accuracy of BSAs. One regional supervisor reported that an inspector was repeatedly counseled and required to redo BSAs when supervisors found he was copying and pasting from previous BSAs. Similarly, one regional supervisor stated that in the course of reviewing a BSA for an address he had personally visited, he realized that the inspector completing the BSA had not actually visited the site because the inspector referred to a large building when the actual site was a vacant plot of land owned by GSA. According to FPS, the Director of FPS issued a memorandum in December 2007 emphasizing the importance of conducting BSAs in an ethical manner. A 2006 report prepared by ICE’s Office of Professional Responsibility on performance in one FPS region found that nearly all of the completed BSAs were missing interviews with required stakeholders and that Interagency Security Committee security design criteria were not used in the assessment of a preconstruction project as required. 
Additionally, several tenant agencies stated that they are using or plan to find contractors to complete additional BSAs because of concerns with the quality and timeliness of the assessments completed by FPS. Furthermore, we have previously reported that several DHS components are completing their own BSAs over and above the assessment completed by FPS because FPS’s assessments are not always timely or of sufficient quality. Similarly, many tenant agencies have received waivers from FPS to complete their own BSAs, and the lack of FPS personnel with top secret clearances has led to an increase in the number of agencies granted such waivers. Some GSA and FPS officials have stated that inspectors lack the training and physical security expertise to prepare BSAs according to the standards. Currently, inspectors receive instructions on how to complete BSAs as part of a 4-week course at the Federal Law Enforcement Training Center’s Physical Security Training Program. However, many inspectors and supervisors in the regions we visited stated that this training is insufficient and that refresher training is necessary in order to keep inspectors informed about emerging technology, but such training has not been provided in recent years. Regional GSA officials also stated that they believe the physical security training provided to inspectors is inadequate and that it has affected the quality of BSAs they receive. FPS officials have stated that the Physical Security Training Program curriculum is currently being revised and that the agency recently conducted a 1-week physical security refresher course in one region and plans to conduct this training in three others. FPS’s ability to ensure the quality and timeliness of BSAs is also complicated by challenges with the current risk assessment tool. We have previously reported that there are three primary concerns with the FSRM system, the tool FPS currently uses to conduct BSAs. 
First, it does not allow FPS to compare risks from building to building so that security improvements to buildings can be prioritized. Second, current risk assessments need to be categorized more precisely. According to FPS, too many BSAs are categorized as high or low, which does not allow for a refined prioritization of security improvements. Third, FSRM does not allow for tracking the implementation status of security recommendations based on assessments. According to FPS, GSA, and tenant agency officials in the regions we visited, some of the security countermeasures, such as security cameras, magnetometers, and X-ray machines at some facilities, as well as some FPS radios and BSA equipment, have been broken for months or years and are poorly maintained. At one level IV facility, FPS and GSA officials stated that 11 of 150 security cameras were fully functional and able to record images. Similarly, at another level IV facility, a large camera project designed to expand and enhance an existing camera system was put on hold because FPS did not have the funds to complete the project. FPS officials stated that broken cameras and other security equipment can negate the deterrent effect of these countermeasures as well as eliminate their usefulness as an investigative tool. For example, according to FPS, it has investigated significant crimes at multiple level IV facilities, but the security cameras installed in those buildings were not working properly, preventing FPS investigators from identifying the suspects. Complicating this issue, FPS officials, GSA officials, and tenant representatives stated that additional countermeasures are difficult to implement because they require approval from BSCs, which are composed of representatives from each tenant agency who generally are not security professionals. 
In some of the buildings that we visited, security countermeasures were not implemented because BSC members cannot agree on what countermeasures to implement or are unable to obtain funding from their agencies. For example, an FPS official in a major metropolitan city stated that over the last 4 years, inspectors have repeatedly recommended 24-hour contract guard coverage at one high-risk building located in a high-crime area; however, the BSC has not been able to obtain approval from all of its members. In addition, several FPS inspectors stated that their regional managers have instructed them not to recommend security countermeasures in BSAs if FPS would be responsible for funding the measures because there is not sufficient money in regional budgets to purchase and maintain the security equipment. FPS is taking steps to address the operational challenges it faces. For example, FPS is implementing a plan to move to an inspector-based workforce, hiring 150 additional inspectors, and planning to develop and implement a new system to improve the quality and timeliness of BSAs. However, these actions may not fully resolve its operational challenges because, for example, some inspectors may not be able to perform both law enforcement and physical security duties simultaneously. In 2007, FPS decided to adopt an inspector-based workforce approach to protect GSA facilities. According to FPS, this approach will provide it with the capabilities and flexibility to perform law enforcement and physical security services. Under the inspector-based workforce approach, the composition of FPS’s workforce will change from a combination of inspectors and police officers to mainly inspectors; FPS will place more emphasis on physical security, such as BSAs, and less emphasis on the law enforcement part of its mission; contract guards will continue to be the front-line defense for protection at GSA facilities; and there will be a continued reliance on local law enforcement. 
According to FPS, this approach will allow it to focus on enforcing the Interagency Security Committee’s security standards, complete BSAs in a timely manner, manage the contract guard program, and test security standards. While FPS’s current workforce includes police officers, inspectors, criminal investigators/special agents, and support staff, police officers will be phased out under FPS’s new approach. Inspectors will be required to perform law enforcement activities such as patrolling and responding to incidents at GSA facilities in addition to their physical security activities. Special agents will continue to be responsible for conducting investigations. According to FPS, an inspector-based workforce will help it to achieve its strategic goals, such as ensuring that its staff has the right mix of technical skills and training needed to accomplish its mission and building effective relationships with its stakeholders. The inspector-based workforce approach presents some additional challenges for FPS and may exacerbate some of its long-standing challenges. For example, the approach does not emphasize law enforcement responsibilities, such as proactive patrol. In addition, having inspectors perform both law enforcement and physical security duties simultaneously may prevent some inspectors from responding to criminal incidents in a timely manner and patrolling federal buildings. For example, some officials stated that if inspectors are in a meeting with tenants, it will take them more time to get in their vehicle and drive to respond to an incident than it would for a police officer who is already in a car or on the street patrolling, and that, given the difficulty of scheduling meetings with BSCs, inspectors may decide not to respond to a nonviolent or nonemergency situation at another facility. 
However, according to FPS headquarters officials, if MegaCenter protocols are followed, this situation will not occur because another inspector would be called to respond to the incident. In April 2007, a DHS official and several FPS inspectors testified before Congress that FPS’s inspector-based workforce will require increased reliance on state and local law enforcement agencies for assistance with crime and other incidents at GSA facilities and that FPS would seek to enter into memorandums of agreement with local law enforcement agencies. However, according to FPS’s Director, the agency recently decided not to pursue memorandums of agreement with local law enforcement agencies, in part because of reluctance on the part of local law enforcement officials to sign such memorandums and because 96 percent of the properties in FPS’s inventory are listed as concurrent jurisdiction facilities where both federal and state governments have jurisdiction over the property. Under the Assimilative Crimes Act (ACA), state law may be assimilated to fill gaps in federal criminal law where the federal government has concurrent jurisdiction with the state. For properties with concurrent jurisdiction, both FPS and state and local law enforcement officers and agents are authorized to enforce state laws. FPS police officers, inspectors, and agents are also authorized by law to enforce federal laws and regulations for the protection of persons and property regarding property owned or occupied by the federal government, but state and local law enforcement officials would have no authority to enforce federal laws and regulations. As an alternative to memorandums of agreement, according to FPS’s Director, the agency will rely on the informal relationships that exist between local law enforcement agencies and FPS. However, whether these informal relationships will provide FPS with the assistance it will need under the inspector-based workforce approach is unknown. 
Representatives of seven of the eight local law enforcement agencies we visited were unaware of decreases in FPS’s workforce or its transition to an all-inspector workforce. Officials from five of the eight local law enforcement agencies we interviewed stated that their agencies did not have the capacity to take on the additional job of responding to incidents at federal buildings and stated that their departments were already strained for resources. Many of the FPS officials in the seven regions we visited also expressed concern about the potential lack of capacity on the part of local law enforcement. One regional FPS official, for example, reported that there have been incidents in which local law enforcement authorities refused to respond to certain types of calls, especially in one of the major cities in the region. The Secretary of Homeland Security is authorized by law to utilize the facilities and services of federal, state, and local law enforcement agencies with the consent of the agencies when the Secretary determines it to be economical and in the public interest. However, one local law enforcement official stated that even if the federal government were to reimburse the local law enforcement agency for services, the police department would not be able to hire and train enough staff to handle the extra responsibilities. Three local law enforcement agencies stated that they did not know enough about FPS’s activities in the area to judge their departments’ ability to take over additional responsibility for FPS at GSA facilities. In addition, many FPS and local law enforcement officials in the regions we visited stated that jurisdictional authority would pose a significant barrier to gaining the assistance of local law enforcement agencies. Local law enforcement representatives also expressed concerns about being prohibited from entering GSA facilities, especially courthouses, with their service weapons. Similarly, local law enforcement officials in a major U.S. 
city stated that they cannot make an arrest or initiate a complaint on federal property, so they have to wait until a FPS officer or inspector arrives. FPS officials and local law enforcement agencies have cited confusion over the law as one reason for jurisdictional difficulties. FPS also provides facility protection to approximately 400 properties where the federal government maintains exclusive federal jurisdiction. Under exclusive federal jurisdiction, the federal government has all of the legislative authority within the land area in question and the state has no residual police powers. The ACA also applies to properties with exclusive federal jurisdiction, but unlike properties with concurrent jurisdiction, state and local law enforcement officials are not authorized to enforce state and local laws. Furthermore, as with properties under concurrent federal jurisdiction, state and local law enforcement officials would have no authority to enforce federal laws and regulations. Even if the Secretary of Homeland Security utilized the facilities and services of state and local law enforcement agencies, according to ICE’s legal counsel, state and local law enforcement officials would only be able to assist FPS in functions such as crowd and traffic control, monitoring law enforcement communications and dispatch, and training. In the 2008 Consolidated Appropriations Act, Congress included a provision requiring FPS to employ no fewer than 1,200 employees, 900 of whom must be law enforcement personnel. To comply with this legislation, FPS is in the process of recruiting an additional 150 inspectors to reach the mandated staffing levels. These inspectors will be assigned to 8 of FPS’s 11 regions. According to the Director of FPS, the addition of 150 inspectors to its current workforce will allow FPS to resume providing proactive patrol and 24-hour presence based on risk and threat levels at some facilities. 
However, these additional 150 inspectors will not have an impact on the 3 regions that will not receive them. In addition, while this increase will help FPS to achieve its mission, this staffing level is still below the 1,279 employees that FPS had at the end of fiscal year 2006, when, according to FPS officials, tenant agencies experienced a decrease in service. In addition, in 2006, FPS completed a workforce study that recommended an overall staffing level of more than 2,700, including about 1,800 uniformed law enforcement positions (inspectors and police officers). According to this study, with 1,800 law enforcement positions FPS could, among other things, provide 24-hour patrol in 23 metropolitan areas, perform weekly guard post inspections, allow time for training, and participate on BSCs. FPS’s Risk Management Division is in the process of developing a new tool referred to as the Risk Assessment Management Program (RAMP) to replace its current system (FSRM) for completing BSAs. According to FPS, a pilot version of RAMP is expected to be rolled out in fiscal year 2009. RAMP will be accessible to inspectors via a secure wireless connection anywhere in the United States and will guide them through the process of completing a BSA to ensure that standardized information is collected on all GSA facilities. According to FPS, once implemented, RAMP will provide inspectors with accurate information that will enable them to make more informed and defensible recommendations for security countermeasures. FPS also anticipates that RAMP will allow inspectors to obtain information from one source, generate reports automatically, enable the agency to track selected countermeasures throughout their life cycle, address some issues with the subjectivity of BSAs, and reduce the amount of time spent on administrative work by inspectors and managers. 
FPS’s collections have not been sufficient to cover its projected operational costs in recent years, and FPS has faced projected shortfalls twice in the last 4 years. While FPS has taken actions to address these gaps, its actions have had adverse implications, including low morale among staff, increased attrition, and the loss of institutional knowledge as well as difficulties in recruiting new staff. Also, FPS’s primary means of funding its operations—the basic security fee—does not account for the risk faced by particular buildings and, depending on that risk, the level of service provided to tenant agencies or the cost of providing those services. FPS funds its operations through the collection of security fees charged to tenant agencies for security services. However, these fees have not been sufficient to cover its operational costs in recent years. FPS has addressed this gap in a variety of ways. When FPS was located at GSA it received additional funding from the Federal Buildings Fund to cover the gap between collections and costs. For example, in fiscal year 2003, the Federal Buildings Fund provided for the approximately $140 million difference between FPS’s collections and costs. Also, the first year after the transfer to DHS, fiscal year 2004, FPS needed $81 million from the Federal Buildings Fund to cover the difference between collections and the cost of operations. While fiscal year 2004 was the last year funding from the Federal Buildings Fund was available to support FPS, the agency continued to experience budgetary challenges because of the gap between operational costs and fee collections, and also because of increases in its support costs after the transfer to DHS. In fiscal year 2005, FPS was authorized to increase the basic security fee from 30 cents per square foot to 35 cents per square foot, providing approximately $15 million in additional collections. 
However, FPS’s collections were projected to be $70 million short of its operational costs that year. To make up for the projected shortfall and to avoid a potential Antideficiency Act violation, FPS instituted a number of cost-saving measures that included restricted hiring and travel, limited training and overtime, and no employee performance awards. Similarly, in fiscal year 2006, FPS faced another projected shortfall of $57 million. To address this projected shortfall, FPS maintained existing cost-saving measures and DHS had to transfer $29 million in emergency supplemental funding to FPS. DHS’s Acting Undersecretary for Management stated that this funding was necessary to avoid a shortfall in fiscal year 2006 and to ensure that security at GSA facilities would not be jeopardized. In fiscal year 2007, FPS continued its cost-saving measures, which resulted in approximately $27 million in savings, and increased the basic security fee from 35 cents per square foot to 39 cents per square foot. Because of these actions, fiscal year 2007 was the first year that FPS did not face a projected shortfall. However, according to a FPS memo to the DHS Chief Financial Officer, FPS had recommended increasing the basic security fee to 49 cents per square foot in fiscal year 2007. The memo also stated that the increase was needed to avoid a dramatic reduction in the level of security provided at GSA facilities. A basic security fee of 49 cents per square foot in fiscal year 2007 would have provided approximately $30 million in additional collections, which might have negated the need to implement cost-saving measures for that year. In addition, Booz Allen Hamilton reported that the basic security fees for fiscal years 2006 and 2007 were too low to recover FPS’s costs of providing security and stated that they should have been about 25 cents more per square foot, or about 60 cents per square foot. 
In fiscal year 2008, the basic security fee increased to 62 cents per square foot, and FPS is projecting that the fee will be sufficient to cover its operational costs and that it will not have to implement any cost-saving measures, although a FPS official noted that the loss of staff in recent years has helped decrease its operational costs. In fiscal year 2009, FPS’s basic security fees will increase to 66 cents per square foot, which represents the fourth time FPS has increased the basic security fee since transferring to DHS. According to FPS, its cost-saving measures have also had adverse implications, including low morale among staff, increased attrition and the loss of institutional knowledge, as well as difficulties in recruiting new staff. In addition, several FPS police officers and inspectors said that overwhelming workloads, uncertainty surrounding their job security, and a lack of equipment have diminished morale within the agency. These working conditions could affect the performance and safety of FPS personnel. FPS officials said the agency has lost many of its most experienced law enforcement staff in recent years, and several police officers and inspectors said they were actively looking for new jobs outside FPS. For example, FPS reports that 73 inspectors, police officers, and physical security specialists left the agency in fiscal year 2006, representing about 65 percent of the total attrition in the agency for that year. Attrition rates have steadily increased from fiscal years 2004 to 2007, as shown in figure 6. The attrition rate for the inspector position has increased, despite FPS’s plan to move to an inspector-based workforce. FPS officials said its cost-saving measures have helped the agency address projected revenue shortfalls and have been eliminated in fiscal year 2008. In addition, according to FPS, these measures will not be necessary in fiscal year 2009 because the basic security fee was increased and staffing has decreased. 
FPS’s primary means of funding its operations is the fee it charges tenant agencies for basic security services, as shown in figure 7. However, this fee does not fully account for the risk faced by particular buildings or the level of basic security services provided, and does not reflect the actual cost of providing services. Some of the basic security services covered by this fee include law enforcement activities at GSA facilities, preliminary investigations, the capture and detention of suspects, and BSAs, among other services. In fiscal year 2008, FPS charged 62 cents per square foot for basic security and has been authorized to increase the rate to 66 cents per square foot in fiscal year 2009. FPS charges federal agencies the same basic security fee regardless of the perceived threat to that particular building or agency. Although FPS categorizes buildings into security levels based on its assessment of each building’s risk and size, this categorization does not affect the security fee FPS charges. For example, level I facilities typically face less risk because they are generally small storefront-type operations with a low level of public contact, such as a small post office or Social Security Administration office. However, these facilities are charged the same basic security fee of 62 cents per square foot as a level IV facility that has a high volume of public contact and may contain high-risk law enforcement and intelligence agencies and highly sensitive government records. In addition, FPS’s basic security rate has raised questions about equity because federal agencies are required to pay the fee regardless of the level of service FPS provides or the cost of providing that service. 
For instance, in some of the regions we visited, FPS officials described situations where staff are stationed hundreds of miles from buildings under FPS’s responsibility, with many of these buildings rarely receiving services from FPS staff and relying mostly on local law enforcement agencies for law enforcement services. However, FPS charges these tenant agencies the same basic security fees as buildings in major metropolitan areas where numerous FPS police officers and inspectors are stationed and are available to provide security services. Consequently, FPS’s cost of providing services is not reflected in its basic security charges. For instance, a June 2006 FPS workload study estimating the amount of time spent on various security services showed differences in the amount of resources dedicated to buildings at various security levels. The study said that FPS staff spend approximately six times as many hours providing security services to higher-risk buildings (level III and IV buildings) as to lower-risk buildings (level I and II buildings). In addition, a 2007 Booz Allen Hamilton report on FPS’s operational costs found that FPS does not link the actual cost of providing basic security services with the security fees it charges tenant agencies. The report recommends incorporating a security fee that takes into account the complexity or the level of effort of the service being performed for the higher-level security facilities. The report states that FPS’s failure to consider the costs of protecting buildings at varying risk levels could result in some tenants being overcharged. We also have reported that basing government fees on the cost of providing a service promotes equity, especially when the cost of providing the service differs significantly among different users, as is the case with FPS. Changes in FPS’s security fees have also had adverse implications for agencies, specifically because of frequent late notifications about rate increases. 
In the last 3 years, FPS has increased its rates but has not notified agencies of these increases until late in the federal budget cycle. According to an official from ICE’s Office of the Chief Financial Officer, FPS is required to announce its basic security fee at the same time GSA announces its rent charges, which are typically set by June 1 of the preceding year. However, since transferring to DHS, FPS has not complied with this schedule. For example, FPS announced that its administrative security fees for fiscal year 2007 would increase from 8 percent to 15 percent several months after tenant agencies received their annual appropriation for 2007. In March of fiscal year 2008, FPS announced that the basic security fee for fiscal year 2008 would increase from 57 cents per square foot to 62 cents per square foot and that the increase would be retroactive. Consequently, most tenant agencies have to fund these unexpected increases outside of the federal budget cycle. GSA officials said this would have a significant impact on many federal agencies, causing them to divert funds from operational budgets to account for this unexpected increase in cost. Several stakeholders have raised questions about whether FPS has an accurate understanding of the cost of providing security at GSA facilities. An official from ICE’s Office of the Chief Financial Officer said FPS has experienced difficulty in estimating its costs because of inaccurate cost data. In addition, OMB officials said they have asked FPS to develop a better cost-accounting system in past years. The 2007 Booz Allen Hamilton report found that FPS does not have a methodology to assign costs to its different security activities and that it should begin capturing the cost of providing various security services to better plan, manage, and budget its resources. We have also previously cited problems with ICE’s and FPS’s financial system, including problems associated with tracking expenditures. 
We also have previously reported on the importance of having accurate cost information for budgetary purposes and to set fees and prices for services. We have found that without accurate cost information, it is difficult for agencies to determine if fees need to be increased or decreased, accurately measure performance, and improve efficiency. Also, federal accounting standards and the Chief Financial Officers Act of 1990 require agencies to maintain accurate cost data and set standards for federal agencies that include requirements for determining and reporting on the full costs of government services. To determine how well it is accomplishing its mission to protect GSA facilities, FPS has identified some output measures that are a part of OMB’s Program Assessment Rating Tool. These measures include determining whether security countermeasures have been deployed and are fully operational, the amount of time it takes to respond to an incident, and the percentage of BSAs completed on time. Some of these measures are also included in FPS’s federal facilities security index, which is used to assess its performance. However, FPS has not developed outcome measures to evaluate the net effect of its efforts to protect GSA facilities. While output measures are helpful, outcome measures are also important because they can provide FPS with broader information on program results, such as the extent to which its decision to move to an inspector-based workforce will enhance security at GSA facilities or help identify the security gaps that remain at GSA facilities and determine what action may be needed to address them. The Government Performance and Results Act requires federal agencies to, among other things, measure agency performance in achieving outcome-oriented goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their progress. 
In addition, we and other federal agencies have maintained that adequate and reliable performance measures are a necessary component of effective management. We have also found that performance measures should provide agency managers with timely, action-oriented information in a format conducive to helping them make decisions that improve program performance, including decisions to adjust policies and priorities. However, FPS does not appear to be using these key management practices to manage its security program. FPS is also limited in its ability to assess the effectiveness of its efforts to protect GSA facilities, in part because it does not have a data management system that will allow it to provide complete and accurate information on its security program. Without a reliable data management system, it is difficult for FPS and others to determine the effectiveness of its efforts to protect GSA facilities or for FPS to accurately track and monitor incident response time, effectiveness of security countermeasures, and whether BSAs are completed on time. Currently, FPS primarily uses Web Records Management System (WebRMS) and Security Tracking System (STS) to track and monitor output measures. These output measures include the efficiency of FPS’s response to calls for law enforcement assistance; whether appropriate security countermeasures have been recommended, deployed, and are fully operational; and the extent to which BSAs are completed on time. However, FPS acknowledged that there are weaknesses with these systems that make it difficult to accurately track and monitor these output measures. For example, according to FPS, STS, which is used to track security countermeasures, does not allow for tracking the implementation status of recommended countermeasures, such as security cameras, bollards, or X-ray machines. Without this ability, FPS has difficulty determining whether it has mitigated the risk of GSA facilities to crime or a terrorist attack. 
In addition, according to many FPS officials at the seven regions we visited, the data maintained in WebRMS may not be a reliable and accurate indicator of crimes and other incidents for several reasons. First, because FPS does not write an incident report for every incident, not all incidents are entered in WebRMS. Second, according to FPS, there are many incidents phoned into the MegaCenter that would not show up in WebRMS because FPS police officers or inspectors did not complete the report. Third, the types and definitions of prohibited items vary not only region by region, but also building by building. For example, a can of pepper spray may be prohibited in one building, but allowed in another building in the same region. Standard guidelines and definitions would minimize the amount of subjectivity that police officers and inspectors may apply when deciding how and what types of information to enter in WebRMS. Finally, according to FPS, having fewer police officers has decreased the total number of crime and incident reports entered in WebRMS because there is less time spent on law enforcement activities. Officials in one FPS region we visited stated that 2 years ago there were 25,000 reports filed through WebRMS. However, this year they are projecting about 10,000 reports because there are fewer FPS police officers to respond to an incident and write a report if necessary. FPS officials also stated that inspectors and police officers often have to enter the same information into multiple data systems, leading to a greater risk of human error, such as transposed numbers, which affects the accuracy of the data. Our past work has shown that when data management systems are not integrated and compatible, excessive use of resources and inconsistent analysis of program results can occur. FPS has recognized the need to improve its current data management systems. As mentioned earlier, FPS is developing RAMP and expects it to be fully operational in 2011. 
According to FPS, the development and implementation of RAMP will provide it with an integrated system and a set of standard guidelines to use when collecting information, such as the types and definition of incidents and incident response times. FPS is also planning to procure a computer-assisted dispatch system for use at its MegaCenters, which will help to improve its ability to accurately track, analyze, and report crime and other incidents. Providing law enforcement and physical security support services to GSA facilities requires effective management of available staffing and funding resources. Since FPS transferred to DHS, its understanding of its staffing needs has changed frequently, and it is unclear whether the agency has an accurate estimate of the number of employees needed to achieve its mission. While FPS has taken some actions to address the operational and funding challenges it faces, many of these actions may not fully resolve these challenges. For example, FPS currently is in the process of changing to an all-inspector workforce and adding 150 inspectors to its workforce. The additional inspectors could have a positive impact on the eight regions where they will be assigned. However, they may not enhance FPS’s ability to provide law enforcement services such as proactive patrol and 24-hour response to GSA facilities. In addition, it is unclear whether FPS’s inspector-based workforce and the additional 150 inspectors will improve its oversight of contract guards or the quality and timeliness of BSAs. FPS could also benefit from better aligning its staffing resources with the agency’s goals and performance. This alignment could enable FPS to identify gaps in security protection at GSA facilities and assign employees to the highest-risk areas. It is also important that FPS ensure that its decision to move to an inspector-based workforce does not hamper its ability to protect GSA facilities. 
For example, FPS believes that it can rely on the informal relationships that exist between it and local law enforcement agencies for assistance with responding to incidents at GSA facilities. However, local law enforcement agencies and FPS regional officials believe that there are jurisdictional issues that need to be clarified. Moreover, FPS’s primary means of funding its operations—the basic security fee—does not account for the level of risk faced by buildings and, depending on that risk, the level of service provided or the cost of providing security at GSA facilities. This issue raises questions about whether some federal agencies are being overcharged by FPS. FPS also does not have a detailed understanding of its operational costs, including accurate information about the cost of providing its security services at GSA facilities with different risk levels. Without this type of information, FPS has difficulty justifying the rate of the basic security fee to its customers. We have found that by having accurate cost information, an organization can demonstrate its cost-effectiveness and productivity to stakeholders, link levels of performance with budget expenditures, provide baseline and trend data for stakeholders to compare performance, and provide a basis for focusing an organization’s efforts and resources to improve its performance. In addition, FPS has generally funded its operations by using a fee-based system. However, both historically and recently, FPS’s collections have not been sufficient to cover its projected operational costs, and the steps it has taken to address the projected shortfalls have reduced staff morale and diminished security at GSA facilities. Thus, we believe it is important that FPS assess whether the fee-based system or an alternative funding mechanism is most appropriate for funding the agency. 
Given the operational and funding challenges FPS faces, it is important that the agency develop and implement business and performance management practices that will ensure that it is providing services efficiently, effectively, and in accordance with risk-based management. Having specific guidance and standards for measuring its efforts to protect GSA facilities from the risk of terrorist attacks, crime, or related incidents will also be beneficial to FPS. We have found that numerous federal and private sector organizations use security-related performance measures to help improve security, make decisions about risk management and resource allocation, and evaluate program effectiveness. Performance measurements can also be used to prioritize security needs and justify investment decisions so that an agency can maximize limited resources. While the output measures FPS uses are helpful, outcome measures are also important because they can provide FPS with broader information on program results, such as the extent to which its decision to move to an inspector-based workforce will enhance security at GSA facilities or help identify the security gaps that remain at GSA facilities and what action may be needed to address them. Finally, a reliable data management system will allow FPS to provide complete and accurate information on its security program. In past reports, we have discussed the importance of maintaining timely and accurate data to help monitor and improve the effectiveness of government programs. We have found that in order to make informed decisions and ensure accountability, agencies need data management systems that can generate timely, accurate, and useful information. 
Without a reliable data management system, it is difficult for FPS and other stakeholders to determine the effectiveness of its efforts to protect GSA facilities or for FPS to accurately track and monitor incident response time, the effectiveness of security countermeasures, and whether BSAs are completed on time. To improve its ability to address its operational and funding challenges and to ensure that it has useful performance measures and reliable information to assess the effectiveness of efforts to protect GSA facilities, we recommend that the Secretary of Homeland Security direct the Director of FPS to take the following six actions:

- develop and implement a strategic approach to manage its staffing resources that, among other things, determines the optimum number of employees needed to accomplish its facility protection mission and allocates these resources based on risk management principles and the agency’s goals and performance measures;
- clarify the roles and responsibilities of local law enforcement agencies in responding to incidents at GSA facilities;
- improve FPS’s use of the fee-based system by developing a method to accurately account for the cost of providing security services to tenant agencies and ensuring that its fee structure takes into consideration the varying levels of risk and service provided at GSA facilities;
- evaluate whether FPS’s current use of a fee-based system or an alternative funding mechanism is the most appropriate manner to fund the agency;
- develop and implement specific guidelines and standards for measuring its performance, including outcome measures to assess its performance and improve the accountability of FPS; and
- improve how FPS categorizes, collects, and analyzes data to help it better manage and understand the results of its efforts to protect GSA facilities.

We provided a draft of this report to DHS for its review and comment. DHS concurred with the report’s findings and recommendations. 
DHS’s comments can be found in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, DHS’s Assistant Secretary for Immigration and Customs Enforcement, and appropriate congressional committees. We also will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To determine any operational challenges the Federal Protective Service (FPS) faces in protecting General Services Administration (GSA) facilities and actions it has taken to address those challenges, we interviewed about 170 FPS police officers, inspectors, special agents, support personnel, and administrators at headquarters and at 7 of FPS’s 11 regions. These 7 regions represent about 59 percent of FPS’s facility protection portfolio. We also interviewed about 53 GSA headquarters and regional management and security officials, 101 tenant agency officials from 22 building security committees, and 8 local law enforcement agencies about FPS’s efforts to protect federal employees, facilities, and the public. These 7 regions were selected using both quantitative and qualitative criteria in order to maximize diversity among the site visits. In selecting regions to visit, we considered the number of buildings in each region, geographic dispersion across the United States, the number of FPS personnel in each region, and input from FPS and GSA officials. 
We analyzed FPS staffing data from fiscal years 2004 through 2007 to identify trends in staffing. To validate staffing data received from FPS, we compared the data to staffing numbers from the Office of Personnel Management and found them to be accurate. We also analyzed laws relating to jurisdictional issues at GSA facilities and FPS’s authority. We analyzed an FPS workforce study, Immigration and Customs Enforcement (ICE) Office of Professional Responsibility performance reports, and the FPS policy handbook. To determine any budgetary and funding challenges FPS faces and actions it has taken to address them, we interviewed budget officials from the Office of Management and Budget, ICE’s Office of the Chief Financial Officer, FPS, and GSA. We analyzed appropriation acts and FPS’s budget and budget justifications for fiscal years 2004 through 2009. In addition, we analyzed FPS’s 2006 workforce study and a 2007 Booz Allen Hamilton activity-based costing study. To determine how FPS measures the effectiveness of its efforts, we analyzed FPS’s strategic plan for fiscal years 2008 through 2011 and strategic planning and performance reports and interviewed officials from FPS’s Risk Management Division. We also analyzed crime and incident data from FPS’s Web Records Management System and interviewed FPS officials about the reliability of the data. Because of inconsistency in reporting among regions and problems with ensuring that all incidents are included in the data, we determined that these data do not reliably capture all crime and incidents in federal buildings. Because of the sensitivity of some of the information in this report, we cannot provide information about the specific locations of crime or other incidents discussed. We conducted this performance audit from April 2007 to June 2008 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tammy Conquest, Assistant Director; Daniel Cain; Collin Fallon; Brandon Haller; Aaron Johnson; Katie Hamer; Carol Henn; Daniel Hoy; and Susan Michal-Smith made key contributions to this report.
In 2003, the Federal Protective Service (FPS) transferred from the General Services Administration (GSA) to the Department of Homeland Security (DHS). FPS provides physical security and law enforcement services to about 9,000 GSA facilities. To accomplish its mission of protecting GSA facilities, FPS currently has an annual budget of about $1 billion, 1,100 employees, and 15,000 contract guards located throughout the country. Recently, FPS has faced several challenges protecting GSA facilities and federal employees. This report provides information and analysis on (1) FPS's operational challenges and actions it has taken to address them, (2) funding challenges FPS faces and actions it has taken to address them, and (3) how FPS measures the effectiveness of its efforts to protect GSA facilities. To address these objectives, we conducted site visits at 7 of FPS's 11 regions and interviewed FPS, GSA, tenant agencies, and local law enforcement officials. FPS faces several operational challenges that hamper its ability to accomplish its mission, and the actions it has taken may not fully resolve these challenges. FPS's staff decreased by about 20 percent between fiscal years 2004 and 2007. FPS has managed the decreases in its staffing resources in a manner that has diminished security at GSA facilities and increased the risk of crime or terrorist attacks at many GSA facilities. For example, with the exception of a few locations, FPS no longer provides proactive patrols at GSA facilities to detect and prevent criminal incidents and terrorism-related activities. FPS also continues to face problems with managing its contract guard program and ensuring that security countermeasures, such as security cameras and magnetometers, are operational. 
For example, according to FPS, it has investigated significant crimes at multiple high-risk facilities, but the security cameras installed in those buildings were not working properly, preventing FPS investigators from identifying the suspects. To address some of its operational challenges, FPS is moving to an inspector-based workforce, which seeks to eliminate the police officer position and rely primarily on FPS inspectors for both law enforcement and physical security activities. FPS believes that this change will ensure that its staff has the right mix of technical skills and training needed to accomplish its mission. FPS is also hiring an additional 150 inspectors and developing a new system for completing building security assessments. However, these actions may not fully resolve FPS's operational challenges because, for example, inspectors might not be able to fulfill both law enforcement and physical security roles simultaneously. FPS also faces funding challenges, and the actions it has taken to address them have had some adverse implications. To fund its operations, FPS charges each tenant agency fees for its security services. In fiscal years 2005 and 2006, FPS's projected expenses exceeded its collections and DHS had to transfer funds to make up the difference. FPS also instituted cost-saving measures such as restricting hiring and travel and limiting training and overtime. According to FPS, these measures have hurt staff morale and safety and increased attrition. FPS has been authorized to increase the basic security fee four times since it transferred to DHS, and it currently charges tenant agencies 62 cents per square foot for basic security services. Because of these actions, FPS's collections in fiscal year 2007 were sufficient to cover costs, and FPS projects that collections will also cover costs in fiscal year 2008. 
However, FPS's primary means of funding its operations--the basic security fee--does not account for the risk faced by specific buildings, the level of service provided, or the cost of providing services, raising questions about equity. Several stakeholders also expressed concern about whether FPS has an accurate understanding of its costs to provide security at federal facilities. FPS has developed output measures, but lacks outcome measures to assess the effectiveness of its efforts to protect federal facilities. Its output measures include determining whether security countermeasures have been deployed and are fully operational. However, FPS has not developed outcome measures to evaluate its efforts to protect federal facilities that could provide FPS with broader information on program results. FPS also lacks a reliable data management system for accurately tracking performance measures.
The first joint report on contracting in Iraq and Afghanistan required under amendments from the NDAA for FY2011 was to be issued by February 1, 2011, with subsequent reports due in 2012 and 2013. In the reports, DOD, State, and USAID are to provide the following for each 12-month reporting period:

- total number and value of contracts and assistance instruments awarded;
- total number and value of active contracts and assistance instruments;
- the extent to which such contracts and assistance instruments used competitive procedures;
- total number of contractor and assistance personnel at the end of each quarter of the reporting period;
- total number of contractor and assistance personnel performing security functions at the end of each quarter of the reporting period; and
- total number of contractor and assistance personnel killed or wounded.

The joint reports are also to include the sources of information and data used to compile the required information; a description of any known limitations of the data reported, including known limitations of the methodology and data sources used; and plans for strengthening collection, coordination, and sharing of information on contracts and assistance instruments in Iraq and Afghanistan through improvements to common databases. The first joint report submitted by the agencies in May 2011 provides an overview of the reporting requirements, an introduction, and a section for each agency to present its data. Each agency was responsible for collecting its fiscal year 2010 data from relevant sources and compiling its section of the report. The reporting requirements in the NDAA for FY2011 build upon prior national defense authorization act requirements. Specifically, Section 861 of the NDAA for FY2008 directed the Secretaries of Defense and State and the USAID Administrator to sign an MOU related to contracting in Iraq and Afghanistan. The law, as amended by the NDAA for FY2010, specified a number of issues to be covered in the MOU. 
These include specifying each agency’s roles and responsibilities in matters related to contracting in the two countries, determining responsibility for establishing procedures for and coordination of movement of contractor personnel in the two countries, and identifying common databases to serve as information repositories on contracts and assistance instruments with more than 30 days of performance in Iraq or Afghanistan and the personnel working in either country under those contracts and assistance instruments. The common databases are to include a brief description of each contract and assistance instrument, its total value, and whether it was awarded competitively; for personnel working under contracts or assistance instruments, the databases will include the total number employed, total number performing security functions, and total number killed or wounded. Tracking this information should provide much of the information the agencies are to include in the joint reports. In July 2008, DOD, State, and USAID agreed in an MOU that SPOT would serve as their common database and be the system of record for the statutorily required contract and personnel information. The agencies revised their MOU in April 2010, making SPOT their system for also tracking assistance instruments and associated personnel. SPOT is a web-based system initially developed by the U.S. Army to track detailed information on a limited number of contractor personnel deployed with U.S. forces. The 2010 MOU specified that SPOT would include information on DOD, State, and USAID contracts and assistance instruments with more than 30 days of performance in Iraq or Afghanistan or valued at more than $100,000, as well as information on the personnel working under those contracts and assistance instruments. 
SPOT is configured so that it can track individuals by name and unique identifier, such as Social Security number, and record information, including the contracts they are working under, deployment dates, and next of kin. The agencies agreed that contract-related information, such as value and extent of competition, is to be imported into SPOT from FPDS-NG, the federal government’s system for tracking information on contracting actions. According to the MOU, DOD is responsible for all basic maintenance, upgrades, training, and systems operations costs, but the agencies agreed to negotiate funding arrangements for any agency-unique requirements. Within DOD, a program management office has responsibility for the development, integration, testing, training, and deployment of SPOT and, as such, oversees the contractor that operates, maintains, and sustains the system. DOD, State, and USAID have phased in SPOT’s implementation, with each developing its own policies and procedures governing the system’s use:

- DOD designated SPOT in January 2007 as its primary system for collecting data on contractor personnel deployed with U.S. forces. At that time, it implemented a contract clause directing firms to enter data into SPOT on U.S., third country, and local nationals working under its contracts in Iraq or Afghanistan that meet reporting thresholds.
- State issued a policy in March 2008 that included language to be incorporated in applicable contracts requiring contractors to enter data into SPOT on U.S., third country, and local nationals working in either Iraq or Afghanistan. State expanded this requirement in January 2009 to cover personnel working under certain assistance instruments in the two countries. As amended, State’s assistance policy directed that U.S. and third country nationals working under grants must be entered into SPOT but allowed for discretion in determining whether local nationals were entered given safety and security concerns. In January 2011, State revised its assistance guidance and related provision to allow grantees with locally hired Iraqi or Afghan personnel to report aggregate numbers of local nationals without providing personally identifying information when safety concerns exist.
- USAID issued a directive in April 2009 that required the use of contract clauses and assistance provisions requiring contractors and assistance recipients in Iraq to enter personnel data into SPOT. The directive explicitly excluded Iraqi entities and nationals from having to be entered into SPOT until a classified system is in place. In July 2010, USAID issued a directive establishing a similar requirement for Afghanistan. However, that policy notes that procedures will be provided separately for entering information on Afghan nationals; to date, such procedures have not been issued.

DOD, State, and USAID’s joint report cited a number of limitations associated with SPOT’s implementation, and as a result, the agencies relied on a variety of other data sources to develop the report. The only exception was State’s use of SPOT as the basis for its contractor personnel numbers. Whereas GAO previously collected and compiled data from numerous sources, including manually compiled lists of contracts and assistance instruments and personnel data obtained through surveys, officials from the three agencies told us they decided to rely on existing databases and sources to the greatest extent possible. Table 1 summarizes the data sources used to prepare the joint report and the reasons cited by the agencies for not using SPOT. The data presented in the agencies’ joint report had significant limitations, many of which were not fully disclosed. As a result, the data should not be used to draw conclusions about contracts, assistance instruments, and associated personnel in Iraq or Afghanistan for fiscal year 2010 or to identify trends over time. 
While the agencies collectively reported $22.7 billion in fiscal year 2010 obligations, the joint report understates the three agencies’ obligations on contracts and assistance instruments with work performed in Iraq and Afghanistan by at least $4 billion, nearly all of it for DOD contracts. We identified this minimum amount by comparing the underlying data the agencies used to prepare the joint report with data we obtained from the agencies during our prior review of contracts and assistance instruments with work in either country during the first half of fiscal year 2010. The level of underreporting we identified does not fully account for new awards or obligations that the agencies made in the second half of fiscal year 2010. DOD and State underreported their contracts and obligations in the joint report because they relied solely on FPDS-NG to identify contracts with work performed in Iraq or Afghanistan. FPDS-NG allows agencies to report only one principal place of contract performance. However, contracts can have performance in multiple countries, and the reporting requirement applies to contracts with performance in Iraq or Afghanistan, even if neither country is the principal place of performance. Further, not all DOD contracts with performance in Iraq and Afghanistan were entered into FPDS-NG. Neither DOD nor State disclosed any limitations with their FPDS-NG queries or that there could be additional contracts with associated obligations with work in the two countries. Using FPDS-NG to identify contracts with a principal place of performance in Iraq and Afghanistan, DOD reported $18.4 billion in fiscal year 2010 obligations but underreported its contract obligations by at least $3.9 billion. 
Specifically, we identified an additional 20,810 contracts and orders that totaled about $3.5 billion in fiscal year 2010 obligations that DOD had reported to us last year but were not included in the joint report because the principal place of performance was not Iraq or Afghanistan. For example, DOD previously reported to us two contracts for translation and interpretation services with performance in Iraq and/or Afghanistan with $1.5 billion in fiscal year 2010 obligations, but these contracts were not included in the joint report because FPDS-NG identified the principal place of performance as the United States. We also identified additional contracts that were previously reported to us but not included in the joint report because they were not in FPDS-NG. Among those, we identified 13 contracts with $418 million in obligations during the first half of fiscal year 2010, including combat support contracts for information technology services and linguist support in the two countries. DOD did not report any assistance instruments with performance in Iraq or Afghanistan. This is consistent with our 2010 report, for which we found DOD had no assistance instruments with performance in either country during fiscal year 2009 or the first half of fiscal year 2010. For the joint report, State relied on FPDS-NG and reported $1.8 billion in contract obligations in Iraq and Afghanistan for fiscal year 2010. We found, however, that State underreported its fiscal year 2010 contract obligations by at least $62 million by not including 49 contracts and orders that were reported to us last year. Specifically, we identified a State delivery order for facility management with about $54.3 million in obligations in fiscal year 2010 that was not in the joint report because the United States was identified as the principal place of performance in FPDS-NG, as opposed to either Iraq or Afghanistan. 
We also identified another 48 contracts and orders that State reported to us last year as having performance in either country that were not identified through State’s FPDS-NG query. These include 23 contracts and orders awarded by the embassies in Iraq and Afghanistan with about $1 million in obligations in the first half of fiscal year 2010, even though the joint report states that it includes all procurement activities contracted for by State’s missions in the two countries. While the reporting requirement applies to both contracts and assistance instruments, State did not report any assistance instruments with performance in Iraq or Afghanistan or provide any explanation in the joint report as to why such information was not included. Based on data provided by State last year, we identified 155 assistance instruments with work performed in Iraq and/or Afghanistan with $120 million obligated during the first half of fiscal year 2010. These assistance instruments covered a wide range of activities, such as media workshops, small business development, and capacity building for nongovernmental organizations. State officials informed us that they did not include information on assistance instruments as they were not including information on personnel working under assistance instruments because of limitations, as discussed below. They told us, however, that they plan to include assistance instrument information in next year’s joint report. Unlike DOD and State, USAID did not rely on FPDS-NG as its data source for the number and value of contracts. As explained in the joint report, USAID knew gaps existed in its FPDS-NG data, particularly for Afghanistan, so it used data from its financial management system, which contains information on the number and value of both contracts and assistance instruments. USAID reported $2.6 billion in contract and assistance instrument obligations in Iraq and Afghanistan for fiscal year 2010. 
However, by comparing the data from the financial management system to data USAID provided us last year, we found that the agency underreported its obligations by about $3.9 million. These obligations were for 16 contracts and 8 assistance instruments in the first half of fiscal year 2010 that were not included in the joint report. Almost all of the contracts that were not reported were personal services contracts. USAID officials told us they did not report personal services contracts because they consider such contractor personnel to be USAID employees, but this was not disclosed in the joint report. Further, unlike DOD and State, which provided competition information for nearly all contracts included in the joint report, USAID provided competition data on fewer than half the active contracts and assistance instruments included in the joint report. Other than acknowledging FPDS-NG data gaps, USAID provided no specific explanation for why the competition data presented in the report are incomplete. We identified a number of limitations and methodological challenges that resulted in both over- and underreporting of contractor and assistance personnel and call into question the overall reliability of the data in the joint report. However, we were not able to determine the full magnitude of the discrepancies. For the joint report, DOD relied on quarterly censuses as its source of data on contractor personnel, including personnel performing security functions. DOD provided the numbers of contractor personnel, broken out by nationality, in Iraq and Afghanistan at the end of each quarter. However, the numbers for local nationals working under contracts in Afghanistan were generally overreported. According to the U.S. Central Command (CENTCOM) official who oversees the compilation of the census, a methodological error resulted in double counting of local nationals in Afghanistan for the first three fiscal year 2010 quarters. 
The error was discovered as the fourth quarter census was being compiled, which resulted in a significant reduction in the number of local national contractor personnel in Afghanistan for that quarter. To illustrate the magnitude of the double counting, DOD reported 73,392 local national contractor personnel in Afghanistan for the third quarter of fiscal year 2010 and only 34,222 in the fourth quarter—a difference of 39,170 personnel. No adjustments were made to the prior three quarters to correct for the double counting. Furthermore, the joint report does not disclose this error or explain what occurred, except to note that there are challenges associated with counting local national personnel in Afghanistan. Officials from the Office of the Deputy Assistant Secretary of Defense for Program Support and CENTCOM told us they have a high level of confidence in the census numbers for all contractor personnel except local nationals in Afghanistan. However, as we noted in October 2010, DOD officials overseeing the census characterized the census as providing rough approximations of the actual numbers of contractor personnel in either country. They explained that several challenges pertaining to counting local nationals and validating contractor-reported data have hindered their ability to collect accurate and reliable personnel data. State relied on SPOT as its source for data on contractor personnel, which led to several omissions and discrepancies. Based on our analysis of State’s reported personnel data and the contract data reported from FPDS-NG, we identified 50 contracts that met SPOT reporting requirements but were not in the system. Therefore, personnel working on those contracts in Iraq and Afghanistan were not included in the joint report. For example, we identified 5 contracts for construction with about $525 million in fiscal year 2010 obligations with no contractor personnel reported in SPOT. 
Further, at the end of the second quarter of fiscal year 2010, there were 1,336 fewer contractor personnel in SPOT than were reported to us last year from State’s surveys of contractor personnel in the two countries. Such omissions are consistent with what State officials told us in 2010—that manually compiled surveys of contractor personnel in either country have some limitations but provide more accurate information than SPOT. Additionally, while the joint report presents the numbers as “contractor personnel,” and we confirmed with State officials that the numbers were only to include contractor personnel, we found that about 13 percent of the personnel State reported as contractor personnel were actually working under assistance instruments. In addition, State did not include in the joint report the number of personnel working under assistance instruments in Iraq and Afghanistan or explain why assistance personnel were not included. State officials informed us that although State’s policy required assistance personnel to be entered into SPOT since January 2009, assistance recipients had been reluctant to enter information into the system. As a result, for fiscal year 2010, officials told us that little information regarding personnel working under assistance instruments had been entered into the system. However, State could have relied on other data sources to provide the required personnel information. Last year, based on surveys State conducted of its assistance recipients, we reported that there were at least 8,074 personnel working under State’s assistance instruments in Iraq and Afghanistan at the end of the second quarter of fiscal year 2010. We cautioned that the number was likely understated because of several factors. 
State officials informed us that response rates to their requests for personnel numbers from assistance instrument recipients were low; they also stated that local nationals were not always captured in personnel counts because it was not feasible or it was too difficult to obtain accurate information. In reporting the number of personnel performing security functions, State relied exclusively on SPOT and did not disclose any limitations with that source. As we reported last year, SPOT cannot be used to reliably distinguish personnel performing security functions from other contractor personnel, as each of the three available methods has limitations. State officials responsible for compiling the joint report told us they queried SPOT based on security-related job titles. Upon review of the data, officials from the Bureau of Diplomatic Security noticed that the numbers appeared low. An analyst from the Bureau of Diplomatic Security identified five large security contracts with numerous personnel who did not have the word “security” in their job titles and as a result were not included in the query results, a risk we noted in our prior report. The SPOT query indicated that there were 3,924 State contractor personnel performing security functions in Iraq and Afghanistan at the end of the fourth quarter of fiscal year 2010. State revised this number and reported 8,034 personnel performing security functions for that quarter. Despite the fact that the SPOT data were incomplete and had to be manually adjusted, the joint report provides no explanation and does not identify limitations with the SPOT data for determining the number of personnel providing security functions. In presenting personnel numbers in the joint report, USAID was the only agency that used estimates as opposed to actual counts for the total number of contractor and assistance personnel, as allowed by the reporting requirement. 
USAID also used estimates for the number of personnel performing security functions, which is not provided for in the reporting requirement. Specifically, USAID estimated the number of personnel for Afghanistan. However, the full extent to which estimates were used is not disclosed in the joint report. Further, the estimates are based on unreliable data. USAID officials explained to us that the estimates were based on data from several sources, including databases used to track aid effectiveness metrics, quarterly reports submitted by its contractors and grantees, and data submitted to us for last year’s report. All of these sources have limitations. For example:

- while contractors and assistance recipients in Iraq report their personnel numbers on a regular basis, a USAID official informed us that only about 70 percent of their contractors and assistance recipients in Afghanistan provide personnel information;
- a USAID official told us they have a limited ability to verify the accuracy or completeness of the data that are reported, especially for Afghanistan, where they operate far more projects than in Iraq;
- the USAID official responsible for preparing the joint report raised concerns about possible inconsistent reporting of security personnel that could result in double counting; and
- the data provided to us by USAID for our 2010 report did not include personnel working under several contracts and assistance instruments, such as four cooperative agreements for food security programs in Afghanistan.

USAID officials also told us that the numbers in the joint report do not include personnel working under certain support service contracts, such as facilities maintenance, or personal services contractors. For example, a USAID official told us that at least 109 contractor personnel supporting the Iraq mission were not counted in the joint report because a decision was made not to include support services and personal services contractors. 
Although all three agencies are required to track the number of personnel killed or wounded while working on contracts and assistance instruments in Iraq or Afghanistan, DOD still does not have a system that reliably tracks killed and wounded contractor personnel. For the joint report, DOD relied on data maintained by the Department of Labor (Labor) regarding Defense Base Act (DBA) claims. While DOD acknowledged in the joint report that claims data from this workers’ compensation program do not provide a true reflection of how many DOD contractor personnel were killed or wounded while working in either country, DOD did not fully disclose the limitations associated with DBA claims data. First, the claims data presented in the joint report are for death and injury claims filed in fiscal year 2010 for all U.S. government contractors and civilians— including those employed by State and USAID—and not just DOD contractors. Further, as we concluded in 2009, DBA claims data do not provide an appropriate basis for determining the number of contractor personnel killed or wounded in either country. Most notably, not all deaths and injuries for which claims are filed under DBA would be regarded as contractors killed or wounded within the context of the NDAA for FY2011 reporting requirement. For example, we previously identified DBA claims filed for occupational injuries and medical conditions such as sprains and appendicitis. Also, Labor officials previously explained to us that injuries to local and third country contractor personnel, in particular, may be underreported. To provide their data on personnel killed and wounded, State and USAID relied on data collected by State bureaus and USAID missions in Iraq and Afghanistan. These data were based on reports submitted to State by contractors and to USAID by contractors and assistance recipients. 
Without alternative sources of data, we could not verify whether State’s and USAID’s data were complete, except to note that State did not include assistance personnel who were killed or wounded. However, there are indications of underreporting by contractors and assistance recipients. For example, a May 2010 report from the USAID Inspector General indicated that not all contractors and assistance recipients in Afghanistan were reporting incidents that result in personnel being injured or killed. In addition, a USAID official in Afghanistan acknowledged that for fiscal year 2010, it was voluntary for contractors and assistance recipients to file serious incident reports, which would provide information on personnel killed or wounded. Earlier this year, USAID began modifying contracts in Afghanistan to require its contractors to file serious incident reports. Officials from the three agencies told us they have used SPOT in some instances to obtain information on individual contracts and contractor employees. For example, an official from State’s Bureau of Diplomatic Security said they have used SPOT during investigations to verify whether the individuals involved were deployed in theater at the time of the incidents being investigated. A USAID contracting officer in Iraq told us that when a security incident involving a contractor employee occurs, she uses SPOT to determine if the individual involved has a letter of authorization, which should provide personal information including whether the individual is authorized to carry a weapon. A senior official with DOD’s CENTCOM Contracting Command in Iraq explained that he used SPOT to obtain information on specific contracts, such as the name of the contracting officer or contracting officer’s representative, in response to questions about contracts that were not awarded or managed by his office. State and DOD officials have also reported using SPOT to better manage contractor personnel. 
For example, DOD officials from the SPOT program management office told us that SPOT has been used in conjunction with information from other systems to identify contractors that should be billed for the use of government services, including medical treatment and dining facilities. Additionally, State Diplomatic Security officials told us they have used SPOT to confirm that contractor personnel are authorized to be in Iraq and determine to what government services those personnel are entitled. DOD and State officials also identified instances of using SPOT data to inform operational planning for contractor support. Officials from the SPOT program management office told us they have received requests from U.S. Forces-Iraq commanders to identify the universe of contractors and contractor capabilities in Iraq to assist with the drawdown of U.S. forces. They also stated that base commanders in Iraq are receiving contractor population reports to obtain insight into which contractors are on their bases. Additionally, officials in the Office of the Deputy Assistant Secretary of Defense for Program Support told us that data from SPOT are being used to help prepare future operational plans. For example, SPOT data have been analyzed to help determine what services contractors have provided and what level of life support the U.S. government has provided to them, which can aid combatant commanders in developing operational plans. State officials also told us that the U.S. Embassy in Iraq has requested SPOT data to help it determine the number of contractors in country and to assist with planning for the future U.S. presence in Iraq once the U.S. military withdraws at the end of this year. However, USAID officials including those we spoke with in Iraq and Afghanistan told us that they do not use SPOT data to manage, oversee, or coordinate contracts aside from obtaining information on specific contractor employees. 
DOD, State, and USAID officials informed us that shortcomings in SPOT data and reporting capabilities limit their ability to use the system in managing, overseeing, and coordinating contracts with work performed in Iraq and Afghanistan. In some cases, officials have relied on other data sources for such purposes. For example, DOD officials with the Contracting Fusion Cell in Iraq told us that because SPOT is designed to track contractor personnel on an individual basis rather than to support the operational management of contractors, they developed a new, separate database containing aggregate-level data on contractor personnel at each base to help manage the drawdown of personnel and equipment from the country. While the new database includes information not available from SPOT, such as information on contractor equipment, some of the basic contract information overlaps with SPOT and was added to the database from sources other than SPOT. Similarly, officials from State’s Bureau of Diplomatic Security told us that SPOT does not provide the level of detail needed to manage their security contractor employees and that they rely on their own data system for the day-to-day management of their contractors. Officials from all three agencies also raised concerns about the reports that can be generated from SPOT. USAID officials in Iraq explained that one reason they do not rely on SPOT to help manage contractors and assistance recipients is that the types of reports they need are not easily available from the system. State officials also indicated that the standard reports available through SPOT do not meet their needs and they have to request ad hoc reports from the SPOT program management office’s help desk. CENTCOM Contracting Command officials in Iraq also told us that for a large data run they cannot obtain data from SPOT in a timely manner, with it taking up to a week to receive the data. 
SPOT program management officials acknowledged that agency personnel are not fully aware of SPOT's reporting capabilities and may not have confidence in the system given its data reliability challenges. As a result, the program management officials are seeking to expand their outreach to potential users of the data, focusing on improving customer service and exploring the development of training on how SPOT data could be used for management and operations, as opposed to the current training that has been focused on entering data into the system. Also, the SPOT program management office told us that it has taken steps to simplify the previously cumbersome process of querying SPOT for contracts awarded by an official's own agency, to allow for better coordination and leveraging of existing contracts within an agency. Staff from the Office of the Senior Contracting Official in Afghanistan told us that they recently began using this query functionality and they expect it to better enable their use of SPOT in responding to future data requests. The agencies' ability to use SPOT for interagency coordination purposes has been limited by the fact that they cannot easily access each other's data. SPOT program management officials told us that SPOT could be used by the agencies to identify and leverage contracts being performed for common services so that agencies could minimize duplication, share price information, and obtain cost savings. However, agency officials are currently not able to access information on other agencies' contracts unless DOD grants them permission to have full access to the information in SPOT. SPOT program management officials informed us that they are developing a separate reporting and analysis functionality to allow users to more easily share, analyze, and use data available in SPOT. However, this functionality is currently being tested and there are no time frames for when it will be available to all users.
While USAID officials agreed that coordination among the agencies is important, they did not share the perspective that the agencies needed access to each other's information in SPOT. They explained that this is partly because interagency coordination before the award of a contract or assistance instrument is occurring without using SPOT. We previously reported that a significant challenge associated with SPOT's implementation was ensuring that Iraqi and Afghan nationals working under contracts and assistance instruments were consistently and accurately entered in SPOT. Last year we reported that local nationals were not always entered into the system because of agency policies as well as practical and technical limitations. For example, many local nationals work at remote locations, which limits agencies' ability to track these personnel and verify the completeness of reported information. Also, DOD, State, and USAID officials have told us that some local national contractors refuse to submit information on their personnel because of safety concerns. Additionally, some information required for SPOT data fields, such as first and last names and dates of birth, may not be known due, in part, to cultural norms specific to each country. The agencies have taken some steps to improve the reliability of the personnel data in SPOT. DOD and State officials informed us that they have increased efforts to validate SPOT data. In DOD's case, this is done, in part, through the SPOT-Plus process, which began in January 2010. This process is used to reconcile contractor personnel numbers in SPOT with the quarterly contractor census and identify information that needs to be updated or entered into SPOT. DOD officials informed us that they will continue comparing SPOT and census data until there is confidence that 85 percent of the personnel reported through the census are reported in SPOT, at which point the plan is to discontinue the census and fully rely on SPOT.
According to DOD officials, their analyses indicate that for some categories of contractor personnel they may have achieved the 85 percent confidence level, but that for other categories—particularly local nationals in Afghanistan—they are still below that level. The officials could not provide an estimate as to when they will discontinue the census. However, they noted that once the 85 percent confidence level is achieved, DOD plans to conduct random samplings to ensure it is maintained. Similarly, State officials informed us that program and contracting officials have begun reviewing SPOT data on a quarterly or even monthly basis in an effort to improve SPOT data entry. Given this emphasis, State officials told us that they are increasingly confident in the reliability of personnel data in SPOT. However, a USAID official responsible for preparing the joint report told us that the agency does not validate SPOT data and does not intend to do so, noting it has experienced high staff turnover in Iraq and Afghanistan and has other reporting priorities. In April 2011, SPOT was modified to address concerns cited by State and USAID officials, as well as by contractors and assistance recipients, that the safety of local nationals could be at risk should SPOT, with its detailed personal information, be compromised. The system now allows users to enter the aggregate number of personnel working under a contract or assistance instrument, rather than requiring personnel to be entered individually with personally identifiable information. This provides a means of counting local nationals working under contracts and assistance instruments who previously were not entered into the system. USAID officials said that while guidance on the use of the aggregate count function has not yet been issued, they have begun entering aggregate data on local nationals in Afghanistan into SPOT. 
In January 2011, State revised its assistance policy to allow grantees with locally hired Iraqi or Afghan personnel to report their aggregate numbers of local nationals into SPOT. State officials told us the modification appears to have satisfied assistance recipients’ concerns, as they are now providing State officials with aggregate numbers for inclusion in SPOT. DOD officials informed us that they will not be issuing guidance regarding the aggregate count function, as DOD’s policy continues to require personnel working under contracts that meet reporting thresholds to be individually entered into SPOT. Additional measures have been undertaken to help address the challenge of tracking local nationals in SPOT. For example, the SPOT program management office developed procedures for establishing unique identification numbers for local nationals who are entered into the system by name but whose personal identifying information does not conform to the required SPOT data fields. Similarly, DOD officials told us they have developed work-arounds for Iraqi and Afghan firms that lack reliable Internet connections to submit their personnel information via templates, which are then uploaded by DOD personnel into SPOT. In an effort to improve the collection of data on personnel working at remote locations, DOD officials informed us that the department is also piloting a handheld device that does not require an Internet connection and can be used to collect information on personnel that is then uploaded into SPOT. In 2009, we recommended that the three agencies develop a joint plan with associated time frames to address SPOT’s limitations, but agencies responded that a plan was not needed as their ongoing coordination efforts were sufficient. However, we concluded last year and our work continues to demonstrate that coordination alone is not sufficient to ensure that statutory requirements are met. 
Specifically, SPOT still cannot be used to reliably track statutorily required contract, assistance instrument, and personnel data as agreed to in the agencies' MOU because of a number of longstanding practical and technical limitations. SPOT program management officials and the agencies have identified plans for further modifications and new guidance needed to address some but not all of these limitations.
SPOT still is not linked with FPDS-NG or other agency systems for obtaining information on contracts and assistance instruments. Consequently, SPOT cannot be used to obtain financial and competition information on contracts and assistance instruments as agreed to in the MOUs. According to the joint report, the link to FPDS-NG to obtain contract information is scheduled to occur in early fiscal year 2012—this functionality was previously planned to be available in 2010. As we reported in 2009, one reason for this delay is that contract numbers, which are the unique identifiers that would be used to match records in SPOT to those in FPDS-NG, are entered into SPOT using different formats. To help resolve this, the SPOT program management office modified SPOT earlier this year to require DOD users to enter contract numbers in a standardized manner that can be matched with FPDS-NG information. SPOT program management officials told us that a similar modification has not been made for State or USAID contracts. Once the link is made between SPOT and FPDS-NG, information from the two systems can only be merged if the contract number has been entered into SPOT. If the contract is not in SPOT, because, for example, no contractor personnel working on that particular contract have been entered, its information cannot be linked with the information in FPDS-NG. Conversely, current information on the contract has to be in FPDS-NG, which does not always occur as we found in our analyses of the information presented in the joint report.
Most notably, officials told us that information on USAID contracts awarded in Afghanistan must still be manually entered into FPDS-NG, which has resulted in known information gaps. USAID is planning to deploy a new system to Afghanistan—already in place in Iraq and other countries—that will automatically upload contract information into FPDS-NG by the end of 2011. Once the link between SPOT and FPDS-NG is established and the necessary data are in both systems, then SPOT could be relied on to provide more complete information on contracts with performance in either country, as opposed to relying only on the FPDS-NG principal place of performance. SPOT program management officials informed us that there are currently no plans to establish links with the State or USAID systems that contain assistance instrument information. Officials stated that, therefore, information on those instruments needs to be manually entered into SPOT.
SPOT does not provide a reliable means of obtaining information on orders and subawards. The statutory requirement to track information on contracts and assistance instruments includes a requirement to track comparable information on task and delivery orders as well as subcontracts and subgrants. However, SPOT does not have a specific data field for this information. Instead, contractors and assistance recipients are instructed by the agencies to enter information on their subawards into a data field designed to track information on task orders. As a result, it has not been possible to obtain accurate counts of orders and subawards using SPOT. SPOT program management officials told us that they expect to address this issue by creating a new subaward data field in a September 2011 SPOT upgrade.
SPOT does not reliably distinguish personnel performing security functions. As discussed in our 2010 report, there are three methods to distinguish personnel performing security functions from others in SPOT.
Each method has limitations and yields different results, none of which are fully consistent with the statutory definition of contractor personnel performing security functions. SPOT program officials acknowledge this limitation but informed us that they have not yet developed a corrective action to ensure that security personnel are consistently and reliably distinguished for statutory tracking and reporting purposes.
SPOT is not being used to track the number of personnel killed and wounded. As we reported last year and as noted in the joint report, contractors and assistance recipients generally have not been recording information on killed or wounded personnel in SPOT. According to the joint report, the SPOT program management office is working with users to explore ways of improving compliance by clarifying the terminology and expanding data fields. For example, there have been questions about whether deaths or injuries resulting from car accidents should be recorded in SPOT or if SPOT should only be used to track those killed or wounded while performing their contractual duties. SPOT program officials informed us that there has been some discussion of expanding the data fields in SPOT to include information like the date of injury or death and details surrounding the incident. However, officials told us these actions are still being discussed internally and no plans are in place to include such changes in upcoming versions of SPOT. Instead, DOD and State officials said they are helping contractors and assistance instrument recipients gain a better understanding of the requirement to report killed or wounded personnel using SPOT. Additionally, State officials told us that they have begun entering information into SPOT on killed and wounded personnel based on information provided by contractors and assistance recipients and anticipate using the data in SPOT to prepare future joint reports.
In 2008, DOD, State, and USAID designated SPOT as their system of record for tracking statutorily required information on contracts and contractor personnel in Iraq and Afghanistan, a designation they reaffirmed in 2010 when the requirement was expanded to include assistance instruments and personnel. Yet the agencies still do not have reliable sources and methods to report on contracts, assistance instruments, and associated personnel in Iraq and Afghanistan. This is evidenced by the fact that the agencies could not reliably use data from SPOT to prepare their first joint report and instead relied on other data sources and methods that had significant limitations. Over the years, we have reported on the limitations associated with SPOT’s implementation and the agencies’ resulting decisions to rely on other methods of collecting and reporting data that have their own shortcomings. We recommended in 2009 that the agencies develop a joint plan with associated time frames to address limitations and ensure SPOT’s implementation to fulfill statutory requirements. The agencies disagreed with the need for the plan, citing ongoing coordination efforts as sufficient. While the agencies’ recent modifications to SPOT help address some limitations, such as those related to tracking local nationals, other limitations persist that undermine SPOT’s ability to fulfill statutory reporting requirements. Further, while agency officials have recognized some benefits of using SPOT to help manage, oversee, and coordinate contracts, assistance instruments, and associated personnel, their ability to do so has been hindered by SPOT’s shortcomings. 
Our prior recommendation for a joint plan was intended to provide an opportunity for the agencies to work with potential users of the data to better understand their information needs and determine how best to proceed with defined roles, responsibilities, and associated time frames that could help hold the agencies accountable and ensure timely implementation. We were concerned that without such a plan, SPOT’s implementation would continue to languish with the agencies not collecting statutorily required information in a reliable manner, either using SPOT or other sources. Based on our review of the agencies’ joint report, we continue to have this concern and are uncertain when SPOT will be fully implemented and serve as a reliable source of data for management, oversight, and coordination. We have, therefore, concluded that the recommendation from our 2009 report still applies, and we are not making any new recommendations. We requested comments on a draft of this report from DOD, State, and USAID. The three agencies informed us that they had no comments on the draft’s findings or concluding observations. DOD and State provided us with technical comments that we incorporated into the final report, as appropriate. We are sending copies of this report to the Secretary of Defense, the Secretary of State, and the Administrator of the U.S. Agency for International Development, as well as interested congressional committees. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix I. In addition to the contact named above, Johana R. Ayers, Assistant Director; E. 
Brandon Booth; Virginia Chanley; Julia Kennon; Gilbert Kim; Angie Nichols-Friedman; Anne McDonough-Hughes; Margaret McKenna; Robert Swierczek; Michael Rohrback; and Alyssa Weir made key contributions to this report.
DOD, State, and USAID have relied extensively on contracts and assistance instruments (grants and cooperative agreements) for a range of services in Iraq and Afghanistan. In the last 3 years, GAO has provided information on the agencies' contracts, assistance instruments, and associated personnel in the two countries, detailing the agencies' challenges tracking such information. Amendments from the National Defense Authorization Act for Fiscal Year 2011 now require the agencies to provide this and other information to Congress through annual joint reports. They also direct GAO to review those reports. In response, GAO reviewed the first joint report and assessed (1) data and data sources used to prepare the report; (2) use of data from the Synchronized Predeployment and Operational Tracker (SPOT) for management, oversight, and coordination; and (3) efforts to improve SPOT's tracking of statutorily required information. GAO compared data in the joint report to agency data GAO previously obtained, reviewed supporting documentation, and interviewed agency officials, including those in Iraq and Afghanistan, on how the data were collected and used. The Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID) designated SPOT as their system in 2010 for tracking statutorily required information on contracts, assistance instruments, and associated personnel in Iraq and Afghanistan. Citing limitations with SPOT's implementation, the agencies generally relied on data sources other than SPOT to prepare their 2011 joint report. Only State used SPOT but just for its contractor personnel numbers. However, GAO found that regardless of the data source used, the agencies' data had significant limitations, many of which were not fully disclosed. 
For example, while the agencies collectively reported $22.7 billion in fiscal year 2010 obligations, GAO found that they underreported the value of Iraq and Afghanistan contracts and assistance instruments by at least $4 billion, the majority of which was for DOD contracts. In addition, data presented in the joint report on personnel, including those performing security functions, are of limited reliability because of significant over- and undercounting. For example, DOD did not disclose that its contractor personnel numbers for Afghanistan were overreported for most of the reporting period because of double counting. Additionally, despite the reporting requirement, State did not provide information on its assistance instruments or the number of personnel working under them. As a result of such limitations, data presented in the joint report should not be used to draw conclusions or identify trends over time. DOD, State, and USAID have used SPOT to a limited extent, primarily to manage and oversee individual contracts and personnel. Agency officials cited instances of using SPOT to help identify contractors that should be billed for the use of government services, including medical treatment and dining facilities. State and DOD officials also identified instances of using SPOT to help inform operational planning, such as preparing for the drawdown of U.S. forces in Iraq. Officials from the three agencies indicated that shortcomings in data and reporting capabilities have limited their use of SPOT and, in some cases, led them to rely on other data systems to help manage and oversee contracts and assistance instruments. Further, the agencies cannot readily access each other's data in SPOT, which limits interagency coordination opportunities. Recent efforts have been made to improve SPOT's tracking of contractor and assistance personnel.
SPOT now allows users to enter aggregate personnel counts rather than individual personal information, which may overcome resistance to using the system based on security concerns. In addition, DOD and State report increased efforts to validate personnel data in SPOT. However, practical and technical challenges continue to affect SPOT's ability to track other statutorily required data. For example, SPOT cannot be used to reliably distinguish personnel performing security functions from other contractors. Also, while SPOT has the capability to record when personnel have been killed or wounded, such information has not been regularly updated. The agencies have identified the need for further modifications and new guidance to address some but not all of these limitations. It is unclear when SPOT will serve as a reliable source of data to meet statutory requirements and be used by the agencies for management, oversight, and coordination. As a result, the agencies still do not have reliable sources and methods to report on contracts, assistance instruments, and associated personnel in Iraq and Afghanistan. In 2009, GAO recommended that DOD, State, and USAID develop a plan for addressing SPOT's limitations. The agencies disagreed, citing ongoing coordination as sufficient. GAO continues to believe such a plan is needed and is not making new recommendations.
Our analysis of initial estimates of Recovery Act spending provided by the Congressional Budget Office (CBO) suggested that about $49 billion would be paid out to states and localities by the federal government in fiscal year 2009, which runs through September 30. However, our analysis of actual federal outlays reported on www.recovery.gov at the time of our bimonthly review indicated that in the 4 months since enactment, the federal Treasury had paid out approximately $29 billion to states and localities, which was about 60 percent of the payments estimated for fiscal year 2009. Since the release of our July report, an additional $16 billion in Recovery Act funds has been paid out to states and localities, bringing the total to almost $45 billion as of August 28, 2009. Although this pace of spending may not continue for the remainder of the fiscal year, at present spending is slightly ahead of the original estimates. Figure 1 shows the original estimate of federal outlays to states and localities under the Recovery Act compared with actual federal outlays as of August 28, 2009, as reported by federal agencies on www.recovery.gov. More than three quarters of the $45 billion in federal outlays has been provided through the increased Federal Medical Assistance Percentage (FMAP) grant awards and the State Fiscal Stabilization Fund administered by the Department of Education. According to the Office of Management and Budget (OMB), an estimated $149 billion in Recovery Act funding will be obligated to states and localities in fiscal year 2009. Our work for our July bimonthly report focused on nine federal programs, selected primarily because they have begun disbursing funds to states and include programs with significant amounts of Recovery Act funds, programs receiving significant increases in funding, and new programs.
Recovery Act funding of some of these programs is intended for further disbursement to localities. Together, these nine programs are estimated to account for approximately 87 percent of federal Recovery Act outlays to states and localities in fiscal year 2009. Figure 2 shows the distribution by program of anticipated federal Recovery Act spending in fiscal year 2009 to states and localities. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, CMS made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. For the third quarter of fiscal year 2009, the increases in FMAP for the 16 states and the District of Columbia compared with the original fiscal year 2009 levels are estimated to range from 6.2 percentage points in Iowa to 12.24 percentage points in Florida, with the FMAP increase averaging just over 10 percentage points. When compared with the first two quarters of fiscal year 2009, the FMAP in the third quarter of fiscal year 2009 increased in 12 of the 16 states and the District. From October 2007 to May 2009, overall Medicaid enrollment in the 16 states and the District increased by 7 percent. In addition, each of the states and the District experienced an enrollment increase during this period, with most programs experiencing an increase of 5 percent to 10 percent. However, the percentage increase in enrollment varied widely, ranging from just under 3 percent in California to nearly 20 percent in Colorado. Since our July report and with regard to the states’ receipt of the increased FMAP, all 16 states and the District had drawn down increased FMAP grant awards of just over $19.6 billion for the period of October 1, 2008, through September 4, 2009, which amounted to almost 84 percent of funds available. 
In addition, except for the initial weeks that increased FMAP funds were available, the weekly rate at which the sample states and the District have drawn down these funds has remained relatively constant. States reported that they are using or are planning to use the funds that have become freed up as a result of increased FMAP for a variety of purposes. Most commonly, states reported that they are using or planning to use freed-up funds to cover their increased Medicaid caseload, to maintain current benefits and eligibility levels, and to help finance their respective state budgets. Several states noted that given the poor economic climate in their respective states, these funds were critical in their efforts to maintain Medicaid coverage at current levels. While officials from several states spoke positively about CMS’s guidance related to FMAP requirements, over half of the states and the District reported they wanted CMS to provide additional guidance regarding how they report monthly on increased FMAP spending and whether certain programmatic changes would affect their eligibility for funds. For example, Medicaid officials from several states told us they were hesitant to implement minor programmatic changes, such as changes to prior authorization requirements, pregnancy verifications, or ongoing rate changes, out of concern that doing so would jeopardize their eligibility for increased FMAP. In addition, at least three states raised concerns that glitches related to new or updated information systems used to generate provider payments could affect their eligibility for these funds. Due to the variability of state operations, funding processes, and political structures, CMS has worked with states on a case-by-case basis to discuss and resolve issues that arise. 
Specifically, communications between CMS and several states indicate efforts to clarify issues related to the contributions to the state share of Medicaid spending by political subdivisions or to rainy-day funds. Since we issued our July report, on July 30, 2009, CMS published new guidance for states regarding the prompt payment requirement. The guidance describes the method states should use to calculate the days during a quarter on which states have either met or not met the prompt payment requirement in the Medicaid statute and how a state could obtain a waiver from the requirement. More recently, CMS published new guidance clarifying the maintenance of eligibility requirements under the Recovery Act, which includes a discussion of programmatic changes that could affect states’ eligibility for the increased FMAP. The Recovery Act provides funding to the states for restoration, repair, and construction of highways and other eligible surface transportation projects. The act requires that 30 percent of these funds be suballocated, primarily based on population, for metropolitan, regional, and local use. In March 2009, $26.7 billion was apportioned to all 50 states and the District of Columbia (District) for highway infrastructure and other eligible projects. More recently, as of September 1, 2009, $18 billion of the funds had been obligated for almost 7,000 projects nationwide, and approximately $11 billion had been obligated for almost 3,800 projects in the 16 states and the District that are the focus of GAO’s review. Almost half of Recovery Act highway obligations nationwide have been for pavement improvements. Specifically, $8.7 billion of the $18 billion obligated nationwide is being used for projects such as reconstructing or rehabilitating deteriorated roads. 
Many state officials told us they selected a large percentage of resurfacing and other pavement improvement projects because they did not require extensive environmental clearances, were quick to design, could be quickly obligated and bid, could employ people quickly, and could be completed within 3 years. In addition, $3 billion, or about 16 percent of Recovery Act funds nationally, has been obligated for pavement-widening projects and around 10 percent has been obligated for the replacement, improvement or rehabilitation of bridges. As of September 1, 2009, $1.4 billion had been reimbursed nationwide by the Federal Highway Administration (FHWA) and $604 million had been reimbursed in the 16 states and the District. States are just beginning to get projects awarded so that contractors can begin work, and U.S. Department of Transportation (DOT) officials told us that although funding has been obligated for almost 7,000 projects, it may be months before states can request reimbursement. Once contractors mobilize and begin work, states make payments to these contractors for completed work, and may request reimbursement from FHWA. FHWA told us that once funds are obligated for a project, it may take 2 or more months for a state to bid and award the work to a contractor and have work begin. According to state officials, because an increasing number of contractors are looking for work, bids for Recovery Act contracts have come in under estimates. State officials told us that bids for the first Recovery Act contracts were ranging from around 5 percent to 30 percent below the estimated cost. Several state officials told us they expect this trend to continue until the economy substantially improves and contractors begin taking on enough other work. Funds appropriated for highway infrastructure spending must be used as required by the Recovery Act. 
States are required to do the following:

Ensure that 50 percent of apportioned Recovery Act funds are obligated within 120 days of apportionment (before June 30, 2009) and that the remaining apportioned funds are obligated within 1 year. The 50 percent rule applies only to funds apportioned to the state and not to the 30 percent of funds required by the Recovery Act to be suballocated, primarily based on population, for metropolitan, regional, and local use. The Secretary of Transportation is to withdraw and redistribute to other states any amount that is not obligated within these time frames.

Give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. These areas are defined by the Public Works and Economic Development Act of 1965, as amended. According to the act, to qualify as economically distressed, an area must have (1) a per capita income that is 80 percent or less of the national average or (2) an unemployment rate that is, for the most recent 24-month period for which data are available, at least 1 percent greater than the national average. For areas that do not meet one of these two criteria, the Secretary of Commerce has the authority to determine that an area has experienced or is about to experience a “special need” arising from actual or threatened severe unemployment or economic adjustment problems.

Certify that the state will maintain the level of spending for the types of transportation projects funded by the Recovery Act that it planned to spend as of the day the Recovery Act was enacted. As part of this certification, the governor of each state is required to identify the amount of funds the state plans to expend from state sources from February 17, 2009, through September 30, 2010.

All states have met the first Recovery Act requirement that 50 percent of their apportioned funds be obligated within 120 days. 
Of the $18.7 billion nationally that is subject to this provision, 75 percent was obligated as of September 1, 2009. The second Recovery Act requirement is to give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. While officials from almost all of the states we reviewed said that they considered project readiness, including the 3-year completion requirement, when making project selections, there was substantial variation in the extent to which states prioritized projects in economically distressed areas and how they identified these areas. Many states based their project selections on other factors and only later identified whether these projects were in economically distressed areas. We reported in July that DOT and FHWA had not provided clear guidance—while DOT officials emphasized the importance of giving priority to these areas, the guidance did not define what giving priority meant, and thus did not ensure that the act’s priority provisions would be consistently applied. We also found instances of states developing their own eligibility requirements for economically distressed areas using data or criteria not specified in the Public Works and Economic Development Act. For example, one state identified these areas based in part on home foreclosure rates—data not specified in the Public Works Act. In each of the cases we identified, the states informed us that FHWA approved the state’s use of alternative criteria. However, FHWA did not consult with or seek the approval of the Department of Commerce, and it was not clear under what authority FHWA approved these criteria. As a result, we recommended that the Secretary of Transportation, in consultation with the Secretary of Commerce, develop (1) clear guidance on identifying and giving priority to economically distressed areas and (2) more consistent procedures for FHWA to use in reviewing and approving states’ criteria for designating distressed areas. 
In response to the recommendation in our July report, FHWA, in consultation with the Department of Commerce, developed guidance that addresses our recommendation. In particular, FHWA’s August 2009 guidance directs states to give priority to projects that are located in an economically distressed area and can be completed within the 3-year time frame over other projects. In the guidance, FHWA also directs states to maintain information as to how they identified, vetted, examined, and selected projects located in economically distressed areas. In addition, FHWA’s guidance sets out criteria that states may use to identify economically distressed areas based on “special need.” These criteria align closely with criteria used by the Department of Commerce’s Economic Development Administration (EDA) in designating special needs areas in its own grant programs, including factors such as actual or threatened business closures (including job loss thresholds), military base closures, and natural disasters or emergencies. According to EDA, while the agency traditionally approves special needs designations on a case-by-case basis for its own grant program, it does not have the resources to do so for the purpose of Recovery Act highway funding. Rather, in supplemental guidance issued August 24, 2009, FHWA required states to document their reliance on “special need” criteria and provide the documentation to FHWA Division Offices, thereby making the designation of new “special need” areas for Recovery Act highway funding “self-executing” by the states, meaning the states will apply the criteria laid out in the guidance to identify these areas. We plan to continue to monitor FHWA’s and the states’ implementation of the economically distressed area requirement, including the states’ application of the special needs criteria, in our future reviews. 
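For illustration, the two statutory tests for an economically distressed area discussed above can be expressed as a simple check. This is a sketch of our own, with hypothetical figures and invented names; it does not model the separate "special need" determinations described in the guidance.

```python
def is_economically_distressed(per_capita_income, national_per_capita_income,
                               area_unemployment_24mo, national_unemployment_24mo):
    """Illustrative check of the two Public Works Act criteria.

    An area qualifies if (1) its per capita income is 80 percent or less of
    the national average, or (2) its unemployment rate over the most recent
    24-month period is at least 1 percentage point above the national average.
    Areas meeting neither test may still qualify under a "special need"
    determination, which this sketch does not model.
    """
    low_income = per_capita_income <= 0.80 * national_per_capita_income
    high_unemployment = area_unemployment_24mo >= national_unemployment_24mo + 1.0
    return low_income or high_unemployment

# Hypothetical area at 75 percent of national per capita income:
# qualifies under the income test alone.
print(is_economically_distressed(30_000, 40_000, 5.0, 5.0))  # True

# Hypothetical area meeting neither test.
print(is_economically_distressed(39_000, 40_000, 5.5, 5.0))  # False
```

Note that the two tests are independent: an area at the income threshold exactly (80 percent of the national average) still qualifies, as does an area whose unemployment rate exceeds the national average by exactly 1 point.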
Finally, the states are required to certify that they will maintain the level of state effort for programs covered by the Recovery Act. With one exception, the states have completed these certifications, but they face challenges. Maintaining a state’s level of effort can be particularly important in the highway program. We have found that the preponderance of evidence suggests that increasing federal highway funds influences states and localities to substitute federal funds for funds they otherwise would have spent on highways. As we previously reported, substitution makes it difficult to target an economic stimulus package so that it results in a dollar-for-dollar increase in infrastructure investment. Most states revised the initial certifications they submitted to DOT. As we reported in April, many states submitted explanatory certifications—such as stating that the certification was based on the “best information available at the time”—or conditional certifications, meaning that the certification was subject to conditions or assumptions, future legislative action, future revenues, or other conditions. On April 22, 2009, the Secretary of Transportation sent a letter to each of the nation’s governors and provided additional guidance, including that conditional and explanatory certifications were not permitted, and gave states the option of amending their certifications by May 22. All states and the District have submitted their certifications. According to DOT officials, the department has concluded that the form of each certification is consistent with the additional guidance. While DOT has concluded that the form of the revised certifications is consistent with the additional guidance, it is evaluating the states’ method of calculating the amounts they planned to expend for the covered programs and the reasonableness of these numbers. 
States face drastic fiscal challenges, and most states are estimating that their fiscal year 2009 and 2010 revenue collections will be well below original projections. In the face of these challenges, some states told us that meeting the maintenance-of-effort requirements over time poses significant challenges. For example, federal and state transportation officials in Illinois told us that to meet its maintenance-of-effort requirements in the face of lower-than-expected fuel tax receipts, the state would have to use general fund or other revenues to cover any shortfall in the level of effort stated in its certification. Mississippi transportation officials are concerned about the possibility of statewide, across-the-board spending cuts in 2010. According to the Mississippi transportation department’s budget director, the agency will try to absorb any budget reductions in 2010 by reducing administrative expenses to maintain the state’s level of effort. The Recovery Act created a State Fiscal Stabilization Fund (SFSF) in part to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. Beginning in March 2009, the Department of Education issued a series of fact sheets, letters, and other guidance to states on the SFSF. 
Specifically, a March fact sheet, the Secretary’s April letter to Governors, and program guidance issued in April and May mention that the purposes of the SFSF include helping stabilize state and local budgets, avoiding reductions in education and other essential services, and ensuring LEAs and public IHEs have resources to “avert cuts and retain teachers and professors.” The documents also link educational progress to economic recovery and growth and identify four principles to guide the distribution and use of Recovery Act funds: (1) spend funds quickly to retain and create jobs; (2) improve student achievement through school improvement and reform; (3) ensure transparency, public reporting, and accountability; and (4) invest one-time Recovery Act funds thoughtfully to avoid unsustainable continuing commitments after the funding expires, known as the “funding cliff.” After meeting assurances to maintain state support for education at least at fiscal year 2006 levels, states are required to use the education stabilization fund to restore state support to the greater of fiscal year 2008 or 2009 levels for elementary and secondary education, public IHEs, and, if applicable, early childhood education programs. States must distribute these funds to school districts using the primary state education formula but maintain discretion in how funds are allocated to public IHEs. If, after restoring state support for education, additional funds remain, the state must allocate those funds to school districts according to the Elementary and Secondary Education Act of 1965 (ESEA), Title I, Part A funding formula. On the other hand, if a state’s education stabilization fund allocation is insufficient to restore state support for education, then a state must allocate funds in proportion to the relative shortfall in state support to public school districts and public IHEs. 
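The restoration rules described above—restore each sector to the greater of fiscal year 2008 or 2009 support levels, pass any remainder to school districts under the Title I formula, and allocate in proportion to relative shortfalls when the state's allocation is insufficient—can be sketched as follows. The function, sector keys, and figures are illustrative assumptions of ours, not Education's actual methodology.

```python
def allocate_stabilization_funds(allocation, current_support, fy2008, fy2009):
    """Illustrative sketch of the education stabilization restoration rule.

    `current_support`, `fy2008`, and `fy2009` map each sector (e.g. "k12"
    for school districts, "ihe" for public institutions of higher education)
    to state support levels; `allocation` is the state's education
    stabilization fund amount.
    """
    # Restoration target per sector: the greater of FY2008 or FY2009 support.
    targets = {k: max(fy2008[k], fy2009[k]) for k in current_support}
    shortfalls = {k: max(0, targets[k] - current_support[k]) for k in current_support}
    total_shortfall = sum(shortfalls.values())

    if allocation >= total_shortfall:
        # Fully restore each sector; any remainder goes to school districts
        # under the ESEA Title I, Part A formula (not modeled here).
        restored = dict(shortfalls)
        remainder_for_title1 = allocation - total_shortfall
    else:
        # Insufficient funds: allocate in proportion to each sector's
        # share of the total shortfall.
        restored = {k: allocation * shortfalls[k] / total_shortfall
                    for k in shortfalls}
        remainder_for_title1 = 0
    return restored, remainder_for_title1
```

With hypothetical shortfalls of 50 for each sector, an allocation of 150 fully restores both and leaves 50 for Title I distribution, while an allocation of 60 is split 30 and 30 in proportion to the equal shortfalls.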
Education stabilization funds must be allocated to school districts and public IHEs and cannot be retained at the state level. Once education stabilization funds are awarded to school districts and public IHEs, they have considerable flexibility over how they use those funds. School districts are allowed to use education stabilization funds for any allowable purpose under ESEA, the Individuals with Disabilities Education Act (IDEA), the Adult Education and Family Literacy Act, or the Carl D. Perkins Career and Technical Education Act of 2006 (Perkins Act), subject to some prohibitions on using funds for, among other things, sports facilities and vehicles. In particular, Education’s guidance states that because allowable uses under the Impact Aid provisions of ESEA are broad, school districts have discretion to use education stabilization funds for a broad range of things, such as salaries of teachers, administrators, and support staff, and purchases of textbooks, computers, and other equipment. The Recovery Act allows public IHEs to use education stabilization funds in such a way as to mitigate the need to raise tuition and fees, as well as for the modernization, renovation, and repair of facilities, subject to certain limitations. However, the Recovery Act prohibits public IHEs from using education stabilization funds for such things as increasing endowments; modernizing, renovating, or repairing sports facilities; or maintaining equipment. Education’s SFSF guidance expressly prohibits states from placing restrictions on LEAs’ use of education stabilization funds, beyond those in the law, but allows states some discretion in placing limits on how IHEs may use these funds. The SFSF provides states and school districts with additional flexibility, subject to certain conditions, to help them address fiscal challenges. 
For example, the Secretary of Education is granted authority to permit waivers of state maintenance-of-effort (MOE) requirements if a state certifies that state education spending will not decrease as a percentage of total state revenues. Education issued guidance on the MOE requirement, including the waiver provision, on May 1, 2009. Also, the Secretary may permit a state or school district to treat education stabilization funds as nonfederal funds for the purpose of meeting MOE requirements for any program administered by Education, subject to certain conditions. States have broad discretion over how the $8.8 billion in the SFSF government services fund is used. The Recovery Act provides that these funds must be used for public safety and other government services and that these services may include assistance for education, as well as modernization, renovation, and repairs of public schools or IHEs. On April 1, 2009, Education made at least 67 percent of each state’s SFSF funds available, subject to the receipt of an application containing state assurances, information on state levels of support for education and estimates of restoration amounts, and baseline data demonstrating state status on each of the four education reform assurances. If a state could not certify that it would meet the MOE requirement, Education required it to certify that it will meet requirements for receiving a waiver—that is, that education spending would not decrease relative to total state revenues. In determining state level of support for elementary and secondary education, Education required states to use their primary formula for distributing funds to school districts but also allowed states some flexibility in broadening this definition. For IHEs, states have some discretion in how they establish the state level of support, with the provision that they cannot include support for capital projects, research and development, or amounts paid in tuition and fees by students. 
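The waiver condition noted above—that education spending not decrease as a percentage of total state revenues—amounts to a simple ratio comparison. The following sketch uses hypothetical figures and names of our own to illustrate it:

```python
def eligible_for_moe_waiver(prior_edu_spending, prior_revenues,
                            planned_edu_spending, planned_revenues):
    """Illustrative version of the MOE waiver condition: education spending
    must not decrease as a share of total state revenues. The variable names
    and the simple two-period comparison are our own simplification."""
    prior_share = prior_edu_spending / prior_revenues
    planned_share = planned_edu_spending / planned_revenues
    return planned_share >= prior_share

# Hypothetical state: revenues fall from 100 to 90 while education spending
# falls from 40 to 36 -- the education share stays at 40 percent, so the
# state could still certify for a waiver.
print(eligible_for_moe_waiver(40, 100, 36, 90))  # True
```

Under this test, a state whose revenues and education spending both fall by the same proportion remains eligible, which is why the waiver matters for states with shrinking budgets.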
In order to meet statutory requirements for states to establish their current status regarding each of the four required programmatic assurances, Education provided each state with the option of using baseline data Education had identified or providing another source of baseline data. Some of the data provided by Education were derived from self-reported data submitted annually by the states to Education as part of their Consolidated State Performance Reports (CSPR), but Education also relied on data from third parties, including the Data Quality Campaign (DQC), the National Center for Educational Achievement (NCEA), and Achieve. Education has reviewed applications for completeness as they arrive and has awarded states their funds once it determined that all assurances and required information had been submitted. Education set the application deadline for July 1, 2009. On June 24, 2009, Education issued guidance to states informing them they must amend their applications if there are changes to the reported levels of state support that were used to determine maintenance of effort or to calculate restoration amounts. As an update to our July report, as of September 1, 2009, the District and 15 of the states covered by our review had received approval from Education for their initial SFSF funding applications. Pennsylvania had submitted an application to Education, but it had not yet been approved. As of August 28, 2009, Education had made $21 billion in SFSF grants available to the 15 states and the District, and 14 of these states had drawn down SFSF Recovery Act funds—in total, over $7.7 billion, or about 36 percent of available funds. Three of the selected states—Florida, Massachusetts, and New Jersey—said they would not meet the maintenance-of-effort requirements but would meet the eligibility requirements for a waiver and would apply for one. 
Most of the states’ applications show that they plan to provide the majority of education stabilization funds to LEAs, with the remainder of funds going to IHEs. Several states and the District of Columbia estimated in their application that they would have funds remaining beyond those that would be used to restore education spending in fiscal years 2009 and 2010. These funds can be used to restore education spending in fiscal year 2011, with any amount left over to be distributed to LEAs. States have flexibility in how they allocate education stabilization funds among IHEs but, once they establish their state funding formula, not in how they allocate the funds among LEAs. Florida and Mississippi allocated funds among their IHEs, including universities and community colleges, using formulas based on factors such as enrollment levels. Other states allocated SFSF funds taking into consideration the budget conditions of the IHEs. Regarding LEAs, most states planned to allocate funds based on states’ primary funding formulae. Many states are using a state formula based on student enrollment weighted by characteristics of students and LEAs. For example, Colorado’s formula accounts for the number of students at risk while the formula used by the District allocates funds to LEAs using weights for each student based on the relative cost of educating students with specific characteristics. For example, an official from Washington, D.C. Public Schools said a student who is an English language learner may cost more to educate than a similar student who is fluent in English. States may use the government services portion of SFSF for education but have discretion to use the funds for a variety of purposes. Officials from Florida, Illinois, New Jersey, and New York reported that their states plan to use some or most of their government services funds for educational purposes. Other states are applying the funds to public safety. 
For example, according to state officials, California is using the government services fund for its corrections system, and Georgia will use the funds for salaries of state troopers and staff of forensic laboratories and state prisons. Officials in many school districts told us that SFSF funds would help offset state budget cuts and would be used to maintain current levels of education funding. However, many school district officials also reported that using SFSF funds for education reforms was challenging given other, more pressing fiscal needs. Although their plans are generally not finalized, officials in many school districts we visited reported that their districts are preparing to use SFSF funds to prevent teacher layoffs, hire new teachers, and provide professional development programs. Most school districts will use the funding to help retain jobs that would have been cut without SFSF funding. For example, Miami-Dade officials estimate that the stabilization funds will help them save nearly 2,000 teaching positions. State and school district officials in eight states we visited (California, Colorado, Florida, Georgia, Massachusetts, Michigan, New York, and North Carolina) also reported that SFSF funding will allow their state to retain positions, including teaching positions that would have been eliminated without the funding. In the Richmond County School System in Georgia, officials noted they plan to retain positions that support its schools, such as teachers, paraprofessionals, nurses, media specialists, and guidance counselors. Local officials in Mississippi reported that budget-related hiring freezes had hindered their ability to hire new staff, but because of SFSF funding, they now plan to hire. In addition, local officials in a few states told us they plan to use the funding to support teachers. 
For example, officials in the Waterloo Community and Ottumwa Community School Districts in Iowa, as well as officials from Miami-Dade County in Florida, cited professional development as a potential use of funding to support teachers. Although school districts are preventing layoffs and continuing to provide educational services with the SFSF funding, most did not indicate they would use these funds to pursue educational reform. School district officials cited a number of barriers, including budget shortfalls, lack of guidance from states, and insufficient planning time. In addition to retaining and creating jobs, school districts have considerable flexibility to use these resources over the next 2 years to advance reforms that could have long-term impact. However, a few school district officials reported that addressing reform efforts was beyond their capacity when faced with teacher layoffs and deep budget cuts. In Flint, Michigan, officials reported that SFSF funds will be used to cope with budget deficits rather than to advance programs, such as early childhood education or repairing public school facilities. According to the Superintendent of Flint Community Schools, the infrastructure in Flint is deteriorating, and no new school buildings have been built in over 30 years. Flint officials said they would like to use SFSF funds for renovating buildings and other programs, but the SFSF funds are needed to maintain current education programs. Officials in most school districts we visited reported having inadequate guidance from their state to plan and report on the use of SFSF funding, making reform efforts more difficult to pursue. Without adequate guidance and time for planning, school district officials told us that preparing for the funds was difficult. 
At the time of our visits, several school districts were unaware of their funding amounts, which, officials in two school districts said, created additional challenges in planning for the 2009-2010 school year. One charter school we visited in North Carolina reported that layoffs will be required unless the state notifies it soon how much SFSF funding it will receive. State officials in North Carolina, as well as in several other states, told us they are waiting for the state legislature to pass the state budget before finalizing SFSF funding amounts for school districts. Although many IHEs had not finalized plans for using SFSF funds, the most common expected use for the funds at the IHEs we visited was to pay salaries of IHE faculty and staff. Officials at most of the IHEs we visited told us that, due to budget cuts, their institutions would have faced difficult reductions in faculty and staff if they were not receiving SFSF funds. Other IHEs expected to use SFSF funds in the future to pay salaries of certain employees during the year. Several IHEs we visited are considering other uses for SFSF funds. Officials at the Borough of Manhattan Community College in New York City want to use some of their SFSF funds to buy energy-saving light bulbs and to make improvements in the college’s very limited space, such as by creating tutoring areas and study lounges. Northwest Mississippi Community College wants to use some of the funds to increase e-learning capacity to serve the institution’s rapidly increasing number of students. Several other IHEs plan to use some of the SFSF funds for student financial aid. Because many IHEs expect to use SFSF funds to pay salaries of current employees that they likely would not have been able to pay without the SFSF funds, IHE officials said that SFSF funds will save jobs. 
Officials at several IHEs noted that this will have a positive impact on the educational environment, such as by preventing increases in class size and enabling the institutions to offer the classes that students need to graduate. In addition to preserving existing jobs, some IHEs anticipate creating jobs with SFSF funds. Besides saving and creating jobs at IHEs, officials noted that SFSF monies will have an indirect impact on jobs in the community. IHE officials also noted that SFSF funds will indirectly improve employment because some faculty being paid with the funds will help unemployed workers develop new skills, including skills in fields, such as health care, that have a high demand for trained workers. State and IHE officials also believe that SFSF funds are reducing the size of tuition and fee increases. Our report provides additional details on the use of Recovery Act funds for these three programs in the 16 selected states and the District. In addition to Medicaid FMAP, Highway Infrastructure Investment, and SFSF, we also reviewed six other programs receiving Recovery Act funds. These programs are: Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA); Parts B and C of the Individuals with Disabilities Education Act (IDEA); the Workforce Investment Act (WIA) Youth Program; the Public Housing Capital Fund; the Edward Byrne Memorial Justice Assistance Grant (JAG) Program; and the Weatherization Assistance Program. Additional detail regarding the states’ and localities’ use of funds for these programs is available in the full report, GAO-09-829. Individual state summaries for the 16 selected states and the District are accessible through GAO’s recovery page at www.gao.gov/recovery and in an electronic supplement, GAO-09-830SP. State revenue continued to decline, and states used Recovery Act funding to reduce some of their planned budget cuts and tax increases to close current and anticipated budget shortfalls for fiscal years 2009 and 2010. 
Of the 16 states and the District, 15 estimate fiscal year 2009 general fund revenue collections will be less than in the previous fiscal year. For two of the selected states—Iowa and North Carolina—revenues were lower than projected but not less than in the previous fiscal year. As shown in figure 3, data from the Bureau of Economic Analysis (BEA) also indicate that the rate of state and local revenue growth has generally declined since the second quarter of 2005 and was negative in the fourth quarter of 2008 and the first quarter of 2009. Officials in most of the selected states and the District expect these revenue trends to contribute to budget gaps (estimated revenues less than estimated disbursements) anticipated for future fiscal years. All of the 16 states and the District forecasted budget gaps for state fiscal year 2009-2010 before budget actions were taken. Consistent with one of the purposes of the act, states’ use of Recovery Act funds to stabilize their budgets helped them minimize and avoid reductions in services as well as tax increases. States took a number of actions to balance their budgets in fiscal year 2009-2010, including staff layoffs, furloughs, and program cuts. The use of Recovery Act funds affected the size and scope of some states’ budgeting decisions, and many of the selected states reported they would have had to make further cuts to services and programs without the receipt of Recovery Act funds. For example, California, Colorado, Georgia, Illinois, Massachusetts, Michigan, New York, and Pennsylvania budget officials all stated that current or future budget cuts would have been deeper without the receipt of Recovery Act funds. Recovery Act funds helped cushion the impact of states’ planned budget actions, but officials also cautioned that current revenue estimates indicate that additional state actions will be needed to balance future-year budgets.
Future actions to stabilize state budgets will require continued awareness of the maintenance-of-effort (MOE) requirements for some federal programs funded by the Recovery Act. For example, Massachusetts officials expressed concerns regarding MOE requirements attached to federal programs, including those funded through the Recovery Act, as future across-the-board spending reductions could pose challenges for maintaining spending levels in these programs. State officials said that MOE requirements that require maintaining spending levels based upon prior-year fixed dollar amounts will pose more of a challenge than upholding spending levels based upon a percentage of program spending relative to total state budget expenditures. In addition, some states also reported accelerating their use of Recovery Act funds to stabilize deteriorating budgets. Many states, such as Colorado, Florida, Georgia, Iowa, New Jersey, and North Carolina, also reported tapping into their reserve or rainy-day funds in order to balance their budgets. In most cases, the receipt of Recovery Act funds did not prevent the selected states from tapping into their reserve funds, but a few states reported that without the receipt of Recovery Act funds, withdrawals from reserve funds would have been greater. Officials from Georgia stated that although they have already used reserve funds to balance their fiscal year 2009 and 2010 budgets, they may use additional reserve funds if, at the end of fiscal year 2009, revenues are lower than the most recent projections. In contrast, New York officials stated they were able to avoid tapping into the state’s reserve funds due to the funds made available as a result of the increased Medicaid FMAP funding provided by the Recovery Act. States’ approaches to developing exit strategies for the use of Recovery Act funds reflect the balanced-budget requirements in place for all of our selected states and the District.
Budget officials referred to the temporary nature of the funds and fiscal challenges expected to extend beyond the timing of funds provided by the Recovery Act. Officials discussed a desire to avoid what they referred to as the “cliff effect” associated with the dates when Recovery Act funding ends for various federal programs. Budget officials in some of the selected states are preparing for the end of Recovery Act funding by using funds for nonrecurring expenditures and hiring limited-term positions to avoid creating long-term liabilities. A few states reported that although they are developing preliminary plans for the phasing out of Recovery Act funds, further planning has been delayed until revenue and expenditure projections are finalized. Given that Recovery Act funds are to be distributed quickly, effective internal controls over the use of funds are critical to help ensure effective and efficient use of resources, compliance with laws and regulations, and accountability over Recovery Act programs. Internal controls include management and program policies, procedures, and guidance that help ensure effective and efficient use of resources; compliance with laws and regulations; prevention and detection of fraud, waste, and abuse; and the reliability of financial reporting. Management is responsible for the design and implementation of internal controls, and the states in our sample have a range of approaches for implementing their internal controls. Some states have internal control requirements in their state statutes, and others have undertaken internal control programs as management initiatives. In our sample, 7 states—California, Colorado, Florida, Michigan, Mississippi, New York, and North Carolina—have statutory requirements for internal control programs and activities. An additional 9 states—Arizona, Georgia, Illinois, Iowa, Massachusetts, New Jersey, Ohio, Pennsylvania, and Texas—have undertaken various internal control programs.
In addition, the District of Columbia has taken limited actions related to its internal control program. An effective internal control program helps manage change in response to shifting environments and evolving demands and priorities, such as changes related to implementing the Recovery Act. Risk assessment and monitoring are key elements of internal controls, and the states in our sample and the District have undertaken a variety of actions in these areas. Risk assessment involves performing comprehensive reviews and analyses of program operations to determine if internal and external risks exist and to evaluate the nature and extent of the risks that have been identified. Approaches to risk analysis can vary across organizations because of differences in missions and the methodologies used to qualitatively and quantitatively assign risk levels. Monitoring activities include the systematic process of reviewing the effectiveness of the operation of the internal control system. These activities are conducted by management, oversight entities, and internal and external auditors. Monitoring enables stakeholders to determine whether the internal control system continues to operate effectively over time. Monitoring also provides information and feedback to the risk assessment process. States and localities are responsible for tracking and reporting on Recovery Act funds. OMB has issued guidance to the states and localities that provides for separate identification—“tagging”—of Recovery Act funds so that specific reports can be created and transactions can be specifically identified as Recovery Act funds. The flow of federal funds to the states varies by program; grantor agencies have varied grants management processes; and grants vary substantially in their types, purposes, and administrative requirements. Several states and the District of Columbia have created unique codes for their financial systems in order to tag the Recovery Act funds.
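The "tagging" approach described above can be sketched in a few lines: transactions carry a distinct fund code so Recovery Act money can be reported separately from regular program funds. The fund codes, program names, and amounts below are purely hypothetical, not drawn from any state's actual financial system.

```python
# Illustrative sketch of "tagging" Recovery Act transactions with a unique
# fund code so they can be reported separately. All codes and amounts are
# hypothetical examples, not actual state data.

ARRA_FUND_CODE = "ARRA-2009"  # hypothetical unique code for Recovery Act money

transactions = [
    {"program": "Title I", "fund": ARRA_FUND_CODE, "amount": 250_000},
    {"program": "Title I", "fund": "GEN-FUND",     "amount": 400_000},
    {"program": "IDEA",    "fund": ARRA_FUND_CODE, "amount": 150_000},
]

def recovery_act_total(txns):
    """Sum only the transactions tagged with the Recovery Act fund code."""
    return sum(t["amount"] for t in txns if t["fund"] == ARRA_FUND_CODE)

print(recovery_act_total(transactions))  # 400000
```

Because the tag travels with each transaction, the same ledger can answer both "total program spending" and "Recovery Act spending only" queries without a parallel bookkeeping system.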
Most state and local program officials told us that they will use the same controls and oversight processes they currently apply to other program funds to oversee Recovery Act funds. In addition to being an important accountability mechanism, audit results can provide valuable information for use in management’s risk assessment and monitoring processes. The single audit report, prepared to meet the requirements of the Single Audit Act, as amended (Single Audit Act), is a source of information on internal control and compliance findings and their underlying causes and risks. The report is prepared in accordance with OMB’s implementing guidance in OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, which provides guidance to auditors on selecting federal programs for audit and the related internal control and compliance audit procedures to be performed. In our April 23, 2009, report, we reported that the guidance and criteria in OMB Circular No. A-133 do not adequately address the substantial added risks posed by the new Recovery Act funding. Such risks may result from (1) new government programs, (2) the sudden increase in funds or programs that are new to the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. With some adjustment, the single audit could be an effective oversight tool for Recovery Act programs, addressing risks associated with all three of these factors.
Our April 2009 report on the Recovery Act included recommendations that OMB adjust the current audit process to: focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding; provide for review of the design of internal controls during 2009 over programs to receive Recovery Act funding, before significant expenditures in 2010; and evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. Since April, although OMB has taken several steps in response to our recommendations, these actions do not sufficiently address the risks leading to our recommendations. To focus auditor risk assessments on Recovery Act-funded programs and to provide guidance on internal control reviews for Recovery Act programs, OMB is working within the framework defined by existing mechanisms—Circular No. A-133 and the Compliance Supplement. In this context, OMB has made limited adjustments to its single audit guidance and is planning to issue additional guidance later this month. On May 26, OMB issued the 2009 edition of the Circular A-133 Compliance Supplement. The new Compliance Supplement is intended to focus auditor risk assessment on Recovery Act funding by, among other things, (1) requiring that auditors specifically ask auditees about, and be alert to, expenditure of funds provided by the Recovery Act, and (2) providing an appendix that highlights some areas of the Recovery Act impacting single audits. The appendix adds a requirement that large programs and program clusters with Recovery Act funding cannot be assessed as low-risk for the purposes of program selection without clear documentation of the reasons they are considered low risk. It also calls for recipients to separately identify expenditures for Recovery Act programs on the Schedule of Expenditures of Federal Awards.
OMB issued Compliance Supplement Addendum No. 1 on August 6, 2009 to provide additional guidance for programs (including clusters of programs with expenditures of Recovery Act funds). This addendum modifies the 2009 Compliance Supplement by indicating the new Recovery Act programs and new program clusters, providing new cross-cutting provisions related to the Recovery Act programs, and adding additional compliance requirements for existing programs as a result of Recovery Act funding. OMB Circular A-133 relies heavily on the amount of federal expenditures in a program during a fiscal year and whether findings were reported in the previous period to determine whether detailed compliance testing is required for that year. Although OMB is using clusters for single audit selection to make it more likely that Recovery Act programs would be selected as major programs subject to internal control and compliance testing, the dollar formulas for determining major programs have not changed. This approach may not provide sufficient assurance that smaller, but nonetheless significant, Recovery Act-funded programs would be selected for audit. To provide additional focus on internal control reviews, OMB issued guidance in early August that emphasizes the importance of prompt corrective action by management. This guidance also encourages early communication by auditors to management and those charged with governance of identified control deficiencies related to Recovery Act funding that are, or are likely to be, significant deficiencies or material weaknesses. Such early communication is intended to allow management to expedite corrective action and mitigate the risk of improper expenditure of federal awards. In our July report, we stated that OMB was encouraging communication of weaknesses to management early in the audit process, but did not add requirements for auditors to take these steps. 
This step was insufficient and did not address our concern that internal controls over Recovery Act programs should be reviewed before significant funding is expended. Under the current single audit framework and reporting timelines, the auditor evaluation of internal control and related reporting will occur too late—after significant levels of federal expenditures have already occurred. OMB is currently vetting a proposed pilot project under which a limited number of voluntarily participating auditors performing the single audits for states would communicate in writing internal control deficiencies noted in the single audit within six months of the 2009 fiscal year-end, rather than the nine months required by the Single Audit Act. As currently envisioned, an auditor participating in the pilot would report internal control deficiencies identified in the course of the single audit to state and federal officials within six months of the end of the audited entity’s fiscal year in order to achieve more timely accountability for selected Recovery Act-funded programs. Most states have a June 30 fiscal year-end; consequently, most of the preliminary internal control communications would be due by December 31, 2009. Participating auditors would be required to focus audit procedures on Recovery Act-funded programs in accordance with guidelines prescribed by OMB. OMB would offer to waive Circular A-133’s requirement for risk assessment of smaller programs as an inducement to participate. OMB is moving ahead with the pilot and plans to identify the participating auditors and the programs that will be included by the end of September 2009. GAO believes that, if the pilot is properly implemented and achieves sufficient coverage of Recovery Act-funded programs, it may be effective in addressing concerns about the timeliness of single audit reporting related to internal control weaknesses in Recovery Act programs.
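The two reporting timelines at issue can be sketched with a small date calculation, assuming a June 30 fiscal year-end and deadlines falling at month-end (a simplification of the statutory language; the helper function below is illustrative, not part of any OMB guidance):

```python
# Sketch of the single audit reporting timelines: the Single Audit Act's
# 9-month deadline versus the proposed pilot's 6-month written
# communication of internal control deficiencies.

import calendar
from datetime import date

def month_end_deadline(fiscal_year_end: date, months: int) -> date:
    """Last day of the month falling `months` after the fiscal year-end month."""
    y, m = divmod(fiscal_year_end.month - 1 + months, 12)
    year, month = fiscal_year_end.year + y, m + 1
    return date(year, month, calendar.monthrange(year, month)[1])

fy_end = date(2009, 6, 30)                 # most states' fiscal year-end
print(month_end_deadline(fy_end, 9))       # 2010-03-31 (Single Audit Act)
print(month_end_deadline(fy_end, 6))       # 2009-12-31 (proposed pilot)
```

The three-month difference is the crux of the pilot: deficiencies would be communicated at the end of the calendar year rather than the following spring, after much of a subsequent year's spending has occurred.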
The pilot is, however, still in its early stages, and many surrounding issues are yet to be resolved. It is important to note that the pilot project is dependent on voluntary participation, which could impact OMB’s ability to achieve sufficient scope and coverage for the project to meet its objectives. While OMB has noted the increased responsibilities falling on those responsible for performing single audits, it has not issued any proposals or plans to address this recommendation to date. A recent survey conducted by the staff of the National State Auditors’ Association (NSAA) highlighted the need for relief to overburdened state audit organizations that have experienced staffing reductions and furloughs. In addition, states volunteering to participate in OMB’s proposed pilot program will be granted some workload relief because the auditor will not be required to perform risk assessments of smaller federal programs. Auditors conduct these risk assessments as part of the planning process to identify which federal programs will be subject to detailed internal control and compliance testing. We believe that this step alone will not provide sufficient relief to balance out additional audit requirements for Recovery Act programs. Without action now, audit coverage of Recovery Act programs will not be sufficient to address Recovery Act risks, and the audit reporting that does occur will come after significant expenditures have already been made. Congress is considering a bill that could provide some financial relief to auditors lacking the staff capacity necessary to handle the increased audit responsibilities associated with the Recovery Act. S. 1064, which is currently before this Committee, and its companion bill passed by the House, H.R. 2182, would amend the Recovery Act to provide for enhanced state and local oversight of activities conducted pursuant to the Act.
One key provision of the legislation would allow state and local governments to set aside 0.5 percent of Recovery Act funds, in addition to funds already allocated to administrative expenditures, to conduct planning and oversight. We support these efforts to provide financial support to auditors to meet their responsibilities associated with the Recovery Act. This Committee should be commended for its leadership on this matter. The single audit reporting deadline is too late to provide audit results in time for the audited entity to take action on deficiencies noted in Recovery Act programs. The Single Audit Act requires that recipients submit their single audit reports to the federal government no later than nine months after the end of the period being audited. As a result, an audited entity may not receive the feedback needed to correct an identified internal control or compliance weakness until the latter part of the subsequent fiscal year. For example, states that have a fiscal year-end of June 30 have a reporting deadline of March 31, which leaves program management only 3 months to take corrective action on any audit findings before the end of the subsequent fiscal year. For Recovery Act programs, significant expenditure of funds could occur during the period prior to the audit report being issued. The timing problem is exacerbated by the extensions to the 9-month deadline that are routinely granted by the awarding agencies, consistent with OMB guidance. For example, 13 of the 17 states in our sample have a June 30 fiscal year-end, and 7 of these 13 states requested and received extensions for the March 31, 2009, submission of their fiscal year 2008 reporting package. The Health and Human Services Office of Inspector General (HHS OIG) is the cognizant agency for most of the states, including all of the states selected for review under the Recovery Act.
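The proposed 0.5 percent oversight set-aside is simple arithmetic, but a worked example makes the scale concrete. The award amount below is purely illustrative; the actual amount available would depend on the final legislation and each recipient's funding.

```python
# Hedged sketch of the proposed oversight set-aside in S. 1064 / H.R. 2182:
# 0.5 percent of a recipient's Recovery Act funds, on top of existing
# administrative allowances. The award amount is a hypothetical example.

SET_ASIDE_RATE = 0.005  # 0.5 percent, per the provision described above

def oversight_set_aside(recovery_act_award: float) -> float:
    """Amount a state or local government could reserve for planning and oversight."""
    return recovery_act_award * SET_ASIDE_RATE

print(oversight_set_aside(10_000_000))  # 50000.0
```

So a hypothetical $10 million award would yield $50,000 for planning and oversight, which gives a sense of why the provision is framed as relief for audit organizations rather than a major funding stream.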
According to an HHS OIG official, beginning in May 2009, HHS OIG adopted a policy of no longer approving requests for extensions of the due dates for single audit reporting package submissions. OMB officials have stated that they plan to eliminate extensions of the reporting package deadline, but have not issued any official guidance or memorandum to the agencies, OIGs, or federal award recipients. In order to realize the single audit’s full potential as an effective Recovery Act oversight tool, OMB needs to take additional action to focus auditors’ efforts on areas that can provide the most efficient, and most timely, results. As federal funding of Recovery Act programs accelerates in the next few months, we are particularly concerned that the Single Audit process may not provide the timely accountability and focus needed to assist recipients in making necessary adjustments to internal controls so that they achieve sufficient strength and capacity to provide assurances that the money is being spent as effectively as possible to meet program objectives. As discussed in the previous section, OMB is currently vetting a proposed pilot project under which a limited number of voluntarily participating auditors performing the single audits for states would communicate in writing internal control deficiencies noted in the single audit within six months of the 2009 fiscal year-end, rather than the nine months required by the Single Audit Act. If the pilot is properly implemented and achieves sufficient coverage of Recovery Act-funded programs, it may be effective in addressing concerns about the timeliness of single audit reporting related to internal control weaknesses in Recovery Act programs. As of September 2, 2009, GAO's FraudNet has received 80 Recovery Act-related allegations that were considered credible enough to warrant further review.
Our Forensic Audits and Special Investigations unit is pursuing 8 of these allegations, which include wasteful and improper spending, conflicts of interest, supplanting of Recovery Act funds, and contract fraud. Of the remaining 72 allegations, 12 are pending further review by GAO criminal investigators, and 38 were found not to address waste, fraud, or abuse; lacked specificity; were not Recovery Act-related; or reflected only a disagreement with how Recovery Act funds are being disbursed. We consider these 38 allegations to be resolved, and no further investigation is necessary. An additional 22 allegations were referred to the appropriate agency Inspectors General for further review and investigation. We will continue to monitor these referrals and will inform the Committee when outstanding allegations are resolved. As recipients of Recovery Act funds and as partners with the federal government in achieving Recovery Act goals, states and local units of government are expected to invest Recovery Act funds with a high level of transparency and to be held accountable for results under the Recovery Act. Under the Recovery Act, direct recipients of the funds, including states and localities, are expected to report quarterly on a number of measures, including the use of funds and an estimate of the number of jobs created and the number of jobs retained. These measures are part of the recipient reports required under section 1512(c) of the Recovery Act and will be submitted by recipients starting in October 2009. OMB guidance described recipient reporting requirements under the Recovery Act’s section 1512 as the minimum performance measures that must be collected, leaving it to federal agencies to determine additional information that would be required for oversight of individual programs funded by the Recovery Act, such as the Department of Energy Weatherization Assistance Program and the Department of Justice Edward Byrne Memorial Justice Assistance Grant (JAG) Program.
In general, states are adapting information systems, issuing guidance, and beginning to collect data on jobs created and jobs retained, but questions remain about how to count jobs and measure performance under Recovery Act-funded programs. Over the last several months, OMB met regularly with state and local officials, federal agencies, and others to gather input on the reporting requirements and implementation guidance. OMB also worked with the Recovery Accountability and Transparency Board to design a nationwide data collection system that will reduce information reporting burdens on recipients by simplifying reporting instructions and providing a user-friendly mechanism for submitting required data. OMB will be testing this system in July. In response to requests for more guidance on the recipient reporting process and required data, OMB, after soliciting responses from an array of stakeholders, issued additional implementing guidance for recipient reporting on June 22, 2009. Among other areas, the new OMB guidance clarifies that recipients of Recovery Act funds are required to report only on jobs directly created or retained by Recovery Act-funded projects, activities, and contracts. Recipients are not expected to report on the employment impact on materials suppliers (“indirect” jobs) or on the local community (“induced” jobs). The OMB guidance also provides additional instruction on estimating the number of jobs created and retained by Recovery Act funding. OMB’s guidance on the implementation of recipient reporting should be helpful in addressing many of the questions and concerns raised by state and local program officials. However, federal agencies may need to do a better job of communicating the OMB guidance in a timely manner to their state counterparts and, as appropriate, issue clarifying guidance on required performance measurement.
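The counting rule described above can be illustrated with a small filter: section 1512 recipient reports include only directly created or retained jobs, excluding "indirect" and "induced" employment effects. The job records below are hypothetical examples, not actual recipient data.

```python
# Illustrative sketch of the OMB rule that section 1512 recipient reports
# count only jobs directly created or retained by Recovery Act-funded
# projects, not "indirect" or "induced" jobs. Records are hypothetical.

jobs = [
    {"description": "highway construction crew", "type": "direct",   "fte": 12.0},
    {"description": "asphalt supplier hires",    "type": "indirect", "fte": 3.0},
    {"description": "local restaurant staff",    "type": "induced",  "fte": 1.5},
]

def reportable_fte(records):
    """Full-time-equivalent jobs a recipient would report under section 1512."""
    return sum(r["fte"] for r in records if r["type"] == "direct")

print(reportable_fte(jobs))  # 12.0
```

In this hypothetical, only the 12.0 direct FTEs would appear in the recipient report; the indirect and induced effects would instead be captured in the Council of Economic Advisers' macroeconomic estimates.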
OMB’s guidance for reporting on job creation aims to shed light on the immediate uses of Recovery Act funding; however, reports from recipients of Recovery Act funds must be interpreted with care. For example, accurate, consistent reports will reflect only a portion of the likely impact of the Recovery Act on national employment, since Recovery Act resources are also made available through tax cuts and benefit payments. OMB noted that a broader view of the overall employment impact of the Recovery Act will be covered in the estimates generated by the Council of Economic Advisers (CEA) using a macroeconomic approach. According to CEA, it will consider the direct jobs created and retained reported by recipients to supplement its analysis. Since enactment of the Recovery Act in February 2009, OMB has issued three sets of guidance—on February 18, April 3, and, most recently, June 22, 2009—to, among other things, assist recipients of federal Recovery Act funds in complying with reporting requirements. OMB has reached out to Congress; federal, state, and local government officials; grant and contract recipients; and the accountability community to get a broad perspective on what is needed to meet the high expectations set by Congress and the administration. Further, according to OMB’s June guidance, OMB has worked with the Recovery Accountability and Transparency Board to deploy a nationwide data collection system at www.federalreporting.gov. As work proceeds on the implementation of the Recovery Act, OMB and the cognizant federal agencies have opportunities to build on these early efforts by continuing to address several important issues. These issues can be placed broadly into three categories, which have been revised from our last report to better reflect evolving events since April: (1) accountability and transparency requirements, (2) reporting on impact, and (3) communications and guidance.
Recipients of Recovery Act funding face a number of implementation challenges in this area. The act includes new programs and significant increases in funds out of normal cycles and processes. There is an expectation that many programs and projects will be delivered faster so as to inject funds into the economy, and the administration has indicated its intent to assure transparency and accountability over the use of Recovery Act funds. Issues regarding the Single Audit process and administrative support and oversight are important. Single Audit: The Single Audit process needs adjustments to provide appropriate risk-based focus and the necessary level of accountability over Recovery Act programs in a timely manner. In our April 2009 report, we reported that the guidance and criteria in OMB Circular No. A-133 do not adequately address the substantial added risks posed by the new Recovery Act funding. Such risks may result from (1) new government programs, (2) the sudden increase in funds or programs that are new to the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. With some adjustment, the Single Audit could be an effective oversight tool for Recovery Act programs because it can address risks associated with all three of these factors. April report recommendations: Our April report included recommendations that OMB adjust the current audit process to focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding; provide for review of the design of internal controls during 2009 over programs to receive Recovery Act funding, before significant expenditures in 2010; and evaluate options for providing relief related to audit requirements for low- risk programs to balance new audit responsibilities associated with the Recovery Act. 
Status of April report recommendations: OMB has taken some actions and has planned others to help focus the program selection risk assessment on Recovery Act programs and to provide guidance on auditors’ reviews of internal controls for those programs. However, we remain concerned that OMB’s planned actions would not achieve the level of accountability needed to effectively respond to Recovery Act risks and do not provide for timely reporting on internal controls for Recovery Act programs. Therefore, in our July report, we re-emphasized our previous recommendations in this area. To help auditors with single audit responsibilities meet the increased demands imposed on them by Recovery Act funding, we recommend that the Director of OMB take the following four actions: Consider developing requirements for reporting on internal controls during 2009, before significant Recovery Act expenditures occur, as well as ongoing reporting after the initial report. Provide more focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with high risk have audit coverage in the area of internal controls and compliance. Evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. To the extent that options for auditor relief are not provided, develop mechanisms to help fund the additional Single Audit costs and efforts for auditing Recovery Act programs. Status of Recommendations: OMB is currently vetting a proposed pilot project under which a limited number of voluntarily participating auditors performing the single audits for states would communicate in writing internal control deficiencies noted in the single audit within six months of the 2009 fiscal year-end, rather than the nine months required by the Single Audit Act.
If the pilot is properly implemented and achieves sufficient coverage of Recovery Act-funded programs, it may be effective in addressing concerns about the timeliness of single audit reporting related to internal control weaknesses in Recovery Act programs. Because the sufficiency of scope and coverage from this pilot program is uncertain, we are making an additional recommendation to OMB. September recommendation: In order to achieve the objective of more timely reporting of internal control deficiencies over Recovery Act programs, the Director of OMB should take steps to achieve sufficient participation and coverage in the single audit pilot program that provides for early written communication of internal control deficiencies. Because a significant portion of Recovery Act expenditures will be in the form of federal grants and awards, the Single Audit process could be used as a key accountability tool over these funds. However, the Single Audit Act, enacted in 1984 and most recently amended in 1996, did not contemplate the risks associated with the current environment where large amounts of federal awards are being expended quickly through new programs, greatly expanded programs, and existing programs. The current Single Audit process is largely driven by the amount of federal funds expended by a recipient in order to determine which federal programs are subject to compliance and internal control testing. Not only does this model potentially miss smaller programs with high risk, but it also relies on audit reporting 9 months after the end of a grantee’s fiscal year—far too late to preemptively correct deficiencies and weaknesses before significant expenditures of federal funds. 
Congress is considering a legislative proposal in this area and could address the following issues: To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. To the extent that additional audit coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. States have been concerned about the burden imposed by new requirements, increased accounting and management workloads, and strains on information systems and staff capacity at a time when they are under severe budgetary stress. April report recommendation: In our April report, we recommended that the Director of OMB clarify what Recovery Act funds can be used to support state efforts to ensure accountability and oversight, especially in light of enhanced oversight and coordination requirements. Status of April report recommendation: On May 11, 2009, OMB released a memorandum clarifying how state grantees could recover the administrative costs of Recovery Act activities. Under the Recovery Act, responsibility for reporting on jobs created and retained falls to nonfederal recipients of Recovery Act funds. As such, states and localities have a critical role in identifying the degree to which Recovery Act goals are achieved. Performance reporting is broader than the jobs reporting required under section 1512 of the Recovery Act. OMB guidance requires that agencies collect and report performance information consistent with each agency’s program performance measures.
As described earlier in this report, some agencies have imposed additional performance measures on projects or activities funded through the Recovery Act. April report recommendation: In our April report, we recommended that, given the questions raised by many state and local officials about how best to determine both direct and indirect jobs created and retained under the Recovery Act, the Director of OMB should continue OMB’s efforts to identify appropriate methodologies that can be used to (1) assess jobs created and retained from projects funded by the Recovery Act; (2) determine the impact of Recovery Act spending when job creation is indirect; and (3) identify those types of programs, projects, or activities that in the past have demonstrated substantial job creation or are considered likely to do so in the future, and consider whether the approaches taken to estimate jobs created and retained in those cases can be replicated or adapted to other programs. Status of April report recommendation: OMB has been meeting on a regular basis with state and local officials, federal agencies, and others to gather input on reporting requirements and implementation guidance and has worked with the Recovery Accountability and Transparency Board on a nationwide data collection system. On June 22, OMB issued additional implementation guidance on recipient reporting of jobs created and retained. This guidance is responsive to much of what we said in our April report. It states that there are two different types of jobs reports under the Recovery Act and clarifies that recipient reports are to cover only direct jobs created or retained. “Indirect” jobs (the employment impact on suppliers) and “induced” jobs (the employment impact on communities) will be covered in the Council of Economic Advisers (CEA) quarterly reports on employment, economic growth, and other key economic indicators.
Consistent with the statutory language of the act, OMB’s guidance states that these recipient reporting requirements apply to recipients who receive funding through discretionary appropriations, not to those receiving funds through entitlement or tax programs or to individuals. It clarifies that the prime recipient, not the subrecipient, is responsible for reporting section 1512 information on jobs created or retained. The June 2009 guidance also provides detailed instructions on how to calculate and report jobs as full-time equivalents (FTEs) and describes in detail the data model and reporting system to be used for the required recipient reporting on jobs. The guidance on reporting job creation aims to shed light on the immediate uses of Recovery Act funding and is reasonable in that context. It will be important, however, to interpret the recipient reports with care. As noted in the guidance, these reports are only one of two distinct types of reports seeking to describe the jobs impact of the Recovery Act. CEA’s quarterly reports will cover the impact on employment, economic growth, and other key economic indicators. Further, the recipient reports will not reflect the impact of resources made available through tax provisions or entitlement programs. Recipients are required to report no later than 10 days after the end of the calendar quarter. The first of these reports is due on October 10, 2009. After prime recipients and federal agencies perform data quality checks, detailed recipient reports are to be made available to the public no later than 30 days after the end of the quarter. Initial summary statistics will be available on www.recovery.gov. The guidance explicitly does not mandate a specific methodology for conducting quality reviews.
Rather, federal agencies are directed to coordinate the application of the definitions of material omission and significant reporting error to “ensure consistency” in the conduct of data quality reviews. Although recipients and federal agency reviewers are required to perform data quality checks, none are required to certify or approve data for publication. It is unclear how issues identified during data quality reviews would be resolved and how frequently data quality problems would be identified in the reviews. We will continue to monitor these data quality and recipient reporting requirements. July report recommendations: To increase consistency in recipient reporting of jobs created and retained, the Director of OMB should work with federal agencies to have them provide program-specific examples of the application of OMB’s guidance on recipient reporting of jobs created and retained. This would be especially helpful for programs that have not previously tracked and reported such metrics. Because performance reporting is broader than the jobs reporting required by section 1512, the Director of OMB should also work with federal agencies, perhaps through the Senior Management Councils, to clarify what new or existing program performance measures, in addition to jobs created and retained, recipients should collect and report in order to demonstrate the impact of Recovery Act funding. In addition to providing these program-specific examples of guidance, the Director of OMB should work with federal agencies to use other channels to educate state and local program officials on reporting requirements, such as Web- or telephone-based information sessions or other forums. Status of July report recommendations: In recent weeks, federal agencies have issued additional guidance that builds on OMB’s June 22 recipient reporting guidance for their specific programs.
This guidance is in the form of frequently asked questions (FAQs), tip sheets, and more traditional guidance documents that build on what was provided on June 22. We have not assessed the sufficiency of this additional guidance at this time. Federal agencies have also taken steps to provide additional education and training opportunities for state and local program officials on recipient reporting, including Web-based seminars. In addition to the federal agency efforts, OMB has issued clarifications and FAQs on Recovery Act reporting requirements. OMB is also preparing to deploy regional federal employees to serve as liaisons to state and local recipients in large population centers. The objective is to provide on-site assistance and to direct questions to the appropriate federal official. OMB is also establishing a call center for entities that do not have an on-site federal liaison. Funding notification and program guidance: State officials expressed concerns regarding communication on the release of Recovery Act funds and their inability to determine when to expect federal agency program guidance. Once funds are released, there is no easily accessible, real-time procedure for ensuring that appropriate officials in states and localities are notified. Because half of the estimated spending programs in the Recovery Act will be administered by nonfederal entities, states wish to be notified when funds are made available to them for their use as well as when funding is received by other recipients within their state that are not state agencies. OMB does not have a master timeline for issuing federal agency guidance. OMB’s preferred approach is to issue guidance incrementally. This approach potentially produces a more timely response and allows for mid-course corrections; however, it also creates uncertainty among the state and local recipients responsible for implementing programs.
We continue to believe that OMB can strike a better balance between developing timely and responsive guidance and providing a longer range timeline that gives some structure to states’ and localities’ planning efforts. April report recommendation: In our April report, we recommended that, to foster timely and efficient communications, the Director of OMB should develop an approach that provides dependable notification to (1) prime recipients in states and localities when funds are made available for their use, (2) states—where the state is not the primary recipient of funds but has a statewide interest in this information—and (3) all nonfederal recipients on planned releases of federal agency guidance and, if known, whether additional guidance or modifications are recommended. Status of April report recommendation: OMB has made important progress in the type and level of information provided in its reports on Recovery.gov. Nonetheless, OMB has additional opportunities to more fully address the recommendations we made in April. By providing a standard format across disparate programs, OMB has improved its Funding Notification reports, making it easier for the public to track when funds become available. Since we issued our July report, OMB has announced that, beginning August 28, it expects federal agencies to notify recovery coordinators in states, the District of Columbia, Commonwealths, and Territories within 48 hours of an award to a grantee or contractor in their jurisdiction. OMB has taken the additional step of disaggregating financial information (federal obligations and outlays) by Recovery Act program and by state in its Weekly Financial Activity Report. Both reports, along with agency contract and grant awardee information by location, are available on www.recovery.gov.
Our recommendation: The Director of OMB should continue to implement OMB’s approach to providing easily accessible, real-time notification to (1) prime recipients in states and localities when funds are made available for their use and (2) states—where the state is not the primary recipient of funds but has a statewide interest in this information. In addition, OMB should provide a long range timeline for the release of federal guidance for the benefit of the nonfederal recipients responsible for implementing Recovery Act programs. Recipient financial tracking and reporting guidance: In addition to employment-related reporting, OMB’s guidance calls for the tracking of funds by the prime recipient, recipient vendors, and subrecipients receiving payments. OMB’s guidance also allows that “prime recipients may delegate certain reporting requirements to subrecipients.” Either the prime recipient or the subrecipient must report the Data Universal Numbering System (DUNS) number (or an acceptable alternative) for any vendor or subrecipient receiving payments greater than $25,000. In addition, the prime recipient must report what was purchased and the amount, as well as the total number and amount of subawards of less than $25,000. By reporting the DUNS number, OMB’s guidance provides a way to identify subrecipients by project, but this alone does not ensure data quality. The approach to tracking funds is generally consistent with the Federal Funding Accountability and Transparency Act (FFATA). Like the Recovery Act, FFATA requires a publicly available Web site—USAspending.gov—to report financial information about entities awarded federal funds. Yet significant questions have been raised about the reliability of the data on USAspending.gov, primarily because what is reported by prime recipients depends on the unknown data quality and reporting capabilities of their subrecipients.
For example, earlier this year, more than 2 years after passage of FFATA, the Congressional Research Service (CRS) questioned the reliability of the data on USAspending.gov. We share CRS’s concerns about USAspending.gov, including incomplete and inaccurate data and other data quality problems. More broadly, these concerns also pertain to recipient financial reporting under the Recovery Act and its federal reporting vehicle, www.FederalReporting.gov, currently under development. Our recommendation: To strengthen the effort to track the use of funds, the Director of OMB should (1) clarify what constitutes appropriate quality control and reconciliation by prime recipients, especially for subrecipient data, and (2) specify who should best provide formal certification and approval of the data reported. Agency-specific guidance: The Department of Transportation (DOT) and the Federal Highway Administration (FHWA) have yet to provide clear guidance regarding how states are to implement the Recovery Act requirement that economically distressed areas receive priority in the selection of highway projects for funding. We found substantial variation both in how states identified areas in economic distress and in how they prioritized project selection for these areas. As a result, it is not clear whether the areas most in need are receiving priority in the selection of highway infrastructure projects, as Congress intended. While it is true that states have discretion in selecting and prioritizing projects, it is also important that this goal of the Recovery Act be met.
Our recommendation: To ensure that states meet Congress’s direction to give areas with the greatest need priority in project selection, the Secretary of Transportation should develop clear guidance on identifying and giving priority to economically distressed areas that is consistent with the requirements of the Recovery Act and the Public Works and Economic Development Act of 1965, as amended, and develop more consistent procedures for FHWA to use in reviewing and approving states’ criteria. We received comments on a draft of our July report from the U.S. Office of Management and Budget (OMB) and the U.S. Department of Transportation (DOT) on our report recommendations. U.S. Office of Management and Budget: OMB concurred with the overall objectives of the recommendations made to it in our report. OMB offered clarifications regarding the area of Single Audit and did not concur with some of our conclusions related to communications. What follows summarizes OMB’s comments and our responses. OMB agreed with the overall objectives of our recommendations. OMB also noted that it believes the new requirements for more rigorous internal control reviews will yield important short-term benefits and that the steps taken by state and local recipients to immediately initiate controls will withstand increased scrutiny later in the process. OMB is vetting a proposed pilot project under which a limited number of voluntarily participating auditors performing the single audits for states would communicate in writing internal control deficiencies noted in the single audit within six months of the 2009 fiscal year-end, rather than the nine months required by the Single Audit Act. In recent discussions about the pilot program, OMB officials agreed that sufficient coverage of Recovery Act-funded programs will be needed to address concerns about the timeliness of single audit reporting related to internal control weaknesses in Recovery Act programs.
OMB commented that it has already taken, and is planning, actions to focus the program selection risk assessment on Recovery Act programs and to increase the rigor of state and local internal controls over Recovery Act activities. In early August 2009, OMB issued additional guidance for programs with expenditures of Recovery Act funds. OMB has taken steps to achieve audit coverage of Recovery Act programs. However, smaller but high-risk programs under the Recovery Act may not receive adequate attention and scrutiny under the current Single Audit process. OMB acknowledged that accelerating internal control reviews could create more work for state auditors, for which OMB and Congress should explore potential options for relief. States volunteering to participate in OMB’s proposed pilot program will be granted some relief in workload because their auditors will not be required to perform risk assessments of smaller federal programs. OMB has made important progress on some communications issues. In particular, we agree with OMB’s statements that it requires agencies to post guidance and funding information to agency Recovery Act Web sites, disseminates guidance broadly, and seeks out and responds to stakeholder input. In addition, OMB has held a series of interactive forums to offer training and information to Recovery Act recipients on the process and mechanics of recipient reporting; these forums could also serve as a vehicle for additional communication. Finally, OMB has improved its Funding Notification reports by providing a standard format across disparate programs, making it easier for the public to track when funds become available. OMB recently established an approach for notifying key state officials no later than 48 hours after an award is made within their state. Although it is too soon to tell, this latest effort may provide the real-time notification we recommended.
We will continue to monitor the situation and will report on the effectiveness of OMB’s approach in a future report. Moving forward and building on the progress it has made, OMB can take the following additional step: provide a longer range timeline for the release of federal agency guidance. In an attempt to be responsive to emerging issues and questions from the recipient community, OMB’s preferred approach is to issue guidance incrementally. Since our July report, OMB has issued periodic FAQs as an approach to clarifying existing OMB guidance and providing additional information. This approach potentially produces a more timely response and allows for mid-course corrections; however, it also creates uncertainty among state and local recipients. State and local officials expressed concerns that this incremental approach hinders their efforts to plan and administer Recovery Act programs. As a result, we continue to believe OMB can strike a better balance between developing timely and responsive guidance, such as its FAQs, and providing some degree of a longer range timeline so that states and localities can better anticipate which programs will be affected and when new federal agency guidance is likely to be issued. OMB’s consideration of a master schedule and its acknowledgement of the extraordinary proliferation of program guidance in response to Recovery Act requirements seem to support a more structured approach. We appreciate that a longer range timeline would need to be flexible so that OMB and federal agencies could continue to issue guidance and clarifications in a timely manner as new issues and questions emerge. Mr. Chairman, Senator Collins, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For further information on this testimony, please contact J. Christopher Mihm, Managing Director for Strategic Issues, at (202) 512-6806 or mihmj@gao.gov.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony is based largely on GAO's July 8, 2009, report, issued in response to a mandate under the American Recovery and Reinvestment Act of 2009 (Recovery Act). This testimony provides selected updates, including the status of federal Recovery Act outlays. The report addresses (1) selected states' and localities' uses of Recovery Act funds, (2) the approaches taken by the selected states and localities to ensure accountability for Recovery Act funds, and (3) states' plans to evaluate the impact of Recovery Act funds. GAO's work for the report focused on 16 states and certain localities in those jurisdictions, as well as the District of Columbia--representing about 65 percent of the U.S. population and two-thirds of the intergovernmental federal assistance available. GAO collected documents and interviewed state and local officials. GAO analyzed federal agency guidance and spoke with Office of Management and Budget (OMB) officials and with program officials at the Centers for Medicare & Medicaid Services and the Departments of Education, Energy, Housing and Urban Development, Justice, Labor, and Transportation. Across the United States, as of August 28, 2009, Treasury had outlayed about $45 billion of the estimated $49 billion in Recovery Act funds projected for use in states and localities in fiscal year 2009. More than three-quarters of the federal outlays have been provided through the increased Medicaid Federal Medical Assistance Percentage (FMAP) and the State Fiscal Stabilization Fund (SFSF) administered by the Department of Education. GAO's work focused on nine federal programs that are estimated to account for approximately 87 percent of federal Recovery Act outlays in fiscal year 2009 for programs administered by states and localities.
All 16 states and the District have drawn down increased Medicaid FMAP grant awards of just over $19.6 billion for October 1, 2008, through September 4, 2009, which amounted to almost 84 percent of such funds available to them. All states and the District experienced enrollment growth in this period. Several states noted that the increased FMAP funds were critical in their efforts to maintain coverage at current levels. States and the District reported they are planning to use the increased federal funds to cover their increased Medicaid caseloads and to maintain current benefits and eligibility levels. As of September 1, the Department of Transportation (DOT) had obligated approximately $11 billion for almost 3,800 highway infrastructure and other eligible projects in the 16 states and the District and had reimbursed these 17 jurisdictions about $604 million. Across the nation, almost half of the obligations have been for pavement improvement projects because such projects did not require extensive environmental clearances; were quick to design, obligate, and bid on; could employ people quickly; and could be completed within 3 years. Officials from most states considered project readiness, including the 3-year completion requirement, when making project selections and only later identified the extent to which these projects fulfilled the economically distressed area requirement. We found substantial variation in how states identified economically distressed areas and how they prioritized project selection for these areas. FHWA issued clarifying guidance to address our recommendation in August 2009. As of September 1, 2009, the District and 15 of the 16 states covered by our review had received approval from Education for their initial SFSF funding applications. Pennsylvania had submitted an application to Education, but it had not yet been approved.
As of August 28, 2009, Education had made $21 billion in SFSF education grants available to the 15 states and the District, of which over $7.7 billion had been drawn down. School districts said they would use SFSF funds to maintain current levels of education funding, particularly for retaining staff and current education programs. They also told us that SFSF funds would help offset state budget cuts. Overall, states reported using Recovery Act funds to stabilize state budgets and to cope with fiscal stresses. The funds helped them maintain staffing for existing programs and minimize or avoid tax increases as well as reductions in services. States have implemented various internal control programs; however, federal Single Audit guidance and reporting do not fully address Recovery Act risk. The Single Audit reporting deadline is too late to provide audit results in time for the audited entity to take action on deficiencies noted in Recovery Act programs. Moreover, current guidance does not achieve the level of accountability needed to effectively respond to Recovery Act risks. Direct recipients of Recovery Act funds, including states and localities, are expected to report quarterly on a number of measures, including the use of funds and estimates of the number of jobs created and retained.
The District of Columbia Public Schools’ (DCPS) draft Long-Range Facilities Master Plan, dated July 17, 1997, states that the majority of District public schools were built over 50 years ago, generally have not been well maintained, and, consequently, carry substantial deferred maintenance. In addition, concerns about safety and problems with leaky school roofs have been widely reported. We have documented the less-than-adequate condition of the District’s public schools in several reports. In 1992, Parents United for the District of Columbia, an education advocacy group, filed a lawsuit in the Superior Court of the District of Columbia naming several city officials and alleging their failure to perform their duties with respect to the D.C. public schools, including, but not limited to, their duties related to hundreds of fire code violations in aging D.C. school buildings. In an effort to respond to these concerns, the Congress included legislative provisions on this matter in recently enacted legislation. Sections 2550-2552 of the District of Columbia School Reform Act of 1995 called for the Administrator of the General Services Administration (GSA) to provide technical assistance to the District public schools in the area of facilities management. These sections also called for the Mayor and the District of Columbia Council, in consultation with the Administrator of GSA, the Financial Responsibility and Management Assistance Authority (Authority), the Board of Education, and the Superintendent of Schools, to design and implement a comprehensive long-term program for the repair, improvement, maintenance, and management of District public school facilities and to designate or establish an agency within the District of Columbia government to administer the program. The program also was required to include short-term and long-term funding sources.
Section 603(e)(2)(A) of the Departments of Labor, Health and Human Services, and Education, and Related Agencies Appropriations Act, 1997, authorized the Authority to establish an account to receive the proceeds from the privatization of certain government entities to carry out the District of Columbia School Reform Act of 1995 (which provides for the repair and improvement of District schools) and to finance public elementary and secondary school facility construction and repair within the District of Columbia. Section 5201 of the Omnibus Consolidated Appropriations Act of 1997 authorized the Authority to contract with private entities to carry out a program of school facility repair for District public schools, in consultation with GSA. On November 15, 1996, the Authority restructured DCPS, installing a nine-member Emergency Transitional Education Board of Trustees and a Chief Executive Officer (CEO), both as agents of the Authority. The Authority also delegated its authority to oversee all facilities and property to the new Board of Trustees. The Authority removed the then Superintendent of Schools and gave the CEO responsibility for all the authorities, powers, functions, exemptions, and immunities of the former Superintendent. The CEO established an office of Chief Operating Officer (COO)/Director of Facilities and hired a COO in January 1997 to manage and implement the school facilities improvement program. To assist in this effort, GSA updated a study by developing a comprehensive facilities revitalization plan, Determination and Prioritization of the District of Columbia Public Schools Projects, which was delivered to DCPS on February 18, 1997. The plan described problems such as leaky roofs, inoperable boilers, numerous fire code violations, and the absence of a long-range facilities master plan, and it estimated the cost of upgrading the school infrastructure to be $2 billion.
The February 1997 plan and the underlying work were the basis for the long-range facilities master plan. To develop the long-range facilities master plan, a task force was formed that included representatives from DCPS, the Office of the Mayor, and the 21st Century School Fund. A February 28, 1997, draft of the long-range plan was submitted to the D.C. Council, resubmitted with changes in April, and resubmitted again in July. The Council did not vote on the plan, and DCPS submitted it to the Congress to meet the congressionally mandated submission date of April 25, 1997. The draft long-range facilities master plan considered roof replacement to be the number one priority. GSA contracted for and managed roof work at 10 schools—initially seven schools at the Authority’s request. In June 1997, DCPS requested GSA’s assistance, and GSA managed work on an additional three schools. DCPS oversaw work on another 51 schools for which roof work was completed in fiscal year 1997. Our objectives were to determine (1) when funds were made available to pay for roof repairs, (2) the cost of the roof repairs, including the cost per square foot, and (3) whether there are additional roofs to be repaired in fiscal year 1998 and beyond. To determine when the capital funds were available to pay for roof repairs, we reviewed documents provided by the U.S. Department of Education, the Authority, the District Chief Financial Officer’s (CFO) office, and the DCPS CFO. In addition, we reviewed funding request modification documents prepared by DCPS and approved by the District’s Office of Budget and Planning, monthly reports produced by the District’s Financial Management System, and other financial documents provided by DCPS.
To determine the cost of the roof repairs, we obtained and reviewed information from the DCPS contract files for fiscal year 1997 projects, which included, for each school, the dollar amount and other terms of each contract, the types of roofing material used, the size of the area replaced or repaired, modifications (change orders), daily inspection sheets, invoices submitted for payment, and the actual amounts paid to contractors. In addition, we compared design and construction cost estimates prepared by a DCPS engineering consultant and GSA with the contract amounts and change orders for the school roofs replaced or repaired. We held discussions with DCPS officials to obtain reasons for any significant variances from the cost estimates. We also interviewed District government officials, including officials from the Authority, the Chief Financial Officer for the District, the Deputy Chief Financial Officer for the District’s Office of Budget and Planning, the Chief Operating Officer of DCPS and his Capital Project Division staff, the Chief Financial Officer of DCPS, and District Council officials. In addition, we interviewed officials from the General Services Administration, the U.S. Department of Education, a DCPS consultant, Parents United, and the 21st Century School Fund to obtain additional information to satisfy our objectives. To determine whether additional roofs required repairs, we reviewed DCPS’ fiscal year 1997 Capital Improvement Program priority lists of schools needing roof work and various facility assessments prepared by contractors, and we discussed modifications and changes to the plans with DCPS officials. We also reviewed DCPS’ proposed Capital Improvement Program Plan for fiscal years 1999-2004, including roof replacement prioritization schedules, to determine the extent of roofing repair projects planned for fiscal year 1998 and future years.
While we reviewed the information contained in the contract files to determine the cost per square foot of roofs replaced/repaired, we did not independently verify the accuracy of the square footage estimates but instead relied on the measurements prepared by GSA and the DCPS engineering consultant. We did not review support for payments made to contractors to determine their validity, nor did we attempt to determine whether the cost of individual projects was reasonable. We reviewed the work performed by the District's independent public accounting firm on DCPS capital project funds. We requested comments on a draft of this report from the Authority, DCPS, the District's CFO, GSA, and the U.S. Department of Education. Written comments were received from the Authority, DCPS, and GSA and are reprinted in appendixes III, IV, and V, respectively. Oral comments were obtained from the District's CFO and the Department of Education. Those comments have been considered and incorporated in our report as appropriate. We conducted our work from October 1997 through February 1998 in accordance with generally accepted government auditing standards. Based on our review of the information obtained from the Authority, the District's Chief Financial Officer, the Department of Education, and the District of Columbia Public Schools' Chief Financial Officer, funds were available to begin roof repairs on June 20, 1997, when D.C. Public Schools closed for the summer vacation. Table 1 shows the sources, dates, and amounts of funds received by the Authority. By June 1997, the Authority had received on behalf of DCPS a total of $49.7 million in capital funds, as follows: $11.5 million in October 1996 from fiscal year 1996 general obligation bond proceeds, approximately $18 million in March 1997 from the federal government's sale of the College Construction Loan Insurance Association (Connie Lee), and $20 million in June 1997 from general obligation bond proceeds. 
In addition, in September 1997, the Authority received about $36.8 million from the sale of Student Loan Marketing Association (Sallie Mae) stock warrants, making the total received in fiscal year 1997 for capital projects about $86.5 million. Prior to DCPS assuming responsibility for managing the fiscal year 1997 capital program work, the Authority had engaged GSA to oversee roof repair and other work, such as installing boilers and chillers. On November 19, 1996, the Authority entered into a memorandum of agreement with GSA to provide contract administration and program management services for those contracts. On November 27, 1996, GSA issued a task order to an architectural and engineering consultant (DMJM) for design work related to five schools. In February 1997, construction work began on those five schools. According to GSA and DCPS officials, the $11.5 million that the Authority had received in October 1996 was earmarked for GSA-managed contracts. According to DCPS' Chief Operating Officer (COO), when he assumed his position in January 1997, neither funds nor technical capital project staff were available to prepare or manage the preparation of scopes of work, drawings, and cost estimates. While Authority records showed that additional funds were available in March 1997, the COO stated that he began to hire technical capital staff to address capital program needs in April 1997, after being told that funds were available. We were not provided any documentation indicating when DCPS was notified that additional funds were available for capital projects on the school facilities. In its audit report on the District's financial statements for fiscal year 1997, the District's independent auditors identified a material weakness concerning control over transactions involving the Authority. 
The report indicated that the District has not developed adequate procedures to account for funds held by the Authority and does not effectively reconcile the amounts which are recorded. The auditor noted that the District and the Authority have not developed procedures to notify each other of amounts anticipated or actually received by the Authority on behalf of the District. On May 19, 1997, DCPS issued a Request for Qualifications (RFQ) for capital projects it intended to manage, which resulted in prequalification of nine contractors. In June 1997, DCPS authorized the consulting architectural and engineering firm DMJM, which had a competitively bid contract with GSA, to develop the scope of work for roof replacement at 48 schools. This work was performed from the beginning of June to mid-July and included surveying each roof, reviewing and photographing existing conditions, and developing technical specifications to establish quality standards and a cost estimate. On July 1, 1997, DCPS issued an Invitation for Bid and Contract (IFBC) for a single (or package) contract for roof replacement at 15 schools and for work on boilers and chillers at five schools. DCPS officials told us that they were not initially successful in obtaining bidders because contractors were hesitant to bid on such a large package involving such diverse work. On July 11, 1997, DCPS issued an addendum to the IFBC, resulting in eight separate, smaller packages, two of which included the boiler and chiller work. The other six included roof replacements at 48 schools. Contracts for two of those six packages (15 schools) were awarded. The remaining four packages (33 schools) were reissued as another addendum covering 23 schools. The remaining 10 schools were deferred at that time. Of these 10 schools, 2 were repaired by DCPS in-house maintenance staff. 
The addendum for the 23 schools allowed prequalified contractors to bid on one or more of those schools; work on 19 schools was awarded on that basis, for a total of 34 schools under contract. Roof work for the remaining 12 DCPS-managed projects completed during fiscal year 1997 included 3 from the original IFBC and 9 others. DCPS officials told us they urged contractors to submit bids. Based on our analysis of contract documents, the majority (46 schools) of the roof repair work started the third week in July or later. The draft Long-Range Facilities Master Plan called for roof replacement work at 50 schools. According to the COO, when the Plan was presented at the end of February 1997, he had believed that the work could not be completed until the end of October 1997 but had hoped that a substantial number of schools could be completed prior to September 30, 1997. The COO advised us that on July 10, 1997, he had informed the Superior Court that the estimated completion dates, based on the best available data, ranged from mid-August 1997 through September 20, 1997. He said that these estimates did not consider the July 11, 1997, court ruling that this type of work could not be performed while schools were occupied. Ultimately, because of the large number of schools involved, it was decided to delay the opening of D.C. public schools until September 22, 1997. DCPS records show that as of February 4, 1998, the total cost of the fiscal year 1997 roof repair project, including change orders and consulting fees, was about $37 million. A significant, but not determinable, amount of these costs was attributable to factors other than what would be strictly interpreted as roof replacement/repair work. Among these were structural integrity problems, fire damage, the general deterioration from deferred maintenance, and warranty stipulations concerning deferred maintenance. 
Extensive work was performed to repair and replace masonry, cornices, flashing, coping, and cupolas, as well as to clean drains. For ease of presentation, we have characterized this work as roof and roof-related work. Based on our review and analysis of the data, the average cost for roof repair work performed on schools managed by both DCPS and GSA in fiscal year 1997 was about $20 per square foot, with costs at individual schools ranging from about $4 to $77. The average cost per square foot for GSA-managed contracts was about $13, whereas the average for DCPS-managed contracts was about $22. As part of its fiscal year 1997 Capital Program budget, DCPS had initially budgeted $22 million for roof work to be performed in fiscal year 1997. According to DCPS officials, the $22 million was a preliminary estimate and did not include amounts for work such as repairing flashing, masonry, or cornices. In addition, the $22 million did not include costs to address the complexity of the roof areas and other issues discussed below, such as the compressed time schedule. Further, the priority list of schools on which the $22 million estimate was based was modified several times during fiscal year 1997. DCPS officials were aware that they would have to pay a premium for labor and materials because of the various factors that affected costs. Table 2 summarizes the work performed, cost per square foot, and other information for the roof work managed by both DCPS and GSA. In total, roof work was completed at 61 schools. DCPS capital project staff managed roof projects at 46 schools, and its in-house maintenance staff performed minor work at 7 schools (Cardozo Senior High, Cleveland Elementary, Eaton Elementary, Eliot Junior High, Hart Junior High, Janney Elementary, and Winston Elementary). GSA managed roof projects at 10 schools. Included were two schools (Tyler and Spingarn) where DCPS and GSA managed separate projects. 
Table 2 does not include data for minor work performed at the seven schools because the cost data were not complete. Accordingly, that work, which DCPS officials estimated to have cost about $189,000, is not included in our computations of total cost or cost per square foot. Table 2 indicates a wide range of costs per square foot by school and by responsible agency (DCPS or GSA). The roofs worked on by DCPS contractors had costs ranging from a low of $4.19 per square foot (Ketcham Elementary) to a high of $77.27 per square foot (Cook Elementary). In contrast, costs for schools worked on by GSA's contractors ranged from a low of $10.10 per square foot (Shadd Elementary) to a high of $27.43 per square foot (Spingarn Gym, where, according to GSA officials, as a result of a fire, a new roof deck and supporting structure were installed and a significant amount of asbestos was removed). DCPS officials provided various explanations for the wide range in costs per square foot among schools such as Cook Elementary ($77.27), MacFarland Junior High School ($64.45), and Ketcham ($4.19). According to DCPS officials, less than 20 percent of Cook's total cost pertained to roof replacement. The majority of the cost was due to repairing an ornamental cornice around most of the building just below the roof level. The cornice had deteriorated and portions of it were at risk of falling off; therefore, Cook was considered a major safety concern. In addition, the cornice had to be repaired from a crane. Further, DCPS stated that much work was done to repair the skylight and to repair the coping with a new stainless steel covering. According to DCPS officials, work at MacFarland Junior High was awarded to the low bidder of a package covering nine schools. DCPS officials and engineering consultants stated that large amounts of masonry repair (repointing and replacement of broken brick), installation of metal panels on high parapet walls, and skylight repair were performed. 
The engineers' original scope of work described badly deteriorated mortar joints, broken brick, and severely cracked parging on parapet walls, with resulting leaks. In addition, according to DCPS, repairs were performed on the flashing; the stone coping was replaced; and the drain was cleaned. On the other hand, Ketcham was awarded at the low end. According to DCPS officials, the contractor did not give full consideration to the condition of the roof or the complexity of the work to be done. Several factors contributed to the costs being considerably higher than what GSA officials stated had been their experience for roofing work in the Washington, D.C., metropolitan area. GSA's estimates ranged from $8 to $10 per square foot and reflected the work required to repair and renovate typical flat, large, built-up roof systems that generally have had a good repair record. However, a combination of factors resulted in substantially higher per square foot costs for the D.C. Public Schools. Among these were the compressed schedule under which most of the 1997 roof work was performed; the diversity and complexity of the roofs on the D.C. public school buildings; the extensive deferred maintenance and other roof-related work, including additional work required to secure the long-term warranties from materials suppliers and contractors; and other factors, such as the District's history of paying vendors. DCPS-managed work was completed within extremely narrow time frames. This tight schedule was caused by the lack of (1) technical capital project staff and (2) advance project planning to provide an adequate basis for seeking bids, as well as (3) the fast-approaching opening of schools slated for September 2, 1997. This situation resulted in DCPS scrambling to get contractors in what it found to be a tight summer market and selecting an approach that, while faster for getting the work done on time, could have been more costly. 
To accelerate the roof work, DCPS relied exclusively on the design-build approach rather than the traditional method. Under the traditional method, management separately performs or contracts for project design to provide the drawings, specifications, reports, and other materials needed to obtain bids for the actual repair work. Thus, separate procurements are involved in first designing and then contracting for the renovation work. This approach tends to stretch out the time frame but provides a great measure of detail to the prospective bidder, thus lowering the risk. In contrast, the design-build method involves the winning bidder both providing the design and performing the renovation work. One of the primary advantages of using the design-build approach is that the project can be completed in a shorter time frame because the design phase can be done concurrently with the construction phase. However, because the contractor assumes more risk for unforeseen difficulties under the design-build approach, the costs can be higher. Given the level of deferred maintenance and the limited time available both for submitting bids and for performing the work, it would appear that the risk assumed was substantial. GSA's earlier involvement allowed it an average of 67 days to complete its 10 projects. In contrast, all of the DCPS-managed work was completed in well under the 67-day average of GSA's work, with the longest project taking 50 days and the average being 36 days. The shortest DCPS project took 3 days. Despite taking less time, the DCPS-managed work, as our analysis of the data in table 2 shows, involved more roof areas and, as discussed later in greater detail, more complex work. GSA stated that it was able to secure contracts earlier in the year, when the market was not saturated with roof work, which typically results in lower costs. 
Similarly, neighboring school systems in the Washington, D.C., metropolitan area pointed out that they did not typically attempt to complete roofing projects in the short time frames accomplished by DCPS during 1997. According to a Montgomery County Public Schools roofing specialist, roof replacement work would typically be done over the full summer session, from about June 20 to August 31. In addition, according to the Fairfax County Public Schools engineer, contracts are usually awarded in the early part of the year for work to begin in June, and the county normally operates on a 2-year planning horizon. The Fairfax County Public Schools Director of Design and Construction also told us that, depending on the size of the building and the material used, a roofing replacement can take from 6 weeks to 6 months. The Fairfax County Public Schools engineer further stated that the cost is generally 20 to 30 percent higher when a project is put out for bid in the summer. DCPS was unsuccessful in obtaining bids on a larger package advertised on July 1, 1997, for 15 schools and subsequently repackaged all planned work into 8 smaller packages, which went out in mid-July. DCPS officials advised us that they actively solicited bids to get the work performed and that 2 of the 16 vendors involved were from outside the Washington, D.C., metropolitan area, including one brought in purposely to handle the clay tile roof project at Bancroft Elementary. DCPS also used a sole-source procurement in fiscal year 1997 for one project, which was performed on an emergency basis. Work was completed in 18 days, involving extensive overtime. DCPS officials advised us that the Langdon Elementary School project was initiated after the DCPS Quality Assurance Task Force identified a potential structural problem shortly before school was to open. Work started on September 9 and was substantially completed on September 27, 1997, at a cost of $32.99 per square foot. 
While a common denominator of much of this work was the premium time (labor costs) involved, DCPS officials told us that they did not believe they had any clear alternatives. According to the COO, DCPS could not cut back on the number of schools or the scope of work at those schools because of the court's mandate regarding fire code violations. GSA and the DCPS engineering and architectural consultant agreed that DCPS roof renovation work was not typical since the roofs were diverse and complex and had significantly deteriorated. According to DCPS officials and the DCPS engineering consultant, the diversity and complexity of the roofs on the schools resulted in higher costs. These officials stated that the roofs were not generally the typical flat roofs used on more recently built schools but instead were made up of multiple roof areas and materials. To illustrate, Fairfax and Montgomery County school engineers pointed out that 90 percent of their roofs are generally flat and use modified bitumen. In contrast, 18 of the 56 DCPS- and GSA-managed projects worked on during fiscal year 1997 involved two types of material, such as modified bitumen and slate, and 7 involved three types of roofing material. Inherent in these contrasts is that the newer suburban structures have larger, flat surfaces that are easier and safer to work on, versus DCPS' often smaller, sloped surfaces of metal and slate. The number of roof areas is also a factor. The number of roof areas that were replaced/repaired at each school ranged from 1 (at Leckie Elementary) to as many as 37 (at Dunbar Senior High School). Forty had 6 or more areas repaired; 25 had at least 10; and 6 had 20 or more. (Appendix II illustrates a typical District of Columbia public school roof, where multiple roof areas were replaced/repaired. It also highlights some of the technical features, including cupolas and skylights.) 
According to the DCPS engineering consultant, different types of roofing specialists were required to address the diversity of the roofs. The material most frequently used to replace these roofs was two-ply modified bitumen. Table 2 reveals that in addition to two-ply modified bitumen, a variety of materials were used to repair the roofs, such as slate tiles, clay tiles, metal, asphalt shingle, and fiberglass asphalt. Some materials are more expensive than others. Metal and slate roofs are commonly considered more expensive than a modified bitumen roof. In addition, DCPS officials stated that a subcontractor was brought in from another state to repair clay tiles since no local firm was available at the time the work had to be completed. In recent years, it has been widely documented that the majority of DCPS roofs were badly deteriorated because maintenance had been deferred for many years. DCPS officials stated that the $22 million budgeted for roof repairs at the beginning of fiscal year 1997 did not include funding for deferred maintenance work or the 20-year manufacturers' warranties. The manufacturers' warranties were conditional on certain deferred maintenance and other roof-related work being done. Table 2 reveals that for the majority of the schools, a substantial amount of roof-related or deferred maintenance work was performed. For instance, common roof-related work included replacing skylights and gutters, repairing coping and flashing, repointing masonry, and cleaning drains. In addition, many roofs required tapered insulation, resealing or repointing of parapets, and structural reinforcement of the roof to redirect the water flow. According to DCPS officials, many of the roofs and supporting structures had to be completely replaced because they were badly deteriorated and beyond patching. They stated that patching would have been only a short-term solution to a long-standing problem. 
For example, Spingarn Senior High School repairs averaged $36.18 per square foot because of the major structural work required. DCPS officials informed us that the entire slate roof was badly deteriorated and that daylight could be seen from inside the attic. Slate on 14 roof areas was replaced. To support the new slate, new wood blocking was required, and 700 feet of new coping was installed. In addition, we were told that numerous roof expansion joints were repaired and that the triangular pediment over the colonnade at the front entrance was also repaired. The bid solicitation process used in the replacement of DCPS roofs required contractors to provide 2-year guarantees on workmanship and 20-year manufacturers' warranties on materials. DCPS officials stated that the deferred maintenance work was necessary to obtain the guarantees/warranties that they had required. According to DCPS officials, manufacturers perform site inspections to ensure that the roofs are installed according to their design specifications and that factors, such as flashing and caulking, which can contribute to premature roof failure, are up to industry standards. DCPS officials told us that as of January 26, 1998, DCPS had received 20-year manufacturers' warranties for 44 roof projects and 2-year contractor guarantees for 35 roof projects. DCPS officials also stated that while some of the school roofs that were replaced during the summer of 1997 may have had existing warranties, they believe that since the roofs were not well maintained and protected, DCPS would not have prevailed in a warranty claim. For example, the officials cited numerous cases in which inspections of leaky roofs disclosed that large amounts of debris, or even mattresses, had been allowed to accumulate. To the extent that such items retain water, they keep the roof surface saturated, thus accelerating deterioration of the roof membrane and substrate. The District had a well-publicized poor payment history in recent years. 
For example, in fiscal years 1994, 1995, and 1996, the District delayed payments owed to vendors and Medicaid providers because it had cash flow problems. Consequently, contracting firms have expressed reluctance to do business with the District, and this, according to DCPS officials, became quite evident in the summer of 1997 when DCPS issued its invitation for bids. Contractors were particularly reluctant to submit bids for large contracts (packages), fearing that DCPS would not be able to honor its obligations. Therefore, according to DCPS officials, contractors had to be urged to submit proposals, which DCPS officials believe could have resulted in DCPS paying a higher-than-normal cost to repair the roofs. Given the nature of the work and the circumstances involved, the costs have not differed significantly from what was expected before contracting for this work. The aggregate estimated cost for the roof work managed by both GSA and DCPS in fiscal year 1997 was approximately $31.7 million, about 3.5 percent less than the $32.7 million contract amounts. As of February 4, 1998, DCPS had provided us with change orders totaling about $2 million, which brings the preliminary total to about $34.7 million, or about 10 percent over the consultants' cost estimates. In addition, DCPS incurred about $2.1 million for consulting, contract administration, and construction management fees. Prior to contracting out the roof work, DCPS had engaged an architectural and engineering firm, with which GSA had a contract under which it could issue task orders, to develop cost estimates of the roof replacement/repair work. Almost all estimates were prepared by one of two architectural and engineering consultants; in a few instances, DCPS or GSA staff worked with contractors to prepare estimates. 
Estimates were based on field observations to determine existing conditions and the specific location and extent of required work, and included diagrams (and, for most schools, photographs) of each roof, narrative descriptions, quality specifications of material to be installed, and a cost estimate for each school. As of February 4, 1998, DCPS had received proposals for change orders pertaining to 27 schools for a total of about $2 million. In most cases, the proposals resulted from requiring additional work beyond the original scope of work, such as structural repairs of decks and work to clean or replace drains, flashing, and coping. About 60 percent, or $1.2 million, of the change orders are associated with additional costs at two schools, Browne Junior High and Roosevelt Senior High. About 35 percent of this $1.2 million was a result of premium labor rates required to accelerate the work, and the remainder was primarily for additional masonry work, installation of a new metal roof, and drain and gutter repairs. As of February 4, 1998, the DCPS Capital Improvement Program budget indicates that about $35 million is expected to be spent on 40 school roof projects in fiscal year 1998. According to the DCPS COO, DCPS has about $41.8 million available to enable it to get an early start with the procurement process. According to DCPS officials, on October 31, 1997, they engaged an engineering consultant to (1) identify the scope of work and (2) develop cost estimates. The scope of work and cost estimates for 12 schools were completed in fiscal year 1997. DCPS officials told us that as of February 27, 1998, the engineering consultant had inspected an additional 19 school roofs and developed scope of work and cost estimates that reflect direct labor and materials costs and other costs, such as overhead, general conditions, bond and insurance, and contingencies. 
According to DCPS officials, scope of work and cost estimates for the remaining nine schools will be prepared in May 1998. DCPS officials informed us that as of November 3, 1997, they had completed roof repair work on five schools for which the scope of work and cost estimates had been completed in fiscal year 1997. DCPS officials anticipate that roof repair work at the remaining 35 schools will begin in the spring and will be completed during the summer 1998 recess. Because the lawsuit that gave rise to the court ruling on performing roof work while schools are occupied has been settled, DCPS expects to be able to work during the school year using precautions similar to those employed in neighboring school jurisdictions. DCPS advised us that, in the event of emergency roof repairs, it has a plan that involves relocating students so that the necessary work can be completed during the school year. This earlier start than in fiscal year 1997 should allow more time to have roof work conducted under normal conditions, possibly resulting in lower costs to the District Government. The District of Columbia Public Schools' proposed Capital Improvements Plan for fiscal years 1999-2004 indicates that an additional $63 million in roof replacement is anticipated during this period. According to a Facilities Planning, Programming and Quality Assurance Division official, the $63 million projection is an estimate for budget and planning purposes, and the amount is not associated with particular schools. DCPS expects to use proceeds from the sale of schools to help finance fiscal year 1998 and later school projects. Section 5206(a) of the Omnibus Consolidated Appropriations Act, 1997, authorizes the Authority to dispose of certain school property and deposit the proceeds in the Board of Education Real Property Maintenance and Improvement Fund. 
Currently, DCPS has 45 closed schools, which it intends to sell, lease, lease with the option to buy, or develop as public/private partnerships. DCPS sold 1 school in the fall of 1997 and expects to generate $20 million from the sale of an additional 15 schools in fiscal year 1998. In addition, the Authority has agreed to commit a minimum of 27.5 percent of the District's general fund long-term financing authority (annual bond proceeds) toward completion of the repairs required by the Long-Range Facilities Master Plan. We received comments from the Authority, the District's Chief Financial Officer, DCPS, GSA, and the U.S. Department of Education on a draft of this report. Written comments from the Authority, DCPS, and GSA are reprinted in appendixes III, IV, and V, respectively. Those commenting generally agreed with the facts presented in this report. The Authority noted that most of the significant events and time frames outlined in the report are consistent with its records. DCPS stated that our major findings on the cost and conduct of the 1997 upper building stabilization program are accurate. The District's CFO, GSA, and the U.S. Department of Education agreed with the report as related to their respective activities. Both the Authority and DCPS offered their perspectives on the availability-of-funds issue discussed in the report. DCPS stated that funds were not available to DCPS for capital projects until April 1997. In that regard, the Authority stated that it advises the District's Office of the Chief Financial Officer regarding the availability of funds; that office, in turn, is responsible for communicating with District agencies, including DCPS. The Authority and DCPS also suggested additional discussion of the impact of the D.C. Superior Court ruling related to the roof repair projects. 
The Authority noted that the additional requirements imposed by the court ruling increased the difficulty of project management and added to the cost of the repair program. Similarly, in several sections of its comments to our draft report, DCPS referred to the July 11, 1997, court order as imposing restrictions, compressing the work schedule, and ultimately delaying the opening of all District public schools until September 22, 1997. Regarding the availability of funds to DCPS during fiscal year 1997, as discussed in the report, we were not provided documentation that would establish when DCPS was notified that the Authority had funds available for capital projects. This communication issue, which apparently is not isolated to the DCPS capital projects funding, was highlighted in the most recent report of the independent public accounting firm hired by the District. As noted in our report, the independent auditors identified a material weakness concerning control over transactions involving the Authority. The report indicated that the District has not developed adequate procedures to account for funds held by the Authority and does not effectively reconcile the amounts which are recorded. The auditor noted that the District and the Authority have not developed procedures to notify each other of the amounts anticipated or actually received by the Authority on behalf of the District. Concerning the impact of the court involvement, as discussed in our report, there were a number of factors that were either within or outside the managerial control of the Authority and current or former DCPS management. We do not offer any view on whether any one of these factors was the dominant reason for either the cost or timing issues concerning the roof repairs or whether current DCPS management could have reasonably mitigated those effects. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 15 days from the date of the report. At that time, we will send copies of this report to the Ranking Minority Member of your Subcommittee and the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations and their Subcommittees on the District of Columbia and the Subcommittee on the District of Columbia of the House Committee on Government Reform and Oversight. We will also send copies to the Chairman of the District of Columbia Financial Responsibility and Management Assistance Authority, the Chief Financial Officer of the District of Columbia, and the Chief Executive Officer of the District of Columbia Public Schools. Copies will be made available to others upon request. Major contributors to this report are listed in appendix VI. If you or your staff need further information, please call me at (202) 512-4476. Parents United for the District of Columbia, an education advocacy group, filed a lawsuit against the former Mayor, the District of Columbia, and the Fire Chief of the D.C. Fire Department alleging failure of the D.C. officials to adequately inspect for and remedy violations of the District of Columbia Fire Prevention Code and other safety hazards in the public schools. A trial was held regarding the Parents United lawsuit. The trial resulted in a D.C. Superior Court Order requiring: (1) the D.C. Fire Chief to conduct semiannual inspections of every public school in the District and to submit reports of fire code violations to the Court and the plaintiffs, (2) the Fire Chief to order the immediate closing of any public school building in D.C. with life threatening fire code violations, including ruptured ceilings, and (3) the plaintiffs to file reports with the Court detailing the abatement or the abatement plan for the fire code violations noted. 
The District of Columbia Public School Superintendent’s Task Force on Education Infrastructure for the 21st Century issued the Preliminary Facilities Master Plan 2005 for the District of Columbia Public Schools. The task force was established by the Superintendent of D.C. schools to address the aging and physical deterioration of the D.C. public schools. Public Law 104-134 was enacted, requiring the General Services Administration to provide technical assistance to the District of Columbia Public Schools and to assist the District of Columbia Public Schools in developing a facilities revitalization plan. The General Services Administration was to consider the Preliminary Facilities Master Plan 2005 for the District of Columbia Public Schools in the development of the facilities revitalization plan. A Memorandum of Understanding between the General Services Administration and the Superintendent of the District of Columbia Public Schools was signed, requiring the General Services Administration to provide technical assistance and related services to the District of Columbia in the development of a repair and capital improvement program for the District of Columbia Public Schools. Public Law 104-194, the 1997 Appropriations Act for the District of Columbia, was enacted, providing $9.2 million for school repairs in a restricted line item. September 30, 1996 Public Law 104-208 was enacted, providing Student Loan Marketing Association (Sallie Mae) and College Construction Loan Insurance Association (Connie Lee) funds as well as transferring the $9.2 million from Public Law 104-194 to the Authority to finance D.C. public school facility construction and repair. The law also gave the Authority authorization to contract out for public school repair, in consultation with the General Services Administration. Further, the General Services Administration was required to assist in the short-term management of the repairs and capital improvements. 
(continued) The Authority received $11.5 million from fiscal year 1996 general obligation bond proceeds to be used for D.C. public school repairs and capital improvements. The Authority restructured the District of Columbia Public Schools by establishing a Board of Trustees and replacing the then Superintendent of Schools with a new Chief Executive Officer. November 19, 1996 A Memorandum of Understanding between the General Services Administration and the Authority was signed, requiring the General Services Administration to provide program management services to assist in the short-term management of the repairs and capital improvements for the District schools, per Public Law 104-208. The District of Columbia Public School Chief Executive Officer hired a Chief Operating Officer to manage and implement the school facilities improvement program. The General Services Administration provided the District of Columbia Public Schools with a facilities revitalization plan as agreed to in the Memorandum of Understanding dated July 25, 1996. The General Services Administration issued Notices to Proceed to roofing contractors for certain D.C. public schools. The District of Columbia Public Schools submitted a draft Long-Range Facilities Master Plan to the D.C. Council for approval. The plan included a priority listing of 50 schools to receive roof replacement in fiscal year 1997. The Authority received $18.25 million from the federal government’s sale of Connie Lee to be used for D.C. public school repairs and facility construction. The District of Columbia Public Schools submitted a request to D.C. Office of Budget and Planning for $28.5 million for capital improvements. District of Columbia Public School Chief Operating Officer hired a Chief of Capital Projects to direct the program management, program planning and control, and design review team managers. 
The Authority requested $36.85 million in supplemental funds from Congress for emergency public school facility improvements. Congress declined to provide any additional funds. The District of Columbia Public Schools submitted a revised Long-Range Facilities Master Plan to the D.C. Council for approval. The plan was also submitted to the Congress. The plan included a priority list of 50 schools to receive roof replacement in fiscal year 1997. The priority list changed slightly—Tyler was added to the list of school roof projects to be managed by the District of Columbia Public Schools, and Spingarn no longer appeared on the list of school roof projects to be managed by the General Services Administration. The District of Columbia Public Schools issued a Request for Qualifications to pre-qualify potential roofing contractors. (continued) The Authority received $20 million from the May 28, 1997, general bond proceeds to be used for school repairs and capital improvements. District of Columbia Public Schools recessed for summer vacation. The District of Columbia Public Schools issued an Invitation for Bid and Contract notice seeking a single contractor to perform 15 roof repair projects and 5 boiler/chiller projects. No bids were received. The District of Columbia Public School Chief Operating Officer testified before D.C. Superior Court that there were 47 school roof repair projects scheduled and that some roofs would not be completed before September 20, 1997. The 47 schools listed differed from the priority list included in the April 25, 1997, Long-Range Facilities Master Plan. For example, the 47 school roof repair projects did not indicate that roof repairs would be performed at 13 of the schools on the roof repair list included in the Long-Range Facilities Master Plan, dated April 25, 1997. A District of Columbia Superior Court judge reiterated the June 10, 1994, Order and stated that schools would be closed while roof work was performed. 
The Order also required the District of Columbia Public Schools to submit a plan, by August 18, 1997, to the Superior Court detailing alternative sites for students to report to on September 2, 1997, the first day of the 1997-1998 school year. The District of Columbia Public Schools issued an amendment to the July 1, 1997, Invitation for Bid and Contract notice. The amended Invitation for Bid and Contract notice divided the required construction work into packages. There were six roof repair packages at a total of 48 schools, and two boiler/chiller packages at a total of 16 schools. Contractors were asked to submit bids on one, more, or all project packages. The schools scheduled for roof repairs indicated on the Invitation for Bid and Contract differed somewhat from the schools scheduled for roof repairs indicated on the July 11, 1997, Order. For example, the Invitation for Bid and Contract included roof repair projects at seven schools that were not listed on the July 11, 1997, Order. The District of Columbia Public Schools submitted a request to D.C. Office of Budget and Planning for an additional $20 million for capital improvements. The District of Columbia Public Schools submitted a revised Long-Range Facilities Master Plan to the D.C. Council for approval. The plan included a priority listing of 56 schools to receive roof replacement in fiscal year 1997. The priority list included 13 schools that were not indicated in the July 11, 1997, Court Order and 6 schools that were not on the amended (July 11, 1997) Invitation for Bid and Contract. The District of Columbia Public Schools issued first Notices to Proceed to roofing contractors. 
(continued) The District of Columbia Public Schools submitted a report to the Superior Court stating that there was no contingency plan for relocating students and staff who attend those schools where roof repairs were taking place, and that the plan was to delay the start of the school year until roof repairs were completed (September 22, 1997). The Authority received $36.8 million of Sallie Mae proceeds (from stock warrants) to be used for school repairs and capital improvements. September 22, 1997 District of Columbia public schools opened, commencing the 1997-1998 school year. The Authority received $5 million of Sallie Mae proceeds (from the sale of naming rights) to be used for school repairs and capital improvements. A settlement was reached among Parents United, the Mayor, the Fire Chief, and the District of Columbia Public Schools Chief Executive Officer, which laid the foundation for ensuring that D.C. public schools were free of Fire Code violations and requiring the District of Columbia Public Schools to continue the necessary repairs and capital improvements to the school buildings, as indicated in the Long-Range Facilities Master Plan. The following are GAO’s comments on the District of Columbia Financial Responsibility and Management Assistance Authority’s letter dated February 20, 1998. 1. Our report does not address whether ample funding was available for the emergency school repair program during fiscal year 1997. However, table 1 in the report shows that DCPS had about this same amount of funds ($86.5 million) available for capital projects during the fiscal year. 2. This point is discussed in the Comments and Our Evaluation section of the report. 3. We have augmented our discussion in the Planned Roof Repairs section of the report to refer to the additional $5 million from Sallie Mae. 
The report refers to the Authority’s commitment to provide a minimum percentage of the District’s general fund long-term financing authority (annual bond proceeds) for completion of repairs required by the Long-Range Facilities Master Plan. The following are GAO’s comments on the District of Columbia Public Schools’ letter dated February 17, 1998. 1. This point is discussed in the Comments and Our Evaluation section of the report. 2. We modified this section of the report slightly. Of the 46 schools at which DCPS managed roof work during fiscal year 1997, DCPS received three to five bids for each of 29 schools, two bids for each of 9 schools, and one bid for each of the remaining 8. 3. We modified the report to provide additional information concerning bidder risk associated with the extensive deferred maintenance and the short time frames provided for submitting bids and completing the work. A petroleum compound, dark brown or black in color, used in the manufacture of roofing products. Coarse stone, gravel, slag, etc., used as an underlayer for poured concrete. Asphalt or coal-tar pitch. Sections of wood built into a roof assembly, usually attached above the deck and below the membrane or flashing, used to stiffen the deck around an opening, act as a stop for insulation, support a curb, or to serve as a nailer for attachment of the membrane and/or flashing. A continuous, semiflexible roof covering of laminations, or plies, of saturated or coated felts alternated with layers of bitumen, surfaced with mineral aggregate or asphaltic materials. A continuous strip of flashing forming a triangle with a structural deck and a wall or other vertical surface. A material used as the exterior wall enclosure of a building. A number of columns supporting one side of a roof. Top covering of a wall that is exposed to the weather, usually made of metal, masonry, or stone. It is preferably sloped to shed water back onto the roof. 
Metal strips used to prevent moisture from entering the top edge of roof flashing, as on a chimney or wall. A terminal structure, square or round, rising above a main roof. While generally ornamental, a cupola can provide for ventilation. The molded and projecting horizontal member that crowns a wall. The structural surface to which a roof covering system is applied. The architectural concept of a building as represented by plans, elevations, renderings, and other drawings. The design-build approach gives a single contractor the responsibility for both designing and constructing a project rather than separating the responsibilities among a number of contractors. A conduit that carries runoff water from a scupper, conductor head, or gutter of a building to a lower level, or to the ground or storm water runoff system. An outlet or other device used to collect and direct the flow of runoff water from a roof area. Ethylene Propylene Diene Monomer (rubber roof). A forecast of construction cost based on a detailed analysis of materials and labor. Also referred to as a conceptual estimate or parametric estimate. A structural separation between two building elements that allows free movement without damage to the roofing or waterproofing system. A vertical or steeply sloped roof or trim located at the perimeter of a building. Typically, it is a border for the low-slope roof system that waterproofs the interior portions of the building. Strips of copper, aluminum, galvanized sheet metal, or similar materials used along walls, dormers, valleys, and chimneys to prevent moisture seepage. The procedure in which a controlled amount of water is temporarily retained over a horizontal surface to determine the effectiveness of the waterproofing. Cutting and fitting panes of glass into frames. 
A low profile upward-projecting metal edge flashing with a flange along the roof side, usually formed from sheet or extruded metal, designed to prevent loose gravel from washing off the roof and to provide a finished edge detail for the built-up roofing assembly. A channelled component installed along the downslope perimeter of a roof to carry runoff water from the roof to the drain leaders or downspouts. Materials designed to reduce the flow of heat either into or from a building. Anything constructed of material such as brick, stone, concrete blocks, or ceramic blocks. A roofing bitumen which generally has been rubberized or plasticized to provide greater elasticity, flexibility, and improved working characteristics. A low, retaining wall at the edge of a roof. Usually an upward extension of a building’s exterior curtain wall. In masonry construction, a coat of cement (generally containing dampproofing ingredients) on the face of rough masonry, the earth side of foundation, or basement walls. A triangular face forming the gable of a two-pitched roof. The incline, or slope, of a roof. A flanged metal container placed around a column or other roof penetrating element and filled with flashing cement to seal the area around the penetration. A single layer of organic or inorganic roofing material in a roof membrane or roof system. The practice of removing an existing roof system down to the roof deck and replacing it with a new roofing system. The process of removing deteriorated mortar from an existing masonry joint and troweling new mortar or other filler into the joint. The process of recovering, or tearing off and replacing an existing roof system. Where the rising sides of the roof come together. The highest point of the roof. An assembly of interacting roof structures and components designed to be weatherproof, and normally to insulate the building’s top surface. 
A relatively small raised substrate or structure that directs surface water to drains or a valley; is often constructed like a small hip roof or like a pyramid with a diamond shaped base. An opening cut through the wall of a building through which water can drain from a floor or roof. Roof covering made from asphalt, fiberglass, wood, aluminum, tile, slate, or other water-shedding material. A roof accessory, set over an opening in the roof, designed to admit light. Normally transparent, and mounted on a raised framed curb. A small masonry block laid on the ground below a downspout to carry roof drainage away from a building. See Deck. A strip used to elevate and slope the roof at the perimeter and at the curbs. In traditional project organization, the owner hires the services of a design team and a construction team. The design team is responsible for transmitting owner/user needs in plan documents describing the physical form for the construction team to assemble. Where two roofs coming from different horizontal directions meet and form an internal angle. Roof section broadly extended or projecting at an angle from the main building. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the District of Columbia Public Schools' (DCPS) efforts to repair school roofs during the summer of 1997, focusing on the conflicting information on the availability of funds to pay for the roof work and the cost, including cost per square foot, of the work completed in fiscal year (FY) 1997. GAO noted that: (1) sufficient funding was available to begin roof work when schools were closed for the summer on June 20, 1997; (2) the District's records show that the Financial Responsibility and Management Assistance Authority had about $18 million available in March 1997 for DCPS-managed roof work, with the available amount increasing to about $38 million by June 1997; (3) a series of events preceding the efforts to repair D.C. school roofs contributed to the delayed start; (4) although it was decided that DCPS would manage the majority of this work, DCPS was not prepared to start immediately because it had not completed sufficient planning, such as determining the scope of work on individual projects which would be the basis for seeking bids for that work; (5) a contributing factor to this delay was the almost complete turnover in technical capital project staff during the school year; (6) these problems were compounded by difficulties in securing bids, resulting in DCPS-managed work not starting until the third week of July; (7) DCPS stated that at the time the long-range plan was submitted in February 1997, it had expected to complete roof work by the end of October 1997 but accelerated it in response to a court order that roof work would not be done while classes were in session; (8) consequently, the work was accomplished under a highly compressed schedule; (9) GAO's review showed that DCPS spent about $37 million for roof replacement or repair in FY 1997; (10) this included an extensive amount of work not only on the roofs, but also on adjacent upper portions of the buildings to achieve structurally sound, watertight 
facilities; (11) as a result, the costs were higher than what would have been incurred for roofing work only; (12) considering all of these costs, the average cost per square foot of roof surface replaced or repaired was about $20, with DCPS-managed contracts somewhat higher than those managed by the General Services Administration; (13) insufficient data exist to ascertain with any certainty the added cost associated with the degree of deferred maintenance encountered in this extensive project; (14) years of neglect and inadequate repair and maintenance practices all served to increase costs over what could be expected in well-managed, adequately financed entities; (15) DCPS plans for FY 1998 show additional roof work at 40 more schools at an approximate cost of $35 million; and (16) in addition, DCPS' proposed Capital Improvement Program Plan for fiscal years 1999-2004 indicates that an additional $63 million is anticipated for roof replacement or repairs during this period.
The African continent is the second largest continent in terms of land mass and population, comprising 54 culturally diverse countries, many with distinct histories and identities. The continent is about three times the size of the United States, roughly the size of Argentina, China, India, Kazakhstan, Mexico, and the United States combined (see fig. 1). African countries are politically varied, ranging from dictatorships to emerging democracies. African countries also vary in the types and quantities of natural resources they control and in the size and strength of their economies. For example, the gross domestic product of African countries ranged from about $145 million to $277 billion in 2007, with countries rich in natural resources, such as petroleum and diamonds, generally having larger economies. In comparison, the gross domestic product of the United States was almost $14 trillion in 2007. (Fig. 1 source: World Development Indicators database, World Bank, September 10, 2008.) A safe, reliable aviation system can generate economic benefits both locally and globally. In particular, the remoteness and size of some African countries, coupled with underdeveloped—and sometimes unsafe—road networks, make air transport critical for connecting some African markets to other African markets, the United States, and the rest of the world. Our literature synthesis suggests that safe aviation could increase connectivity and potentially create economic and social benefits for a country. For example, aviation can contribute to sustainable development by facilitating tourism and trade. Such development, in turn, generates economic growth, provides jobs, and can improve living standards, alleviate poverty, contribute to social stability, and increase tax revenues. 
Similarly, according to the literature we reviewed, when a developing country creates additional airline connections with other countries, it may derive potential economic benefits in the form of increased exports, as well as tourism and business opportunities. For example, Africa is a growing export market for U.S.-manufactured products, including aircraft and air navigation systems. According to DOT reports, several African countries have stated their intention to purchase aviation security equipment based on the same technologies as equipment donated to them by the United States. The literature we reviewed also mentions other potential benefits to improving aviation safety in Africa, including the following: Improved safety of the global aviation system. Aviation is a global enterprise, and maintaining a safe system is the foundation upon which the entire global aviation system network operates. One country’s failure to comply with international aviation safety standards could have disastrous consequences for other countries’ air carriers and passengers. Improved U.S. national security. Civil aircraft traveling from some African countries to other parts of the world potentially pose a threat to U.S. national security because adequate safety and security measures are not in place in those countries. In particular, African countries with weak aviation oversight are more likely to have airports that act as transit points for illicit activities, such as arms transfers and trafficking, encouraging criminals to establish organizational bases in these areas. Efforts have been made to improve connectivity as a means of creating economic benefits for both the United States and African countries, as well as for pursuing strategic and foreign policy interests. For example, in 2000, Congress identified Africa as a strategic trading partner under the African Growth and Opportunity Act (AGOA). 
AGOA provides duty-free access for over 6,000 products from 40 Sub-Saharan African countries and has served as the central U.S. trade and investment policy toward Sub-Saharan Africa. AGOA is aimed at promoting open markets, expanding U.S.-Africa trade and investment, stimulating economic growth, and facilitating Sub-Saharan Africa’s integration into the global economy. Under AGOA, U.S. trade with Africa has grown substantially. For example, U.S. imports under AGOA have more than tripled for apparel—from $359.4 million in 2001 to $1.3 billion in 2007—while U.S. exports to Sub-Saharan Africa have more than doubled from $7 billion in 2001 to over $14.4 billion in 2007. In addition, the United States has been engaged in various strategic and foreign policy interests in Africa. For example, the Department of Defense (DOD) maintains a small military presence in Djibouti to provide a regional security presence related to counterterrorism for several Horn of Africa and East African countries. Similarly, the Trans-Sahara Counterterrorism Partnership is a multiagency U.S. effort to provide support to nine north and western African countries relating to diplomacy, development assistance, and military activities aimed at strengthening country and regional counterterrorism capabilities. DOD’s plans to locate the U.S. Africa Command (AFRICOM) on the continent are under review, and a decision on whether and where to locate it will not be made until the end of 2011. However, efforts to improve connectivity and commerce between the United States and Africa have been hindered, in part, by the overall poor condition of African nations’ aviation systems. The African continent has historically had a poor aviation safety record, compared with other regions of the world. For instance, the annual average accident rate per 1 million flights for the African region over the last 4 years is about 15 times greater than for North America (see fig. 2). 
Moreover, according to federal and other officials, the accident rate in Africa is likely to be higher than reported because accidents involving small aircraft are underreported. For example, according to ICAO, on average, about 70 percent of accidents in Africa were not reported from 1990 through 2006. However, the accident rate among African countries varies greatly. In particular, a few African countries have a much higher accident rate than other African countries and contribute disproportionately to the continent’s overall accident rate. For example, over half the total number of aviation accidents in Africa over the last 10 years occurred in 4 of the continent’s 54 countries. ICAO is the international body that seeks to harmonize global aviation standards so that worldwide civil aviation can benefit from a seamless air transportation network. ICAO members, known as contracting states, including the United States, are not legally bound to act in accordance with ICAO standards and recommended practices. Rather, contracting states decide whether to transform the standards and recommended practices into national laws or regulations. In some cases, contracting states deviate from some of the ICAO standards and recommended practices, or do not implement some of them at all when they find it impracticable to do so. Contracting states are also responsible for the establishment of a regulatory framework to provide safety oversight for their civil aviation systems, and for developing the required aviation infrastructure necessary to maintain a safe, secure, and sustainable system. FAA is responsible for regulating the safety of civil aviation in the United States. 
FAA also works to advance the nation’s leadership on the international level by engaging in dialogue with aviation counterparts across the world, collaborating with ICAO, providing technical assistance and training, working to harmonize global standards toward developing a seamless air transportation network, and sharing expertise and technologies. In 1992, FAA established the International Aviation Safety Assessments (IASA) program based on its own and congressional concerns that the level of safety oversight being applied by other civil aviation authorities with air service to the United States was inadequate and not in compliance with international safety standards. The IASA program examines the ability of foreign countries, not individual air carriers, to adhere to international standards and recommended practices for aircraft operations and maintenance established by ICAO. FAA generally conducts a safety assessment when a foreign air carrier files an application with DOT requesting to initiate new air service to the United States, or take part in a code-share arrangement with U.S. airlines. FAA also conducts a safety assessment when reliable information indicates that another country with operators providing service to the United States has serious aviation oversight deficiencies. In conducting these assessments, FAA meets with officials from the foreign civil aviation authority and foreign air carrier and reviews pertinent records. FAA uses a two-tier rating system for the results of the assessments: Category 1 for countries that comply with ICAO standards and Category 2 for countries that do not. FAA uses this determination as part of its basis for recommending whether or not DOT should allow air carriers overseen by certain foreign civil aviation authorities to initiate, continue, or expand air service to the United States. 
In particular, air carriers in foreign countries without a Category 1 rating cannot initiate or continue service to the United States, take part in code-share arrangements with U.S. air carriers, or effectively increase air traffic with the United States. Currently, five African countries have a Category 1 rating: Cape Verde, Egypt, Ethiopia, Morocco, and South Africa. Partly because of the small number of Category 1 countries, direct connections between Africa and the United States are currently limited. In fact, only one U.S. commercial airline provides direct passenger service to the continent as of June 2009. Furthermore, there are only eight direct connections between U.S. cities and an African city, and three of these connections are provided solely by foreign air carriers (see fig. 3). According to our literature synthesis and U.S. and African officials we interviewed, the major challenge in improving aviation safety is that the highest levels of government in some African nations have not made it a priority. We have previously identified leadership support as critical to fundamental organizational changes—such as those required to prioritize aviation safety in some African countries. According to U.S. federal officials and ICAO representatives, making aviation a governmental priority is critical to the successful transformation of African civil aviation authorities. In fact, we found that in African countries that have succeeded in improving aviation safety and generating economic benefits, like Cape Verde (see table 1), top leadership’s clear and personal involvement has set the direction for civil aviation officials to act upon. However, according to U.S. government and African officials, many political leaders in African countries have not prioritized aviation safety, in part because of more pressing priorities, such as poverty, health care, and basic nutrition. 
Some African officials told us that aviation is seen as a luxury for the affluent in African society, and these perceptions pressure governmental leaders to give lower priority to improving aviation safety and to use resources for issues that affect a larger segment of the African population. These officials further said that African political leaders often do not realize the potential benefits, such as increased tourism, that can flow from improved aviation safety. The lack of priority for improving safety may create or exacerbate other challenges frequently identified in the literature we reviewed and by officials we interviewed, including weak aviation regulatory systems, a lack of resources, inadequate infrastructure, a lack of human capital expertise, and a lack of training capacity. These challenges are not mutually exclusive, since most are affected by or contribute to the other challenges. Weak aviation regulatory systems. ICAO recommends that civil aviation authorities be created as politically and financially independent bodies. Accordingly, an authority should be independently funded and (1) have its own financial resources, (2) have the authority needed to issue aviation standards and regulations and conduct safety oversight of air operators, and (3) establish requirements for the certification of air operators. These are among the critical elements of a safety oversight system designed to ensure the implementation of ICAO standards and recommended practices. According to DOT officials, however, many African civil aviation authorities do not have sufficient regulatory autonomy or stable and reliable revenue sources to comply with ICAO standards. For example, some officials we interviewed stated that some African civil aviation authorities’ budgets are linked to their countries’ general treasuries or transportation ministries, making the authorities susceptible to political interference. 
Moreover, because they are not independent entities, some civil aviation authorities can have their decisions overturned by higher-ranking government officials. For example, according to several officials we interviewed, a decision to ground two aircraft because of safety concerns in one African country resulted in the firing of the civil aviation authority head. According to representatives from the United Nations’ World Food Program, a program that uses the aviation system to deliver humanitarian aid, these weak regulatory systems allow unsafe aviation practices—such as certifying outdated and poorly maintained aircraft in some African countries—to go unchecked. According to DOD, the ability of each African country to have a civil aviation authority that meets international standards of oversight is critical for the safety of DOD’s aviation operations on the continent and to mission success. Lack of resources. Some African countries lack sufficient revenues to improve the safety of their aviation systems. A World Bank official told us that only a few countries in Sub-Saharan Africa have an aviation market with sufficient passenger traffic to generate sustained funding for aviation safety improvements. Furthermore, aviation officials from all four of the African countries we visited told us that obtaining adequate funding to properly maintain their aviation system was a major challenge. For example, according to Tanzanian civil aviation officials, they have not been able to make needed aviation safety improvements because their authority does not generate sufficient revenue from air traffic. Moreover, revenue generated through such mechanisms as landing fees is not always dedicated to the aviation system in some African countries; rather, the governments use this revenue for other priorities. 
Finally, because of the low priority placed on improving aviation safety in some African countries, African aviation officials told us that it can be difficult to secure additional government funding for safety improvements. Inadequate infrastructure. Partly because of this lack of resources, the aviation infrastructure in many African countries is insufficient, outdated, or in otherwise poor condition, which can lead to safety hazards. For example, as discussed previously, airspace in some regions of Africa is not controlled by air navigation systems. The lack of such technology increases the potential for midair collisions, affecting both civilian and military aviation. For example, DOD officials told us that the lack of air navigation systems affects military aviation operations, such as carrying out missions and conducting training exercises, on the continent. To reduce the risk of collisions, officials from one African airline said they fly to certain regions only during daytime hours. African airports also sometimes lack basic infrastructure, such as radar systems, adequate runway surfaces, and other navigation facilities, or the infrastructure they have is obsolete. For example, according to IATA, at many African airports, airfield lighting is not compliant with international aviation safety standards. Noncompliant airfield lighting contributed to a crash in Nigeria in December 2005 that killed 108 passengers. The runway lights were off, in part because the airport lacked the funds and resources to operate generators for a stable power supply. According to Tanzanian airport officials, maintaining and improving airport infrastructure is the biggest challenge they face in attempting to improve their country’s aviation safety. Lack of human capital expertise. According to several U.S. 
and African officials, the lack of qualified aviation personnel, such as pilots, air traffic controllers, maintenance technicians, and flight inspectors, has been a major challenge for African countries. These officials stated that many African civil aviation authorities and air carriers find it difficult to attract and retain qualified personnel, primarily because of the low wages they pay. This problem becomes especially acute for some African civil aviation authorities trying to retain qualified inspectors, because their salaries are tied to the governmental pay structure, which is not competitive with the private sector. According to U.S. and African officials, aviation personnel leave African civil aviation authorities and air carriers for more lucrative positions, frequently with foreign air carriers in the Middle East and Asia, after gaining a few years’ experience in Africa—a phenomenon these officials referred to as “brain drain.” As a result, critical aviation positions, such as airworthiness inspection positions, go unfilled, leaving the country noncompliant with international aviation safety standards. We and others have identified the importance of a competent aviation inspector workforce to improve safety and compliance with safety standards. Lack of training capacity. Improving aviation safety in Africa has been hindered by the lack of training capacity in some African countries. With inadequate financial resources and competing primary needs, many African countries cannot sufficiently fund training for personnel in technical, management, and leadership disciplines. Two of the four countries we visited had training centers to train aviation personnel in various disciplines, such as air traffic control, flight operations, and airport security. However, the training center officials said they lacked important training capacity because of funding constraints. 
For example, officials said the centers had insufficient numbers of teachers and classrooms and lacked up-to-date training materials and equipment. Because they lack training capacity, many African civil aviation authorities send personnel to other countries, including the United States, for training, which can be costly and time-consuming. DOT’s SSFA program has been the principal U.S. aviation safety assistance program for African countries since its inception in 1998 as a presidential initiative. The program was established to promote sustainable improvements in aviation safety and security in Africa and to foster aviation growth between the United States and Africa. The program was designated in 2003 as the vehicle to support the goals of the 2003 East Africa Counterterrorism presidential initiative to advance the administration’s regional security strategy. According to DOT officials, the program was also incorporated into the administration’s strategy for working with Sub-Saharan African countries in 2007. DOT’s 2008 strategic plan describes the SSFA program as advancing the Department’s mission and objective of international outreach and global connectivity. Furthermore, according to FAA’s business plan, the program serves to coordinate and advance FAA’s international leadership objectives and activities in Africa. The SSFA program has three main goals: (1) increase the number of Sub-Saharan African countries that meet the ICAO aviation safety standards, (2) improve aviation security at a number of African airports, and (3) improve regional air navigation services in Africa by using modern satellite-based navigation aids and modern communications technology. DOT works to achieve these goals by providing training and technical assistance to the participating countries, including direct assistance from FAA. For example, DOT has provided training to over 1,200 aviation personnel from Africa through the SSFA program. 
Similarly, DOT and FAA collaborated with ICAO to formally develop model civil aviation regulations to provide countries participating in SSFA with a cohesive set of guidance materials to use in developing their own set of technical regulations and guidance materials. DOT’s Office of the Secretary manages the SSFA program, including identifying the program’s objectives, activities, and project time frames, as well as documenting the program’s results. FAA provides the technical expertise and other in-kind services to participating African countries, especially in technical areas such as safety oversight. For all participating SSFA countries, DOT works with FAA to conduct a baseline safety and security assessment, develop an action plan to remedy the identified deficiencies, and outline an assistance plan to guide the country’s efforts to address its aviation safety and security issues. The participating SSFA countries bear the primary responsibility for funding the improvements recommended by DOT. Currently, 10 African countries participate in the SSFA program (see fig. 4). A recent focus of the SSFA program is encouraging African countries to take a regional approach to address aviation safety challenges. DOT officials told us that a regional approach to safety allows countries to address resource, human capital, and training challenges by pooling and leveraging expertise and sharing costs. For example, rather than each country establishing individual training centers, countries can band together to establish regional training centers that could serve aviation personnel from all of the participating countries. Such an approach allows the countries to provide the necessary training, but with less money and fewer teachers than they would need to establish multiple, country-specific training centers. 
In 2007, as part of the SSFA program’s regionalism effort, three East African Community (EAC) countries (Kenya, Tanzania, and Uganda) established the first operational regional safety and security oversight organization in Africa—the Civil Aviation Safety and Security Oversight Agency (CASSOA)—to be responsible for, among other things, ensuring the development of a safe and secure civil aviation system, including uniform operating regulations that meet the international standards and standardized procedures for licensing, approving, certificating, and supervising civil aviation activities. CASSOA was fashioned after aspects of ICAO’s regional safety oversight organizations, as discussed later in this report. According to FAA officials, one of the main focuses of CASSOA will be to assist in developing a pool of qualified, transnational inspectors who can be used in any of the EAC countries as needed. In addition to the SSFA program, DOT has other efforts to assist foreign countries, including African countries, in developing their civil aviation systems and improving aviation safety. In particular, FAA provides aviation safety technical assistance and training to countries across the globe. According to FAA, a key component of the agency’s technical assistance efforts is its technical reviews. A technical review is an evaluation of a country’s compliance with ICAO standards for aviation safety oversight. In these reviews, FAA technical teams apply the same criteria used in an IASA program audit, identify areas of noncompliance, and work with the country to develop an action plan to implement the proposed corrective actions. The goal of the technical review is to provide a baseline that helps the country eventually meet ICAO standards and, potentially, IASA requirements. According to DOT, FAA has conducted technical reviews of seven African civil aviation authorities, in both SSFA and non-SSFA countries. 
For example, in July 2007, FAA conducted a technical review of the safety oversight capability of the civil aviation authority in Nigeria, a non-SSFA country. FAA also provides aviation-related training to the international community and supports ICAO contracting states and regional aviation organizations. For example, in July 2007, FAA helped South Africa review its aviation law and regulations prior to an IASA reassessment scheduled for later that year. DOT and FAA officials told us that resources for the SSFA program and other technical assistance efforts directed toward Africa have been unpredictable and constrained since the program began, hampering their efforts to carry out its objectives. The State Department provides funding for the program from one of its appropriations—the Economic Support Fund Account—and funding for the program has ranged from $8.5 million from the appropriation for fiscal year 2003 to zero from the appropriations for fiscal years 2008 and 2009 (see table 2). Funds that are provided for SSFA are available for obligation for 2 fiscal years, and in fiscal year 2008, DOT obligated funds carried over from fiscal year 2007, according to DOT. Because of a continuing resolution, funds for SSFA remained available for obligation into fiscal year 2009, and a DOT official estimated that such funds could sustain the program for the remainder of the fiscal year. However, to stretch the resources through the end of the fiscal year, DOT officials said they have limited SSFA activities, focusing only on countries that are making tangible progress in improving safety and on regional initiatives. Other planned activities were delayed or canceled. For example, because of limited funding, according to DOT officials, the SSFA program was unable to keep aviation safety personnel in Africa to provide on-site guidance and technical assistance. 
According to these officials, such on-site guidance and technical assistance would help African countries eliminate errors in implementing or interpreting aviation safety requirements and, over time, would reduce the amount of time spent working with them to meet international aviation safety standards. In addition, DOT officials said that one funding priority is helping EAC countries establish the newly created regional oversight organization. According to a State Department official, the fiscal year 2010 congressional budget justification for the department includes $2 million for the program. In addition to budgetary constraints, DOT and FAA officials told us that they have limited staff resources to work on aviation safety issues in Africa. Most of the DOT and FAA staff working on aviation safety in Africa also have other responsibilities that limit the amount of time they can spend on the SSFA program and other African initiatives. All of the African governmental officials we spoke with were appreciative of the technical assistance and training provided under the SSFA program, but many said additional assistance for implementing the technical advice provided by FAA would be very helpful. For example, EAC headquarters officials said the technical assistance through SSFA has helped EAC harmonize the civil aviation regulations for each country. However, they said the lack of funding and expertise will make the next step in the process—implementing regulations in each member country—difficult. In addition to the training and technical assistance provided by DOT and FAA, USTDA and MCC have provided funding for aviation-related projects, including safety improvements, in Africa. Neither of these agencies has an aviation-related mission; rather, their missions focus on promoting economic development in countries around the world. 
However, given the potential economic benefits associated with improved aviation systems, USTDA and MCC, in total, have funded over two dozen aviation-related projects in various African countries, including the following: Over the past 10 years, according to USTDA, the agency has provided over $6.1 million in funding for 26 aviation-sector projects throughout Sub-Saharan Africa. These projects typically focus on providing technical assistance or conducting feasibility studies for African governments or private-sector entities. For example, USTDA provided $460,000 to the Malawian Ministry of Transport and Public Works to assist in establishing an autonomous civil aviation authority with a supportive legal and regulatory framework and adequate institutional capabilities. USTDA has also funded aviation projects in several African countries in an effort to strengthen regional air traffic management and communications structures. For example, USTDA has provided about $1.7 million for conducting feasibility studies for air traffic management development and for modernizing three regional groups’ upper airspace. The benefits expected from these efforts include improved air traffic safety and regional coordination, and increased revenues for the member countries. In addition, USTDA has sponsored training and seminars. For example, in 2002, USTDA provided about $84,000 for an orientation visit in which 18 delegates from nine African countries traveled to Washington, D.C., to meet with government and private-sector representatives on project-specific opportunities in Africa, and on the role and development of air cargo transportation in AGOA. Also, in November 2008, USTDA and DOT partnered to sponsor a workshop in Washington, D.C., to bring together ministers and senior officials from eastern African countries, U.S. 
government officials, and private-sector representatives to discuss transportation needs and regional solutions to transportation infrastructure challenges in East Africa. MCC has funded aviation-related projects for Mali and Tanzania. MCC provides its assistance through compact agreements, or multiyear agreements between MCC and an eligible country. Compact agreements were signed with Mali in November 2006 and with Tanzania in February 2008. Under these agreements, MCC has provided about $183 million and $7 million, respectively, for airport infrastructure projects. U.S. efforts on the continent have not consistently been coordinated. The SSFA program began as a collaborative effort between DOT and other U.S. agencies. Throughout the program’s existence, DOT has pursued collaborative efforts, such as regular briefings to the State Department on program developments and formal and ad hoc discussions and meetings with USTDA and MCC. Currently, multiple federal agencies are working to improve aviation safety or are funding aviation-related projects in Africa. However, these agencies have distinct missions that do not focus specifically on improving aviation safety; consequently, their efforts on the continent have different purposes but nonetheless intersect. Recognizing the interrelatedness of their efforts, DOT has used memorandums of agreement with several federal agencies to coordinate aviation-related efforts in Africa to prevent duplication and to ensure that federal funding is put to best use in the aviation sector. DOT officials told us these memorandums of agreement are mechanisms to provide recommendations based on international standards and coordination with SSFA activities. In addition, USTDA and FAA jointly formed an Interagency Committee on International Aviation Safety and Security in 2004 to coordinate technical assistance in the areas of aviation safety and security in developing countries. 
The committee was formed to strengthen the impact of U.S. aviation and security assistance through a strategic, governmentwide focus on priority projects, and to target U.S. assistance to those countries that are committed to progress and capable of both improving and maintaining their safety and security performance. These mechanisms have not consistently worked as intended. For example, MCC and DOT signed a memorandum of understanding to ensure coordination on related projects. However, circumstances surrounding the MCC aviation project in Mali demonstrate a need for improved coordination between the two agencies. According to FAA officials, MCC did not have prior consultations with them on MCC’s aviation project in Mali, even though FAA was actively working with Mali on aviation safety issues. Rather, DOT and FAA officials said they learned about MCC’s project through an MCC contractor. In contrast, MCC officials told us that they did coordinate with DOT on the Mali project, noting that DOT officials attended several meetings held prior to the signing of the compact with Mali in which the compact was discussed. DOT and FAA officials told us that increased collaboration is needed among federal agencies providing aviation-related assistance to Africa to leverage limited resources and minimize duplication of effort. The officials pointed out that in some instances other agencies and organizations that provide funding for aviation infrastructure, technical assistance, and training projects may not have the aviation expertise needed to determine whether the projects meet international aviation safety standards. As a result, investments provided to fund projects that do not meet international aviation safety standards may not allow African countries to reap the potential economic benefits associated with enhancing air connectivity with the United States. 
In addition, we have previously reported on the importance of coordinating federal efforts, especially when these efforts target the same population, to prevent duplication and fragmentation of effort. This potential for overlap and fragmentation underscores how important it is for the federal government to develop the capacity to more effectively coordinate crosscutting program efforts. Our work also indicates that coordinating crosscutting programs is a persistent challenge for executive branch agencies, and in addressing these challenges, agencies will need to overcome barriers, such as disparate missions and other incompatibilities. Agencies can enhance and sustain their collaborative efforts by developing a strategy that includes necessary elements for a collaborative working relationship, such as defining and articulating a common outcome; identifying and addressing needs by leveraging resources; agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; and developing mechanisms to monitor, evaluate, and report on results. The international community also has taken steps to improve aviation safety in Africa. Addressing issues on the continent has been elevated in international aviation organizations, institutions that represent sovereign nations or foreign governments, and other international organizations. Many of these steps, such as improving aviation oversight, increasing training, and improving infrastructure, address the challenges involved in improving aviation safety in Africa. The following are among the international efforts most frequently mentioned by officials we interviewed. International Civil Aviation Organization has strengthened its focus on the safety oversight capacity of African countries. 
ICAO implemented the Universal Safety Oversight Audit Program in 1999 as an auditing tool to determine contracting states’ capability for safety oversight by assessing the states’ implementation of a safety oversight system and identifying areas of concern. Findings from ICAO’s audits revealed that a number of African countries lack the resources and regulatory framework necessary to fulfill their safety oversight responsibilities, and vary widely in their ability to provide safety oversight. In 2007, audits of 27 African countries showed that, on average, these countries were not effectively implementing over half of ICAO’s eight critical elements of a safety oversight system, with the proportion of standards effectively implemented ranging from about 9 percent to about 94 percent. As a result, ICAO has been involved in several initiatives to help African countries improve aviation safety. The first major initiative, the Comprehensive Regional Implementation Plan for Aviation Safety in Africa (AFI Plan), was developed in 2007 to address aviation safety concerns and support African countries in meeting their international obligations for safety oversight. The plan was intended to coordinate and lead all of ICAO’s efforts for addressing aviation safety issues in Africa with clearly defined objectives, outputs, activities, and metrics. As with the SSFA program, ICAO has worked with African nations to share aviation oversight responsibilities through regional organizations under its Cooperative Development of Operational Safety and Continuing Airworthiness Program (COSCAP). Under this program, African countries have begun to consider the benefits of coordinating aviation oversight responsibilities to enhance the safety of air transport operations in their respective regions. For example, eight countries in western Africa formed the West Africa Economic and Monetary Union COSCAP. 
This COSCAP established a cooperative arrangement for the member countries to provide collaborative safety oversight for the subregion to enhance the safety and efficiency of air transport. According to literature sources and ICAO officials, these regional organizations will enhance the ability of civil aviation authorities in Africa to provide safety oversight by addressing resource, human capital, and training challenges. ICAO has also created a database for information on aviation safety and security assistance provided to African countries by contracting states. The purpose of this database is to facilitate the coordination of assistance in order to better leverage limited resources. According to ICAO officials, assistance provided to African countries to improve aviation safety is largely uncoordinated, creating the potential for efforts that are duplicative or work at cross purposes. Like U.S. officials, ICAO officials have noted that from an international perspective, many countries and organizations are eager to support aviation safety efforts in African nations, and thus offer various forms of assistance, including funding. Furthermore, the officials noted that with limited resources, African nations have little incentive to turn away assistance from donor countries even if it overlaps with assistance from another country. International Air Transport Association provides aviation operational safety audit tools and support for members. The IATA Operational Safety Audit (IOSA) program was initiated in 2001 and is an evaluation system designed to assess the operational management and control systems of an airline. Starting in 2008, IATA required that its members pass an IOSA audit as a condition of membership. To help its members identify operational gaps when preparing for safety audits, IATA developed a technical assistance program for member airlines, including African airlines. 
Nigeria affords an example of an African country’s participation in ICAO and IATA programs (see table 3). IATA also provided $3.7 million to initiate the Implementation Program for Safe Operations in Africa, which is designed to improve aviation safety by providing African airlines with access to IATA’s Flight Data Analysis tool. This tool monitors and collects data from airplanes, allowing airline officials to analyze data from actual flights to improve procedures, monitor compliance, and identify trends for aircraft maintenance. The initiative gives up to 30 African airlines free access to the Flight Data Analysis tool for 3 years. IATA has also been involved in several efforts to improve airport infrastructure as a means of improving aviation safety in Africa. For example, IATA addresses airport deficiencies by performing on-site visits and bringing relevant reports to the attention of the local and national authorities. IATA also regularly organizes technical missions to African countries. On these missions, IATA conducts airport operations assessments and discusses issues of common interest with the civil aviation and airport authorities, including infrastructure deficiencies, priorities for remedial action, possibilities for cooperation between IATA and the authorities, and future development plans. Eleven technical missions were held in Africa in 2007. The European Union (EU) publishes a list of banned airlines to encourage airlines to improve safety. The list identifies airlines that are restricted from operating in the EU because they are deemed to be out of compliance with international aviation safety standards. In 2005, the EU developed the list of banned airlines in response to several fatal aircraft crashes in 2004 and 2005. 
European Commission officials told us that the list of banned air carriers is both a preventive and a dissuasive measure—in particular, the threat of being placed on the list encourages airlines to take the measures necessary to improve safety within the shortest possible time. In November 2008, the EU declared that 168 air carriers—100 of which are from African countries—were noncompliant with international aviation safety standards and banned them from operating at EU members’ airports. Unlike FAA’s IASA program, which focuses on foreign countries’ aviation regulatory framework, the EU’s approach primarily focuses on the operational safety of individual airlines. However, the EU may ban all air carriers from a particular country if it finds systemic safety deficiencies on the part of air carriers certified by that country’s civil aviation authority. For instance, the November 2008 list included all air carriers certified in the Democratic Republic of Congo, Equatorial Guinea, Sierra Leone, Liberia, and Swaziland because previous safety audits had indicated serious deficiencies in the capability of the civil aviation authorities of these countries to perform their air safety oversight responsibilities. According to European Commission officials, when an operating ban has been imposed on an air carrier, the European Commission provides technical assistance to the air carrier and coordinates with the respective civil aviation authority to remedy the deficiencies that resulted in the operational ban. In April 2009, the European Commission and the African Union Commission held an aviation conference in Namibia to address the critical issue of aviation safety in Africa, among other issues. An outgrowth of this conference was the creation of the Common Strategic Framework and Action Plan, which details areas of cooperation and agreement for permanent strategic dialogue in aviation matters. 
In the area of aviation safety, the main goals are to (1) significantly reduce accident rates in Africa, (2) reduce the average rates of nonconformity of African states for compliance with ICAO standards and recommended practices, and (3) reduce the number of African airlines affected by the EU list of banned airlines. World Bank investments address aviation infrastructure challenges in Africa. The World Bank and the Group of Eight, or G-8, countries have focused their efforts on the continent to support economic development in African countries, with goals that extend beyond humanitarian relief; promoting development across Africa has also become a global security issue. The World Bank spends about $600 million annually on aviation projects in Africa. Much of this funding is used for specific infrastructure improvement projects, such as runway construction and air traffic control improvements. For example, in 2007, the World Bank provided international development grants of about $151 million for 23 countries for the ongoing development of a regional air transport program, including about $47 million to Nigeria to help finance the modernization of safety oversight bodies and airport facilities. World Food Program implemented requirements for contracting with African air carriers. The World Food Program implemented an aviation safety program in 2004, which consists of registering, evaluating, and monitoring contract air carriers used to carry out its humanitarian efforts. The program was developed in response to a series of fatal crashes in Africa involving World Food Program personnel. According to World Food Program officials, the safety program holds contractors to high standards and has helped to improve the safety practices of small African air carriers. AviAssist Foundation provides assistance to African countries to improve aviation safety. 
The AviAssist Foundation identifies safety deficiencies, analyzes their causes, and works with African countries to find practical solutions and secure funding for making necessary improvements. AviAssist also works to promote aviation safety through training events, workshops, and outreach. For example, AviAssist conducted an information session for government and aviation personnel in Zambia in November 2008 to help them prepare for their upcoming ICAO audit. In addition, AviAssist is working with the Flight Safety Foundation to develop plain-language informational documents on countries’ international responsibility for aviation safety and the role of a civil aviation authority. According to AviAssist officials, such information is needed to help increase political leaders’ awareness of the importance of aviation safety. A little more than 10 years have passed since the SSFA program was launched in an attempt to bridge the United States and Africa via air transport by assisting African countries in improving aviation safety. U.S. and African officials attribute important safety advancements in Africa over this period of time—such as the establishment of a regional regulatory organization in East Africa—directly to this program. The program is also of strategic importance to DOT, helping it reach out to the international community and increasing global connectivity. Furthermore, the program has been considered strategically important to U.S. foreign policy interests. However, funding for the program has been inconsistent, and the future of the SSFA program is uncertain because of resource constraints. Given this uncertainty, it seems appropriate for DOT, FAA, and the Department of State to reassess the government’s ability to achieve the program’s goals in view of the level of resources being provided. In addition, better interagency coordination through DOT for funding air transportation-related activities in Africa would improve U.S. 
efforts to assist African countries not only by preventing duplication of effort, but also by establishing a more comprehensive strategy for achieving common goals and objectives. Several U.S. federal agencies are involved in funding aviation-related projects in African countries, but this assistance is inconsistently coordinated. Such lack of coordination can lead to duplication of effort and the potential allocation of scarce resources for unnecessary and unwarranted projects. It also can prevent agencies from leveraging resources and expertise across government and optimizing the impact of their efforts. While DOT has been involved in some of these aviation safety-related projects, the federal agencies have not collaborated consistently, partly because the other agencies do not focus specifically on improving aviation safety. The Interagency Committee on International Aviation Safety and Security, formed by USTDA and FAA, could potentially serve as a mechanism for developing a strategy to coordinate agencies’ resources for aviation-related projects in Africa and to assist DOT in accomplishing the SSFA program’s goals. By leading collaborative efforts, DOT can share expertise and provide strategic direction for aviation projects in Africa, especially through the SSFA program, helping to ensure that the U.S. agency with the greatest aviation expertise and technical capabilities has a leadership role in activities related to U.S. funding of aviation safety-related efforts in Africa. Furthermore, by encouraging coordination and collaboration, DOT may be able to work with all agencies involved to more consistently focus cumulative efforts on deliverable targets, leverage resources, and achieve tangible results. 
We recommend that the Secretary of Transportation take the following two actions:

Lead a collaborative effort with the Administrator of FAA and the Secretary of State to reassess the SSFA program’s goals and identify the level of budgetary and human capital resources necessary to achieve those goals, including identifying the implications of reduced resource levels on DOT’s ability to achieve the program’s goals.

Develop a comprehensive strategy to lead efforts to coordinate the governmentwide resources available to accomplish the SSFA program’s goals.

We provided a draft of this report to DOT, the State Department, DOD, USAID, USTDA, and MCC for review and comment. DOT and USTDA generally agreed with the report’s findings, conclusions, and recommendations and provided technical clarifications, which we incorporated, as appropriate. Based on DOT’s comments, we clarified the intent of the recommendations to provide a better focus on the desired results to be achieved. MCC provided clarifications to information related to MCC in the report, which we incorporated as appropriate, but offered no opinion on the larger content of the report, including its findings, conclusions, or recommendations. The State Department, DOD, and USAID did not comment on the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies of this report to other interested congressional committees and members; the Secretary of Transportation; the Secretary of State; the Secretary of Defense; the Administrator, U.S. Agency for International Development; the Director, U.S. Trade and Development Agency; the Chief Executive Officer, Millennium Challenge Corporation; the Director, Office of Management and Budget; and others. The report is also available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To address our objectives, we reviewed and synthesized reports and studies on U.S. efforts to improve aviation safety in Africa, the Department of Transportation’s (DOT) Safe Skies for Africa (SSFA) program, and Africa’s aviation markets and safety records. Specifically, we reviewed GAO and Congressional Research Service reports that included general background information on a variety of related issues on the African continent, such as the safety and security of foreign airports and U.S. airlines’ code-share partnerships with foreign carriers. We searched databases, such as ProQuest, Nexis, and TRIS, for information on the SSFA program, U.S. trade and investment interests in Africa, challenges to improving aviation safety in Africa, and comparable international aviation efforts. Furthermore, we reviewed SSFA program documentation that included information on the program’s selection criteria, eligibility, objectives and goals, accomplishments, and funding. We also reviewed the DOT Strategic Plan for Fiscal Years 2006–2011, the Federal Aviation Administration’s (FAA) Flight Plan 2008–2012, and FAA’s International Aviation Business Plan Fiscal Year 2009. In addition, we reviewed FAA guidance on the agency’s International Aviation Safety Assessment and Code-Share Safety programs. We also reviewed documentation and reports on efforts to improve aviation safety in African countries from other U.S. agencies, such as the Department of State, Millennium Challenge Corporation, and U.S. Trade and Development Agency. For U.S. trade and investment policies for Africa, we reviewed reports from the U.S. 
Trade Representative on the implementation of the African Growth and Opportunity Act of 2000. To examine international efforts to improve aviation safety in Africa, we reviewed published reports, documentation, and regulations from the European Commission on its list of banned carriers, and aviation safety plans, reports, and flight statistics from the International Civil Aviation Organization (ICAO) and International Air Transport Association (IATA). In addition to reviewing program documentation and published literature, we conducted semistructured interviews with department-level officials from U.S. federal agencies and representatives of international organizations, trade group associations, and other industry stakeholders involved with aviation safety issues in Africa. A list of these agencies and organizations follows:

Department of Transportation
Federal Aviation Administration
Department of Defense
Department of State
Millennium Challenge Corporation
National Transportation Safety Board
U.S. Agency for International Development
U.S. Trade and Development Agency
U.S. Trade Representative for Africa
International Civil Aviation Organization
World Bank
European Commission
European Aviation Safety Agency
MacArthur Foundation
Air Transport Association
American Association of Airport Executives
Flight Safety Foundation
International Air Transport Association
International Federation of Air Traffic Controllers Association
Airbus
Boeing
Continental Airlines
Delta Airlines
Honeywell International, Inc.

To obtain additional information on aviation safety efforts in Africa, we conducted site visits to four selected African countries. To identify the African countries to visit, we reviewed published research on U.S. efforts to improve aviation safety in Africa, comparable international efforts, Africa’s aviation markets and safety record, and DOT and FAA documentation on the SSFA program. 
We used the following criteria to ensure variation in the countries chosen for site visits: (1) countries’ participation in the SSFA program; (2) countries that have an FAA Category 1 rating, currently have direct flights to the United States, and are not currently participating in the SSFA program; (3) countries that have achieved an FAA Category 1 rating as a result of the SSFA program; (4) countries that are not involved with the SSFA program or do not have an FAA Category 1 rating, and have major challenges and a poor safety record for aviation safety, with consideration to geographic dispersion; and (5) countries that are involved in positive efforts to meet international aviation safety standards and improve aviation safety as a result of the SSFA program, such as countries that have been involved in regional aviation safety oversight organizations to improve air transport and aviation safety. In addition, we considered other factors, such as recommendations from U.S. government officials and aviation experts whom we consulted about countries to visit based on their knowledge and experience working with African countries and their professional judgment. Using this information, we selected Cape Verde, Kenya, Senegal, and Tanzania for site visits. During the site visits, we conducted semistructured interviews with government officials, including those at the Ministry level, as well as civil aviation authority and airport authority officials; and representatives of regional aviation organizations, African airlines, industry groups, and aviation training schools. However, because these four countries were selected as part of a nonprobability sample, the findings from our interviews cannot be generalized to all African countries. 
A list of the organizations we contacted in each country follows:

Cape Verde:
Ministry of Infrastructures, Transport, and the Sea
Cape Verde Agency for Civil Aviation
Cape Verde Airport and Air Navigation Authority
Cabo Verde TACV Airlines
Halcyon Air (airline)
U.S. Embassy, Cape Verde

Kenya:
ALS Ltd. (airline)
East African School of Aviation
International Air Transport Association, Eastern Africa Office
International Civil Aviation Organization, Eastern and Southern African Office
Kenya Airports Authority
Kenya Airways (airline)
Kenya Civil Aviation Authority
Kenya Ministry of Transport
United Nations World Food Program
U.S. Embassy, Kenya

Senegal:
African Civil Aviation Commission
Agency for Air Navigation Safety in Africa and Madagascar
Air Senegal International (airline)
Federal Aviation Administration, Regional Office for Africa
High Authority Airport Leopold Sedar Senghor
International Air Transport Association, Central and West Africa Office
International Civil Aviation Organization, Western and Central Africa Office
National Civil Aviation Agency of Senegal
U.S. Embassy, Senegal

Tanzania:
East African Community (EAC)
EAC Civil Aviation Safety and Security Oversight Agency
Civil Aviation Training Centre
Tanzania Airports Authority
Tanzania Civil Aviation Authority
Tanzania Ministry of Infrastructure Development

We conducted this performance audit from April 2008 through June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following individuals made important contributions to this report: Nikki Clowers, Acting Director; Vashun Cole; Elizabeth Eisenstadt; Hannah Laufe; Nitin Rao; and Amy Rosewarne.
The African continent is important to U.S. economic, strategic, and foreign policy interests, and efforts have been made to improve commerce and connectivity to benefit the two regions. However, the continent has the highest aviation accident rate in the world, which has hindered progress. Recognizing this, the United States and the international aviation community have worked to improve aviation safety on the continent. This congressionally requested report discusses (1) challenges in improving aviation safety in Africa, (2) key U.S. efforts to improve aviation safety in Africa and the extent to which they address the identified challenges, and (3) international efforts to improve aviation safety in Africa. To address these issues, GAO synthesized literature and aviation safety data, interviewed federal officials, and visited four African countries. Improving aviation safety in Africa is an important goal for the United States and the international aviation community. However, achieving that goal presents several challenges. The major challenge is the relatively low priority that political leaders in many African countries have accorded aviation safety, in part because of more pressing concerns such as widespread poverty, national health care issues, and a lack of awareness about the potential benefits of an improved aviation system. This relatively low priority placed on improving safety is reflected in the other challenges that were frequently identified in the literature GAO reviewed and by the officials GAO interviewed. These challenges include weak regulatory systems, inadequate infrastructure, and a lack of technical expertise and training capacity. U.S. assistance to improve aviation safety in Africa has helped to address some challenges. For instance, the Department of Transportation's (DOT) Safe Skies for Africa (SSFA) program--created in 1998 as a presidential initiative--is the principal U.S. 
effort to improve aviation safety. One of the primary goals of the SSFA program is to increase the number of African countries that meet international aviation safety standards. Through memorandums of agreement, the State Department provides funding for the program and DOT manages it. DOT and the Federal Aviation Administration work to help African countries meet international aviation safety standards by providing technical assistance and training. However, funding for the program has been inconsistent since its inception, with funding levels ranging from a high of $8.5 million from the Department of State's fiscal year 2003 appropriation to zero from its appropriations in fiscal years 2008 and 2009. DOT officials stated that current budgetary and personnel limitations hamper their ability to effectively implement the program. For example, DOT currently limits SSFA activities to countries making tangible progress in improving safety, rather than directing activities to all participating countries. Given the potential benefits associated with improved aviation systems, two agencies that focus on economic development--the U.S. Trade and Development Agency and the Millennium Challenge Corporation--have also provided funding for aviation safety-related projects in Africa. However, coordination of U.S. efforts on the continent has not been consistent, because of differences in agency missions and program processes, resulting in potential duplication of effort and missed opportunities to leverage limited resources. Several international efforts have been implemented to assist and encourage African countries in improving their civil aviation systems. 
For example, in response to widespread concerns about the adequacy of aviation safety oversight on the continent, the International Civil Aviation Organization developed the Comprehensive Regional Implementation Plan for Aviation Safety in Africa to help African countries meet their international obligations for safety oversight. The World Bank also provides funding for African countries to address aviation needs and deficiencies.
The Ranger Training Brigade, under the command of the U.S. Training and Doctrine Command (TRADOC) and the U.S. Army Infantry Center at Fort Benning, Georgia, conducts training to develop student skills in infantry, airborne, air assault, platoon, mountaineering, and waterborne operations. The initial training phase, conducted by the 4th Ranger Training Battalion at Fort Benning, focuses on basic Ranger skills. The second phase consists of training by the 5th Ranger Training Battalion in the Georgia mountains, and the third phase is conducted by the 6th Ranger Training Battalion in the swamps of Florida. The course is conducted in difficult terrain under mental and physical stresses, including nutritional and sleep deprivation, that are intended to approach those found in combat. Ranger and other kinds of high-risk military training are dangerous by their very nature. Since 1952, 56 Ranger students have died, 7 of hypothermia. According to the Army’s accident investigation report, the four casualties of February 15, 1995, occurred during what was expected to be a relatively easy exercise involving paddling boats 8 to 10 kilometers down the Yellow River, identifying a preplanned drop-off site, and navigating on foot about 1 kilometer through a swamp to an ambush site. The instructors were largely unaware of rising water levels in the swamp due to heavy rains upriver in Alabama and allowed the students to move into unfamiliar areas. The platoons encountered delays in evacuation and medical assistance, and the students were intermittently immersed in cold, deep water for over 6 hours. The Army investigation recommended corrective actions to improve the systems the instructors use to predict and monitor swamp conditions, revise command and control procedures, and increase evacuation and medical support capabilities. 
The investigation also raised questions about how best to preserve lessons learned and corrective actions instituted, how to mitigate high turnover and shortages of officers, and who should fulfill the role of safety officer. Corrective actions to improve the safety of Ranger training were also prescribed by the Fiscal Year 1996 National Defense Authorization Act. First, the act required the Army to staff the Brigade at 90 percent of requirements. Such requirements are defined by the Army as the minimum number of personnel a unit needs to perform its mission effectively. This mandate is to be continued for 2 years. Second, the act required the Army to establish at each of the three Ranger training locations an organization known as a “safety cell”, comprising individuals with the continuity and experience in each geographical area needed to advise the officers in charge of the potential impact of weather and other conditions on training safety. Since the late 1980s, Army safety policy has required that commanders at all levels accept primary responsibility for integrating safety risk management in daily operations at the unit level. External oversight is provided by the Director of Army Safety, safety offices at major Army commands and installations, and the Army Inspector General. The Ranger Training Brigade has completed action on 38 of the 41 (93 percent) recommendations designed to improve training safety. The remaining three recommendations, involving increases in personnel and a Secretary of the Army-directed follow-up review of safety improvements, are expected to be completed by September 1997. Most of the recommendations were focused on improving (1) risk assessments of training conditions, (2) command and control of exercises, and (3) evacuation and medical support. All three training battalions have updated their overall assessments of training risks. 
For example, the 6th Battalion in Florida worked with the National Oceanic and Atmospheric Administration and the U.S. Geological Survey to develop detailed information on terrain, water, and tidal patterns to better understand their impact on training. The 6th Battalion also developed procedures to obtain river level and weather information from local emergency forecasting organizations and incorporated reviews of those risks in daily instructor briefings. Water depth markers and electronic weather sensors were installed along the Yellow River to measure water depth and temperature, air temperature, and humidity. In 1995, primitive water level markers, such as painted marks on a bridge and trees, were in place but provided no common scale to judge water depths along training routes. The Battalion also updated its water immersion safety guidelines, reducing the allowable student exposure time in waist-deep water from a range of 3 to 7 hours to a range of 2 to 3.5 hours when air or water temperature is in the 55 to 64 degree range. The Army’s November 1995 review of the existing guidelines found that soldiers who had just completed the course had a core body temperature about 2 degrees lower than that of other soldiers and would thus reach hypothermic conditions more quickly than previously believed. The 6th Battalion completed a comprehensive standard operating procedure revision in December 1995 that references all training-related guidance, identifies key leader responsibilities, and defines the decision-making process to be used when conditions deteriorate to higher risk levels. The revised procedure includes adjustments to training routes to avoid the most hazardous areas and the elimination of student discretion to miss planned landing sites and choose their own. Comprehensive procedures for the other training locations are also being prepared. 
According to the Army’s investigation, at the time of the accident, written procedures were outdated and were dispersed throughout a variety of instructions. As new cadre were assigned to the Battalion during the normal personnel rotation process, training procedures were changed both formally and informally. On the day of the accident, water at the planned drop-off site was too deep for the students to disembark from their boats. While one student platoon chose to abandon the swamp movement and suffered no casualties, the other two platoons were allowed to continue downriver and select an unplanned landing site. Moving to an unplanned landing site introduced many uncontrolled variables into the exercise, such as water depth, underwater obstacles, currents from underwater streams, and unfamiliar ground, the Army’s investigation report said. The platoons quickly encountered water waist to neck deep, but the instructors moved ahead, believing that the water would get shallower and the platoon would have a short move to higher ground. However, they continued to encounter deep water obstacles and within 1 hour students began to enter the early stages of hypothermia. The Brigade also developed a standardized, written instructor certification program covering all battalions. Instruction is provided at each battalion in areas such as training techniques and safety controls, emergency procedures and contingency plans, and combat lifesaving techniques. Emphasis is placed on a step-by-step progression from basic instructor up to principal instructor, and personnel must be certified at each level before serving in that capacity. According to Brigade officials, the program increased the time required for certification from about 1.5 to 4 months. The Brigade has generally completed a $1.1 million communications system upgrade to improve communications at both the 6th Battalion and the 5th Battalion in the Georgia mountains. 
The upgrade will connect virtually all cadre participating in Florida exercises directly with one another. Inadequate emergency communications slowed reaction times during the accident, as well as the ability of the cadre to know what was happening as conditions deteriorated. The Florida camp has now revised and rehearsed air, water, and ground evacuation plans, and mass casualty and joint evacuation procedures with local medical services. According to Army officials and the investigation report, at the time of the accident, the camp had not documented preplanned surface evacuation routes and extraction points or standard operating procedures for handling mass casualties, and surface evacuation was not considered until late in the accident. The camp has also obtained two new medevac helicopters, with more cargo capacity and speed than their predecessor, and aircraft fuel in a 2,000-gallon tanker is now available at the camp. Although the camp’s only medevac helicopter responded quickly to the accident, bad weather and the lack of a refueling truck at the Florida camp delayed its second evacuation run by over 2 hours. Full-time medics have also been assigned to the Brigade. Many of these medics are Ranger-qualified and routinely walk on patrol with the students. The Brigade was not previously authorized to have its own medics, and difficulties were encountered during the accident because the borrowed medics were not trained in some of the techniques used during the evacuations. Additional key corrective actions are discussed in the following sections. The complete status of all corrective actions is included in appendixes I through V. If the Army is to sustain the key corrective actions instituted after the accident in the future, it must institutionalize them. One important way to achieve this objective is to expand the focus of formal Army inspections to include testing or observing the key safety controls to determine whether they are working effectively. 
Neither formal Army Safety Program inspections, required to be conducted annually by installation safety offices, nor formal Army Infantry Center command inspections were conducted at the Florida camp during the 2 years prior to the Ranger student deaths. Even if such safety inspections had been conducted, it is not likely that they would have identified the erosion in safety controls because the inspections were focused on procedural issues such as whether accidents are reported. Army officials told us that less formal reviews of Ranger Training Brigade operations were conducted by a variety of Army organizations both before and after the accident. However, we found little or no documented record of safety control inspections. Although important, these informal inspections cannot substitute for documented safety reviews in sustaining safety improvements over time. According to Brigade and other Army officials, there are two basic keys to ensuring that safety controls operate as intended over time in an environment of rapid personnel turnover. First, controls must be clearly institutionalized in written operating procedures. Second, leaders must visit training sites frequently and observe operations to ensure that the safety controls are followed. At the time of the accident, many of the important lessons about safety controls that had been built up over the years by personnel assigned to the Florida training site were not in written form and had been lost over time. For example, according to Brigade officials, at least until 1991 student platoons were not allowed to miss planned drop sites and pick their own routes through the swamp. Similarly, the Army investigation following the 1977 hypothermia deaths of two students recommended that an on-site refueling capability for medevac helicopters be made available at the Florida camp. However, these and other key safety measures were either not institutionalized or simply atrophied over time. 
As shown in figure 1, a variety of organizations have exercised oversight over Ranger Training Brigade safety. Army officials told us that representatives from these organizations visited the Brigade a number of times, both before and after the accident. However, we found little or no documented record of safety control inspections during these visits. Although safety inspections are required at least once each year under the Army Safety Program, the Fort Benning Installation Safety Office conducted no inspections of training operations safety at the Brigade or its battalions between March 1993 and March 1996. Moreover, Fort Benning Safety Office officials acknowledge that even if the required inspections had been performed before the 1995 accident, it is not likely that they would have identified the erosion in safety controls. Formal inspections by the Safety Office under the Army Safety Program comprise checklists focused on procedural issues, such as whether accidents are reported and files of safety regulations and risk assessments are maintained. The Army’s process for identifying and controlling hazards in training operations is termed risk management. This program consists of a formal five-step process of (1) identifying training and other hazards, (2) assessing the magnitude of each risk, (3) making risk decisions and developing controls, (4) implementing the controls, and (5) supervising and enforcing the controls. Although the process requires units to identify safety controls as part of written training risk assessments, the controls considered most important by the unit are not identified. And, as illustrated in table 1, formal inspections by the installation Safety Office and the Brigade do not include requirements for testing or observation to determine whether the more important safety controls are working effectively. 
Examples of important safety controls that could be tested include instructors' adherence to the rules requiring them to walk planned swamp routes before each exercise and prohibiting deviations from planned swamp training routes. Safety office inspection responsibility covers a wide range of activities, including Occupational Safety and Health Act standards, ammunition and explosives operations and storage, and military training operations. According to Fort Benning installation Safety Office officials, they have not had the financial or personnel resources to inspect units as frequently as required. Since 1991, Safety Office personnel have been reduced from 13 to 8. In 1993, the Army Inspector General found that resource constraints were impairing installation safety offices' ability to fulfill their required safety responsibilities. The report concluded that when commanders were forced to make difficult resourcing decisions, safety officers often had difficulty competing for resources because of their orientation toward prevention. At that time, installation safety offices were staffed, on average, at 67 percent of requirements. Under the Army's command and staff inspection program, individual units are also responsible for conducting periodic inspections of their subordinate commands' operations. However, the Army Infantry Center did not conduct a formal command inspection of the Brigade for over 22 months prior to the accident. Similarly, the Brigade did not conduct a formal command inspection of the Florida camp's operations for over 2 years prior to the accident. Army inspection policy provides commanders flexibility to establish both the frequency and criteria for the inspections, with guidance from their major commands. Command inspections by the Infantry Center, and the Brigade in turn, cover a broad range of unit activities, including safety.
However, these formal inspections use the same safety item checklist as the installation Safety Office, which is focused on procedural matters and does not evaluate the operation of important training safety controls. The manager of Fort Benning’s installation Safety Office told us that, without clear identification of the most important training safety controls, his office does not have the expertise for in-depth assessments of compliance. However, not all safety controls have been documented by the battalions, and the most important controls have not been highlighted to provide the foundation needed for effective external inspections. For example, at one battalion the minimum evacuation resources needed to conduct training safely were not identified. Some of these requirements, such as having two ambulances available before certain dangerous exercises can be conducted, were included in medics’ personal documents—but not in battalion operating procedures. The 6th Ranger Training Battalion has improved its daily oversight of training safety by reinstating controls lost over the years, documenting many of them, and ensuring that they are followed. For example, instructors are now required to walk the planned training route through the swamp the morning of each exercise. A variety of safety controls are included throughout internal training risk assessments, individual training exercise procedures, and draft training operating procedures. These controls are enforced as part of the instructors’ daily supervision of training, and compliance is generally documented in daily operations logs, after-action reports, and other internal operations documents. The Brigade has inspected each training battalion and instituted a written policy of monthly visits by the Commander or other key leaders to ensure that safety controls are adequate and executed as intended. 
The Infantry Center Commander’s approval is now required before any reduction can be made in the safety controls in place at the Brigade and its battalions. The Secretary of the Army has also directed a follow-up review of safety procedures at the school, currently scheduled for September 1997. In addition, according to Army Inspector General officials, the Secretary has asked their office to conduct periodic reviews of the Brigade, as well as other high-risk training units. The Army plans to staff the Ranger Training Brigade at the required 90-percent level by February 1997 and submitted its plan for doing so to Congress in November 1996. To meet the law’s requirement, the Army placed the Brigade on the list of units excepted from normal Army staffing priorities and raised the unit’s priority to the highest level. The plan also requires quarterly reports to ensure that the required staffing levels are maintained. The Army’s investigation of the 1995 accident concluded that officer shortages and personnel turnover contributed to the accident by draining the experience and insight of the 6th Battalion and by limiting its ability to keep operating procedures current, supervise standards and policies, and allow officers to accompany and observe field training exercises. At the time of the accident, the Florida camp had 8 of the 11 authorized officers, but only 32 percent (8 of 25) of the required officers. In addition, 42 percent (44 of 106) of the instructors were assigned only during the last year before the accident. According to officials at the Army Infantry Center, they attempt to limit turnover to about 33 percent of unit personnel each year. As shown in table 2, enlisted personnel have been assigned to the Brigade at levels close to or above those mandated for years. Army policy gives staffing of enlisted personnel at the school priority over other units. 
However, until November 1996, staffing for Ranger Training Brigade officers did not receive Army priority and averaged about 36 percent of required levels from 1994 to 1996. As of October 1996, officer staffing had been increased to 88 percent of required levels. Department of Defense officials told us that raising the Brigade's staffing priority to the highest level would also significantly reduce the difficulties it faced in competing for personnel resources and sustaining high staffing levels. The Brigade Commander assigned at the time of the accident told us that the unit needed about 50 officers to function safely and effectively. Staffing the Brigade at the required 90-percent level would increase the number of Brigade officers to 58, or 20 more than at the time of the accident. Despite the low percentage of civilian staffing, the Brigade Commander believed that the current number of civilian staff was adequate. According to Army Infantry Center officials, the Center attempts to manage turnover of key Brigade personnel through quarterly reviews of upcoming officer changes. The Commanding General reviews all rotations at the rank of major and above. These reviews have been a continuous process over the years, but have received increased emphasis since the accident. During 1996, turnover of key leaders (commanders, executive officers, operations officers, and command sergeant majors) at each battalion was halted during the high-risk winter training months. However, the near-simultaneous replacement of the Brigade commander, executive officer, and command sergeant major during the spring and summer raised concerns at the Brigade. Officer shortages, such as those experienced by the Ranger Training Brigade, are not unique. Our June 1995 report on the drawdown of military personnel found that most Army positions were kept filled at high rates during the early 1990s.
However, certain specialties and ranks, particularly field grade officers (majors, lieutenant colonels, and colonels), were in short supply. According to Army officials, field grade officers, as well as branch-qualified captains, continue to be in short supply today. For example, in 1997 the Army is expected to operate with about 1,200 fewer branch-qualified captains, 3,200 fewer majors, and 1,000 fewer lieutenant colonels than the nearly 24,000 authorized in force structure documents. Army policy is that units that are first to fight are first to be resourced. However, available officers are limited first by Army-wide shortages, and then by legislative and other requirements such as giving priority to joint duty assignments, duty as advisers to reserve units, and other special considerations. In 1997, for example, the Army expects about 40,000 officers to be available for assignment. For fiscal year 1997, about 3,000 officers were authorized for joint duty positions, 1,600 for duty as advisers to the reserves, and another 1,900 for acquisition positions. Following satisfaction of these initial priorities, allocations flow down through major commands such as TRADOC, to subordinate commands like the Army Infantry Center, and on to individual units. Each level may add its own priorities, further limiting the number of officers available to lower priority units. For example, in 1996 TRADOC, a noncombatant command, received 73 percent of its authorization for branch-qualified captains through colonels, while the program providing advisers to reserve units received 104 percent. The Infantry Center then spread the officers allocated by TRADOC in accordance with Army-wide, TRADOC, and local priorities, including emphasis on all its high-risk training units. The officers remaining allowed a fill rate at the Ranger Training Brigade of only about 85 percent of the authorized level, or 42 percent of requirements.
Our analysis of allocations from 1991 to 1997 found the Brigade's experience to be similar to that of other units at the Center. According to Army officials, officers are being diverted from duty at such units as the National Training Center, Joint Readiness Training Center, and Battle Command Training Program to provide the mandated increase in staffing at the Brigade. Brigade officials believe the school needs about 624 enlisted personnel to operate safely and effectively. This number equates to about 112 percent of current requirements, or 68 enlisted soldiers more than assigned in October 1996. The extra personnel requested are based on studies of the Brigade conducted in 1994 and 1995. On the basis of these studies, the Brigade also called for a restructuring of staffing models for the unit. Brigade officials believe that current staffing models are outdated and do not accurately reflect the need for medical, boat safety, air operations, and other general support personnel. The Brigade has diverted enlisted instructors to fill these shortages. According to Brigade officials, enlisted staffing would be sufficient if it were not for the drain caused by the lack of support personnel. Army-wide, enlisted duty positions such as recruiters, service school instructors, the operations group at the National Training Center, and certain schools such as the Brigade, Joint Readiness Training Center, and Special Warfare Center receive priority and are staffed at about 98 to 105 percent of authorizations. TRADOC has been studying the issues raised by the Brigade in schools across the command since early 1996, and officials expect the studies to be completed by April 1997. According to TRADOC and Army Safety Center officials, recognition of the high rate of accidental deaths and injuries has increased the emphasis on risk management in the Army.
TRADOC currently is rewriting combat doctrine to recognize risk management and better integrate it into Army culture and decision-making. Currently, however, the Army has no formal criteria to identify units considered to be high risk and serve as a framework for allocating personnel or other resource priorities to them. Following the death of a Navy recruit during rescue swimmer training in 1988, TRADOC conducted a study of high-risk/high-stress training (High-Risk/High-Stress Training Special Study, April 1, 1989). The study developed a definition of high-risk/high-stress training and identified a list of 92 courses categorized as inherently dangerous, including the course conducted by the Ranger Training Brigade. Similarly, the deaths of the Ranger students in 1995 spurred an ongoing review of high-risk training by the Army Inspector General (Special Assessment of High Intensity Training). The first phase of this review also developed a definition and identified a group of high-risk units. However, according to TRADOC and Inspector General officials, neither definition has been formally adopted by the Army. We asked the Army Safety Center to provide information identifying units that have had the most training deaths and serious accidents over the past 10 years. However, according to Center officials, this information is not readily available because of difficulties in aggregating data at levels below installations, changes in reporting formats over time, and the sheer number of units involved. Statistics such as those involving safety can be difficult to interpret because of behavioral and other variables. For example, some units may have superior safety programs, but higher rates of accidents due to higher levels of inherent risk in their activities. Currently, members of the Ranger Training Brigade and battalion chains of command serve as the safety cell organization established pursuant to the 1996 act. 
Although safety now receives a higher level of attention, the safety cell organization, as established, represents for the most part no change from the oversight practices in place at the time of the accident. At the close of our review, however, the Infantry Center and Brigade were considering requesting additional personnel to serve as full-time safety cell members. The act required the Army to (1) establish an organizational entity known as a safety cell at each of the three phases of Ranger training, (2) ensure that safety cell personnel at each location have sufficient continuity and experience in that area to understand local conditions and their potential effect on training safety, and (3) assign sufficient numbers of safety cell personnel to serve as advisers to the officers in charge at each location in making daily "go" and "no-go" decisions on training. The act, however, did not establish specific criteria to guide decisions on the makeup of a safety cell. The Ranger Training Brigade established its safety organization consistent with past operations and existing Army policy. The battalion commanders were named as safety officers, with dual responsibility for training operations and training safety. The Brigade Commander is the overall safety officer. Operations sergeants at each battalion were designated as assistant safety officers. The Brigade Commander also named each battalion command sergeant major, operations sergeant, and the primary instructor overseeing each day's exercise as part-time safety cell members. The Brigade Commander chose these personnel because personnel in those positions generally have a relatively high degree of experience and knowledge of the area and are closely involved in supervising and monitoring operations. Even so, we noted that the personnel in these positions have limited continuity and experience in the local training areas.
For example, the Brigade and battalion commanders normally rotate to new units every 2 years and enlisted personnel every 3 to 3.5 years. At the time of our visits, the safety cell members had, on average, 2.5 and 4.4 years of experience at the 6th Battalion in Florida and the 5th Battalion in the Georgia mountains, respectively, including time from prior tours of duty. In comparison, a civilian training specialist at the Brigade has been employed continuously for 11 years. The Brigade now pays a higher level of attention to safety than in the past. For example, the 6th Battalion Commander walks the planned route for swamp training the day before each exercise. However, according to battalion officials, the personnel and duties of the safety cell members are not markedly different from those of safety officers in the past. The battalion commander, command sergeant major, principal instructor, and operations sergeant/officer were also responsible for overseeing safety in past years. The Brigade's approach makes no provision for expert advice from outside the chain of command. According to the Brigade Commander at the time of the accident, ideally, the safety cells should be staffed with civilians with long-term continuity. However, budget constraints made the hiring of civilians impractical. The specific duties and identity of the safety cell members are now defined in the draft Brigade operating procedures, unlike at the time of the accident. However, they have not been incorporated into written battalion procedures. We also noted that safety cell members in the Brigade are not required to undertake any special training for their duties. Safety cell members at the 6th Battalion were given the 4-hour Fort Benning assistant safety officer course following the 1995 accident. In contrast, safety officers in Army aviation units must take a 6-week safety course. Since the late 1980s, Army policy has placed responsibility for safety in each unit's chain of command.
The unit commander is the safety officer, fulfilling dual responsibilities for mission completion as well as safe operations. Unit commanders may appoint additional personnel at lower echelons to serve as part-time assistant safety officers in addition to their normal unit duties. According to the Director of Army Safety, this doctrine was adopted at a time when accident rates were at high levels and responsibility for safety was largely considered to be the province of agencies external to the units. The new doctrine sought to make commanders primarily responsible for safety and to use risk management techniques to help identify and reduce unnecessary risks. Late in our review, the Brigade’s approach to the safety cells was reviewed by the new Brigade Commander and the new Commander of the Army Infantry Center. Because of the need for long-term continuity and other considerations, the Infantry Center and Brigade are considering requesting that four civilian and seven military personnel be added to the Brigade’s authorized personnel to serve as safety cell members. The request would authorize one civilian and one military position at the Brigade and one civilian and two military positions at each battalion to handle the 24-hour training operations at the camps and the possibility of temporary absences of safety cell members. Our discussions with the Army Safety Center, TRADOC, and the Army Infantry Center identified a number of pros and cons with the use of civilians as full-time safety officers. A safety cell made up of civilians would provide a clear and highly visible professional advocate for safety with long-term continuity and experience at training locations. This approach also provides a measure of protection against commanders who may overzealously pursue mission accomplishment to the unnecessary detriment of safety. 
However, the use of civilians also includes some potential for undermining the unit chain of command and diluting commanders’ feelings of personal responsibility for safety. TRADOC and other Army officials also raised concerns about a lack of experience in military plans and operations that could limit the effectiveness of civilians working in military units. This potential could be addressed by hiring retired Ranger instructors or other appropriate military retirees. Cost is also a significant concern. According to TRADOC officials, authorizing additional personnel on the basis of safety considerations raises questions about the desirability and affordability of expanding this concept to other dangerous training activities. The Ranger Training Brigade estimated that each civilian would cost about $30,000-$39,000 annually. Authorizing TRADOC’s 1989 list of 92 high-risk schools with an average of 2 personnel each would require about 200 additional civilians. Alternatively, existing military personnel could be used in place of civilians. The advantages of this approach include the same highly visible professional advocate for safety without the increased cost. However, this approach would also represent an additional drain on the Army’s limited pool of officers, without providing increased long-term continuity. In addition, officers we spoke to were concerned, again, that such positions could undermine the unit chain of command as well as commanders’ feelings of personal responsibility for safety. The existing Army Aviation Safety Officer program could serve as a model for this option. Army policy authorizes formal positions for full-time safety officers at each Army aviation unit. Army regulations for the program specifically state that such officers will administer and monitor the overall safety program, including halting unsafe actions, but they have no command authority. There are currently some 900 aviation safety officers in the active Army and reserves. 
The number of additional military or civilian personnel needed for these options might be reduced by training some of the existing 1,086 civilian safety personnel (who now work in technical fields such as occupational health and safety, engineering, and health) as unit operations safety personnel. The Army Safety Center is currently restructuring its Total Safety Professional Career Management Program to provide such training. We recommend that the Secretary of the Army (1) direct that the Ranger Training Brigade identify critical training safety controls at each training location; (2) ensure that TRADOC, the Army Infantry Center, the Fort Benning safety office, and the Ranger Training Brigade conduct periodic inspections to determine compliance with the identified safety controls; and (3) direct that inspections of critical safety controls be made periodically by organizations outside the chain of command, such as the Army Inspector General. We are deferring any recommendations on the issues of personnel staffing levels and the appropriate organization of safety cells until we have completed our final evaluation. In written comments on a draft of this report (see app. VI), the Department of Defense said that it generally agreed with our findings and recommendations and has completed or has in progress most of the planned corrective measures. The Department said that the Brigade has identified the critical safety controls and the Secretary of the Army has directed that the chain of command and the Army Inspector General conduct periodic inspections of the Brigade to ensure that the safety controls and corrective actions are effective. We believe that such periodic inspections, together with highly visible support for safety from the Army's leadership, will be critical to institutionalizing effective safety controls at the Brigade. The Department also noted that its regulations require leaders at all other potentially hazardous training units to integrate risk management safety principles into their training.
Nonetheless, difficult long-term policy questions remain regarding the appropriate priority for staffing and other resources to be provided to the Department’s other high-risk training units, as well as the need for safety organizations at such units. To determine the status and implementation of corrective actions taken to improve Ranger training safety, we received briefings from Brigade officials, reviewed reports covering the Army’s investigation of the Ranger students’ deaths, observed each Ranger battalion’s training facilities, interviewed Army investigating officers and Brigade and battalion commanders and instructors, reviewed training safety controls and inspection procedures, and observed the site where the deaths occurred. At our request, the Army Safety Center also conducted a review of the Brigade’s risk management program. We did not review whether the Army’s investigation of the accident was conducted in accordance with regulations. We assessed the ability of safety inspection and oversight procedures to ensure that corrective actions will be sustained in the future through review of Army and Infantry Center regulations and inspection records, and interviewed officials at the Army Inspector General’s Office, Army Safety Center, U.S. Forces Command, Army Special Operations Command, TRADOC, the Fort Benning Safety Office, and the Ranger Training Brigade. To assess progress made toward increasing personnel staffing to legislatively mandated levels, we reviewed and analyzed personnel and policy documents and data to determine staffing priorities, changes in requirements, assignments, student loads, and changes in staffing at the Brigade and other Army Infantry Center units during fiscal years 1994-97. We assessed the progress made toward establishing training safety cells by reviewing records and interviewing Brigade and battalion officials regarding the duties, qualifications, and experience of safety cell members. 
We also discussed safety cell organizations with the Director of Army Safety, Army Manpower and Reserve Affairs, TRADOC, and Army Infantry Center officials. We conducted our review at Department of Army headquarters, TRADOC, Army Infantry Center, Ranger Training Brigade, the Ranger battalions, and the Army Safety Center. Our review was conducted from April through October 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen, Senate and House Committees on Appropriations, Senate Committee on Armed Services, and House Committee on National Security and to the Secretaries of Defense and the Army. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix VII. If you or your staff have questions about this report, please call me on (202) 512-5140.

Status of corrective actions:

Completed: Weather, river, and swamp information obtained from local and federal agencies is integrated in training decision-making. Also, three remote weather sensors on the Yellow River provide real-time water depth and temperatures.

Risk management assessments have been completed for all training activities. Daily risk assessments capture information on changing weather, water level, temperature, student conditions, and readiness of support systems. On the basis of the Army's November 1995 reevaluation of the original immersion guidelines, the Ranger Training Brigade lowered the guideline's water exposure times.

5. Standardize the in-walkers briefing for instructors. Completed: Written standardized briefing formats are used for daily briefings of instructors at all three Ranger training battalions.

6. Provide commanders critical requirements analysis of class/platoon strengths and weaknesses as each class moves to a new training phase. Medical and other information on selected students and student platoons is forwarded to each training phase's incoming commander.
The Army Corps of Engineers erected 32 water depth markers along the Yellow River and training lanes in the swamps.

8. Examine the effectiveness of the current buddy system. System reviewed and remains a first line of safety defense. When assigned buddy not available, teams will move to three-person system.

9. Reinstate the system of assigning tactical officers to each class. The 6th Battalion now assigns a captain or senior noncommissioned officer and a staff sergeant to each class with responsibility for class cohesion, student advocacy, feedback to battalion commanders, and other issues.

10. Conduct refresher training on the use of the immersion guide. The water immersion guide is briefed at the beginning of each training day and updated as conditions change.

Completed: Weak swimmers are identified during the combat water survival test and marked on their headgear and equipment.

12. Obtain physiological monitoring software. Experimental monitoring software was provided to Ranger medical clinics. Due to implementation problems, the Brigade has discontinued its use.

13. Conduct nutrition and immunization study. The Brigade Commander has increased meals provided Ranger students from 1-1/2 to 2 per day based on Army nutritional studies.

14. Develop personnel status monitoring system technology for possible use in Florida. Experimental monitors tested in June 1996, but no procurement made.

Procedure for Florida training phase is completed. Rewrites for Brigade and remaining phases are in process.

The 6th Battalion identified specific lanes from the Yellow River through the swamps. The lanes were narrowed and adjusted to avoid hazardous areas. Students are no longer allowed to deviate from designated boat drop sites and training lanes.

3. Develop a training and certification program for instructors. The Ranger Training Brigade developed a standardized instructor certification program.
The program focuses on the development of instructor competency, experience, and application of procedures, safety, and risk management.

4. Upgrade tactical operations center ability to monitor operations. Communications and computer upgrades installed at Florida and mountain phases. Installation of tower and microwave antennae scheduled for completion in Florida by January 1, 1997.

The 6th Battalion acquired whisper mikes for use with Motorola radios during training exercises.

6. Ensure that all patrols are equipped, trained, and prepared to conduct stream crossing operations. 6th Battalion students must demonstrate their ability to properly construct a one-rope bridge in 8 minutes prior to entering the swamp.

7. Develop a decision paper on the use of precision lightweight global-position receivers by instructors during emergencies. A Ranger Training Brigade decision paper concluded that global-position receivers will be used by medical evacuation helicopters and Ranger instructors. The Brigade acquired 66 receivers to track the movement of students.

8. Develop standard packing lists for instructors, medics, and aeromedevac crews. Equipment and supply packing lists for instructors, medics, and aeromedevac crews have been updated.

9. Review the winter rucksack packing list. The winter packing list has been reviewed and minor changes made. Instructors inspect student rucksacks to ensure they have been tailored, weight distributed, and waterproofed.

10. Add a waterproofing class to program of instruction. A waterproofing lesson has been added to the Ranger course program of instruction.

Air, water surface, and ground evacuation procedures have been planned and rehearsed. Joint medical evacuation procedures have been established among the Ranger training battalions and local medical services.

Mass casualty procedures have been included in each Ranger Training battalion's standard operating procedure.

3. Initiate a project to build a road into the swamp area in Florida. The 6th Ranger Training Battalion Commander concluded that the road is not critical for safe training and that, following an environmental assessment, costly construction and environmental mitigation is not justified.

4. Determine fuel requirement for medevac helicopters at Florida training site. A 2,000-gallon tanker is on hand at the Florida camp, and two tankers with about 10,000 gallons fuel capacity are on hand at the mountain camp.

5. Implement plan to revert to full-time Ranger medic manning. All three Ranger Training Battalions now have full-time, Ranger-qualified medics.

The Florida Ranger camp acquired 21 CO2 inflatable rafts, which are used by each Ranger instructor team. Six hypothermia bags were issued to each of the Ranger training battalions.

8. Develop a system to check packing list for medevac helicopters. All medevac emergency equipment is inspected for accountability and serviceability upon arrival at the training battalions.

Fort Benning Medical Command has developed training guidelines for medics and Physician's Assistants in each camp.

10. Ensure compliance with previous cold weather procedures. Revised standard operating procedures outline cold and hot weather training procedures.

The 1977 and 1995 accident summaries have been integrated into the instructor certification program and are required reading for new members of the chain of command. A VCR tape summarizing the 1977 and 1995 accidents was produced and is in use in the instructor certification program. A monument to the students who died was erected at the site of the accident.

2. Continue formal command inspection program. All battalions have been inspected, and a senior supervision plan has been instituted that consists of frequent visits to each training site by the Brigade chain of command.

3. Review complete waterborne procedures.
Secretary of the Army directed a complete review of safety procedures and improvements now scheduled for completion in September 1997. The Army plans to staff the Brigade at the 90-percent level by early February 1997. 2. Obtain a brigade medical adviser, communications officer, and air operations officer. Increases currently under review in TRADOC. However, additional officers provided under the 1996 legislation may be used for several of these positions. 3. Phase rotation of key personnel to limit turbulence. Army Infantry Center conducts quarterly reviews of all officer rotations to help limit turnover. 4. Establish safety cells at each of the three training school locations to advise the officers in charge, and assist in daily go/no go decisions on training. Brigade personnel named as safety cell members and Infantry Center is considering requesting additional civilian and military personnel. Required by the Fiscal Year 1996 National Defense Authorization Act. John W. Nelson Kevin C. Handley The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO reviewed the Army's investigation of the February 1995 accident in which four Army Ranger Training Brigade students died while training in a Florida swamp, focusing on the: (1) status of all of the Army's corrective actions; (2) adequacy of Army oversight to ensure that the corrective actions instituted after the accident will be sustained in the future; (3) Army's progress in implementing the Fiscal Year 1996 National Defense Authorization Act's mandate to increase Brigade staffing to 90 percent of requirements; and (4) Army's progress in establishing safety cell organizations at the Brigade. GAO found that: (1) the Ranger Training Brigade has completed most of the corrective actions recommended by the Army; (2) the Brigade has improved safety by developing systems to better monitor and predict swamp conditions, and improved command and control by revising its procedures to move training exercises outside high-risk areas of the swamp, eliminate discretion to deviate from planned exercise locations, and incorporate the latest guidance on training safety; (3) evacuation procedures have been revised and rehearsed, new medevac helicopters and refueling capacity have been obtained, and medics have been assigned directly to the Brigade; (4) if the Army is to sustain the key corrective actions taken after the accident in the future, the actions must become institutionalized; (5) if the important corrective actions are to become institutionalized, formal Army inspections will have to be expanded to include testing or observing to determine whether they are working effectively; (6) the Army plans to fully staff the Ranger Training Brigade at the mandated 90-percent level by February 1997; (7) although the Army raised the Brigade's staffing priority subsequent to GAO's field work, high-risk training units generally are not recognized in Army personnel staffing priorities; (8) the Brigade's long-term ability to sustain the required 
number of officers may be hindered by competition with the Army priorities given to units that are first to fight and with other important noncombatant units; (9) currently, members of the Ranger Training Brigade and battalion chains of command serve as the safety cell organization established pursuant to the act; (10) the act did not establish specific criteria to guide decisions on the makeup of a safety cell, and the option chosen by the Army represents little change from the safety oversight practice that was in place at the time of the accident; (11) personnel in these positions have limited experience in the local training areas due to the Army's policy of rotating them to new units every 2 or 3 years; and (12) the Army Infantry Center is considering requesting authorization for additional civilian and military positions to serve as full-time safety cell members.
In October 1990, the Federal Accounting Standards Advisory Board (FASAB) was established by the Secretary of the Treasury, the Director of the Office of Management and Budget (OMB), and the Comptroller General of the United States to consider and recommend accounting standards to address the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information. Using a due process and consensus building approach, the nine-member Board, which has since its formation included a member of DOD, recommends accounting standards for the federal government. Once FASAB recommends accounting standards, the Secretary of the Treasury, the Director of OMB, and the Comptroller General decide whether to adopt the recommended standards. If they are adopted, the standards are published as Statements of Federal Financial Accounting Standards (SFFAS) by OMB and GAO. In addition, the Federal Financial Management Improvement Act of 1996 requires federal agencies to implement and maintain financial management systems that will permit the preparation of financial statements that substantially comply with applicable federal accounting standards. Also, the Federal Managers’ Financial Integrity Act of 1982 requires agency heads to evaluate and report annually whether their financial management systems conform to federal accounting standards. Issued on November 30, 1995, and effective for the fiscal years beginning after September 30, 1997, SFFAS No. 6, Accounting for Property, Plant, and Equipment, requires the disclosure of deferred maintenance in agencies’ financial statements. SFFAS No. 6 defines deferred maintenance as “maintenance that was not performed when it should have been or was scheduled to be and which, therefore, is put off or delayed for a future period.” It includes preventive maintenance and normal repairs, but excludes modifications or upgrades that are intended to expand the capacity of an asset. 
The deferred maintenance standard applies to all property, plant, and equipment, including mission assets, which will be disclosed on the supplementary stewardship report. For the Department of Defense (DOD), mission assets, such as submarines, ships, aircraft, and combat vehicles, are a major category of property, plant, and equipment. In fiscal year 1996, DOD reported over $590 billion in this asset category, of which over $297 billion belonged to the Navy, including 338 active battle force ships such as aircraft carriers, submarines, surface combatants, amphibious ships, combat logistics ships, and support/mine warfare ships. The Navy spent a little over $2 billion on ship depot maintenance for its active fleet in fiscal year 1996. SFFAS No. 6 recognizes that there are many variables in estimating deferred maintenance amounts. For example, the standard acknowledges that determining the condition of the asset is a management function because different conditions might be considered acceptable by different entities or for different items of property, plant, and equipment held by the same entity. Amounts disclosed for deferred maintenance may be measured using condition assessment surveys or life-cycle cost forecasts. Therefore, SFFAS No. 6 provides flexibility for agencies' management to (1) determine the level of service and condition of the asset that are acceptable, (2) disclose deferred maintenance by major classes of assets, and (3) establish methods to estimate and disclose any material amounts of deferred maintenance. SFFAS No. 6 also has an optional disclosure for distinguishing between critical and noncritical amounts of maintenance needed to return each major class of asset to its acceptable operating condition. If management elects to disclose critical and noncritical amounts, the disclosure must include management's definition of these categories.
The objective of our work was to identify information on specific issues to be considered in developing implementing guidance for disclosing deferred maintenance on ships. We reviewed financial and operational regulations and documentation related to managing and reporting on the ship maintenance process. The documentation we reviewed included fleet spreadsheets used to track depot-level maintenance requirements and execution by specific ship. We also reviewed Navy Comptroller budget documents. We discussed this information with officials at DOD and Navy headquarters and at various organizational levels within the Department of the Navy. While the deferred maintenance standard applies to all levels of maintenance, this report addresses ship depot-level maintenance because it is the most complicated and expensive. (See the following section for a discussion of the Navy ship maintenance process, including the levels of maintenance.) The amounts for deferred depot-level maintenance presented in this report were developed using information provided by Navy managers. We did not independently verify the accuracy and completeness of the data. We conducted our review from July 1996 through November 1997 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Secretary of Defense or his designee. The Under Secretary of Defense (Comptroller) provided us with written comments, which are discussed in the "Agency Comments" section and are reprinted in appendix I. The Navy accomplishes maintenance on its ships (including submarines) at three levels: organizational, intermediate, and depot. Organizational-level maintenance includes all maintenance actions that can be accomplished by a ship's crew. For example, the ship's crew may replace or fix a cracked gasket or leaks around a hatch or doorway aboard ship.
Intermediate-level maintenance is accomplished by Navy Intermediate Maintenance Activities (IMAs) for work that is beyond the capability or capacity of a ship's crew. For example, an IMA performs calibration or testing of selected ship systems that the ship's crew may not have the equipment or capability to perform. Depot-level maintenance includes all maintenance actions that require skills or facilities beyond those of the organizational and intermediate levels. As such, depot-level maintenance is performed by shipyards with extensive shop facilities, specialized equipment, and highly skilled personnel to accomplish major repairs, overhauls, and modifications. The Navy determines what depot-level maintenance is needed for its ships through a requirements process that builds from broad maintenance concepts outlined in Navy policy and culminates with the execution of an approved schedule. Three types of maintenance requirements are executed: (1) time-directed requirements, (2) condition-based requirements, and (3) modernization requirements. Time-directed requirements are derived from technical directives and include those that are periodic in nature and are based on elapsed time or recurrent operations. Condition-based requirements are based on the documented physical condition of the ship as found by the ship's crew or an independent inspection team. Lastly, modernization requirements include ship alterations, field changes, and service changes that either add new capability or improve reliability and maintainability of existing systems through design improvements or replacements. Initial depot-level maintenance requirements are determined and a proposed maintenance schedule is developed and approved based on overall ship maintenance policy, specific maintenance tasks, operational requirements, force structure needs, and fielding schedules.
These approved maintenance schedules undergo numerous changes as new requirements are identified, others are completed or canceled, operational priorities change, and budgets fluctuate. Thus, these factors result in many deviations from the plan once actual maintenance is executed and complicate the measurement of exactly what maintenance should be considered deferred. Less flexibility in scheduling is permissible with submarines than with surface ships because prescribed maintenance must be done on submarines periodically for them to be certified to dive. If the specified maintenance is not done by the time required, the submarine is not to be operated until the maintenance is accomplished. Neither DOD nor the Navy has developed implementing guidance for determining and disclosing deferred maintenance on financial statements. Navy officials said that they are reluctant to develop their procedures until DOD issues its guidance. As we reported to DOD in our September 30, 1997, letter, DOD guidance is important to ensure consistency among the military services and to facilitate the preparation of DOD-wide financial statements. We also stated that the guidance needs to be available as close to the beginning of fiscal year 1998 as possible so that the military services have time to develop implementing procedures and accumulate the necessary data to ensure consistent DOD-wide implementation for fiscal year 1998. We found that operations and comptroller officials from both DOD and the Navy have varying opinions concerning the nature of unperformed maintenance that should be reported as "deferred." The differences in opinion arise from various interpretations of how to apply the standard to the maintenance process. The views on how to apply the deferred maintenance standard to the ship maintenance process ranged from including only unfunded ship overhauls to estimating the cost of repairing all problems identified in each ship's maintenance log.
Brief descriptions of various views of how SFFAS No. 6 could be applied to disclosing deferred depot-level maintenance for ships follow. The descriptions explain what would be considered deferred maintenance for ships and the rationale for each option. In its budget justification documents, the Navy reports deferred depot-level maintenance for unfunded ship overhauls. The Navy Comptroller officials’ rationale for excluding other types of depot-level maintenance not done is that overhauls represent the Navy’s top priority for accomplishing ship depot-level maintenance and, therefore, should be highlighted for the Congress when a lack of funds prevents them from occurring when needed. While overhauls consumed most of the depot-level maintenance funding in past years, the Navy is performing fewer overhauls as it moves toward a more incremental approach of doing smaller amounts of depot-level work more frequently. Consequently, overhauls now represent a relatively small part of the Navy’s ship depot-level maintenance budget. In fiscal year 1996, over 80 percent of the Navy’s ship depot-level maintenance budget was spent on work other than ship overhauls. Specifically, the Navy reported spending almost $1.7 billion for other ship depot-level maintenance and $367.8 million for ship overhauls. The Navy officials’ rationale for disclosing only unfunded overhauls as deferred depot-level maintenance in financial statements is that the data are readily available and are consistent with what is being reported in budget justification documents. However, this view omits all other types of scheduled depot-level maintenance not done and clearly does not meet the intent of SFFAS No. 6. FASAB addressed the deferred maintenance issue because of widespread concern over the deteriorating condition of government-owned equipment. 
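The fiscal year 1996 budget split cited above can be verified with a quick calculation; the dollar figures are the Navy-reported amounts from this report, and the variable names are ours:

```python
# Fiscal year 1996 ship depot-level maintenance spending, in millions of dollars,
# as reported by the Navy: almost $1.7 billion for work other than overhauls
# and $367.8 million for ship overhauls.
other_depot_work = 1700.0
overhauls = 367.8

total = other_depot_work + overhauls
non_overhaul_share = other_depot_work / total

# The non-overhaul share works out to roughly 82 percent, consistent with the
# report's statement that over 80 percent was spent on work other than overhauls.
print(f"Total: ${total:,.1f} million; non-overhaul share: {non_overhaul_share:.1%}")
```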
FASAB reported that the consequences of underfunding maintenance (increased safety hazards, poor service to the public, higher costs in the future, and inefficient operations) are often not immediately reported and that the cost of the deferred maintenance is important to users of financial statements and key decisionmakers. Using this option, the amount disclosed for fiscal year 1996 (the most recent fiscal year data available) would have been $0. Both Atlantic and Pacific fleet officials monitor deferred ship depot-level maintenance and report these backlog amounts to the Navy Comptroller although these amounts are not reported in the Navy’s budget justification documents. These fleet backlog reports quantify the ship depot-level maintenance work that should have been performed by the end of the fiscal year according to the Chief of Naval Operations (CNO) but was not done and was not rescheduled. The rationale for using the amounts on the fleet backlog reports for financial statement reporting is that the data are readily available, and it is a more realistic representation of deferred maintenance than just the unfunded ship overhauls. Using this option, the amount disclosed in the Navy’s financial statements for fiscal year 1996 would have been about $117.5 million. However, the fleet backlog reports do not include any depot-level work rescheduled to future years. Under one approach, the estimated value of work rescheduled beyond the ship’s approved maintenance schedule time frames, as established by the CNO, would also be disclosed. The rationale for adding the estimated value of work rescheduled beyond these time frames is that the CNO Notice provides the Navy’s established requirements for accomplishing ship depot-level maintenance; therefore, any work rescheduled beyond the specified time frames should be considered deferred. 
For example, maintenance work on two Pacific Fleet destroyers was rescheduled beyond the CNO-specified time frames of June and July 1996, respectively, to October 1996. On the other hand, maintenance on two Atlantic Fleet submarines was rescheduled from the end of one fiscal year to early the next fiscal year but still within CNO-specified time frames. Under this option, the estimated value of the maintenance work rescheduled to the next fiscal year on the destroyers would be recognized as deferred maintenance at the end of the fiscal year. However, the value of the rescheduled work on the submarines would not be recognized because it was still to be performed within the CNO-specified time frames. Under this option, using Navy data, the amount disclosed for fiscal year 1996 would have been about $15.1 million greater or $132.6 million. Another option discussed with Navy officials would be to modify the fleet backlog reports to include the estimated value of any scheduled maintenance work not accomplished during the fiscal year, regardless of the CNO-specified time frames. Under this approach, the estimated value of work on the two submarines discussed above would also be recognized as deferred maintenance. The rationale for this option is that any scheduled work moved to the next fiscal year should be disclosed as deferred maintenance at the end of the fiscal year when the scheduled maintenance was to be performed. Under this option, using Navy data, the amount disclosed for fiscal year 1996 would have been about $188.5 million. Another view discussed with Navy officials for disclosing deferred ship maintenance is to report the costs to perform the needed work on all items listed on each ship’s maintenance log at the end of the fiscal year. The rationale for using this source is that the log may more completely capture all levels of maintenance needed on each ship. 
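The destroyer and submarine examples above, and the option amounts they produce, can be sketched as a small decision rule. The dollar figures are the Navy-provided estimates cited in this report; the specific calendar dates, function names, and the submarine's CNO deadline are illustrative assumptions, not Navy data:

```python
from datetime import date

FY_END = date(1996, 9, 30)  # end of fiscal year 1996

def deferred_beyond_cno(cno_deadline, performed):
    """Deferred only if the work slipped past the CNO-specified time frame."""
    return performed > cno_deadline

def deferred_in_fiscal_year(performed):
    """Deferred if the work was not accomplished by the end of the fiscal year."""
    return performed > FY_END

# Pacific Fleet destroyers: CNO time frame of July 1996, work slipped to
# October 1996 (months are from the report; exact days are hypothetical).
destroyer_cno, destroyer_done = date(1996, 7, 31), date(1996, 10, 15)
# Atlantic Fleet submarines: slipped past the fiscal year end but still within
# the CNO time frame (the CNO deadline shown here is a hypothetical date).
submarine_cno, submarine_done = date(1996, 12, 31), date(1996, 10, 15)

assert deferred_beyond_cno(destroyer_cno, destroyer_done)      # destroyers counted
assert not deferred_beyond_cno(submarine_cno, submarine_done)  # submarines excluded
assert deferred_in_fiscal_year(submarine_done)                 # but counted under the broader option

# Resulting fiscal year 1996 disclosure amounts, in millions of dollars:
amounts = {
    "unfunded overhauls only": 0.0,
    "fleet backlog reports": 117.5,
    "backlog plus work beyond CNO time frames": 117.5 + 15.1,  # about 132.6
    "all scheduled work not accomplished": 188.5,
}
```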
Depending on the size and condition of the ship, the maintenance log could contain only a few items or many thousands. However, the Navy does not routinely determine the cost of items that appear on a ship’s maintenance log. Further, although these logs are supposed to be up-to-date and routinely checked for accuracy and completeness, Navy fleet officials stated that estimating the cost to repair the items on each ship’s log would be very time-consuming and costly because maintenance tasks that are accomplished are not routinely deleted from the log, and the time estimates contained in the logs may be inaccurate. Nevertheless, officials said that using the estimated value of all items listed on each ship’s maintenance log would exceed any of the above estimates due to the sheer volume of items included. As discussed in our earlier report, implementing guidance is needed so that all military services consistently apply the deferred maintenance standard. As a result of the variations in the way the deferred maintenance standard can be applied to ships (including submarines), DOD and the Navy need to consider a number of issues, including the following. Acceptable asset condition - SFFAS No. 6 allows agencies to decide what “acceptable condition” means and what maintenance needs to be done to keep assets in that condition. Determining acceptable operating condition could be in terms of whether (1) the ship can perform all or only part of its mission, (2) the most important components of the ship function as intended, (3) the ship meets specified readiness indicators, or (4) the ship and/or its major components meets some other relevant criteria determined by management. The determination may also be influenced by whether the ship is currently deployed or scheduled to be deployed in the near future. An example of the acceptable operating condition issue is as follows. 
Each ship is composed of many systems, and those systems critical to the ship's ability to meet its operational commitments and achieve high readiness scores (such as the weapons systems) rarely have maintenance deferred. On the other hand, maintenance on the ship's distributive systems (such as the ship's pipes and hulls) is more likely to be deferred since it has little direct impact on the ship's readiness indicators. Therefore, the question is whether needed maintenance not performed on the distributive systems should be disclosed as deferred maintenance, since it has little impact on the ship's readiness scores but could affect the ship's long-term viability. Timing of deferred maintenance recognition - Each ship class has standard operating intervals between visits to the depot; however, changes to this plan may take place as the scheduled maintenance approaches (except for certain maintenance requirements for the submarines and aircraft carriers, which have mandated maintenance intervals to meet safety requirements) due to operational considerations, funds available, and condition-based inspections. To ensure that meaningful, consistent data are provided, DOD and the military services will need to decide which one of the many possible alternatives will be used to determine when maintenance needed but not performed is considered deferred. The timing issue involves what needed maintenance should be recognized as deferred as of the end of the fiscal year: the date specified in the CNO Notice, the date the maintenance needs were identified, or the date the maintenance was scheduled. Applicability of the reporting requirements - DOD and the military services will need to determine whether deferred maintenance should be reported for assets that are not needed for current requirements.
For example, should maintenance deferred on ships being considered for decommissioning or not scheduled for deployment for a significant period be recognized on DOD’s and the Navy’s financial statements? Reporting the maintenance not done as deferred would more accurately reflect how much it would cost to have all reported assets in an acceptable operating condition; however, it would also be reporting maintenance which is not really needed at this time and which may never be needed or done. Critical and noncritical deferred maintenance - If critical versus noncritical deferred maintenance is to be disclosed, such a disclosure must be consistent among the services, and critical must be defined. For example, different kinds of maintenance needed—from preventive to urgent for continued operation—may be used to differentiate between critical and noncritical. Also, if DOD chooses to disclose deferred maintenance for all reported assets, including maintenance on assets not needed for current requirements, identifying the types of assets included in the deferred maintenance disclosure may be another way to differentiate between critical and noncritical. Although our work focused on the depot level, the deferred maintenance standard applies to all maintenance that should have been done, regardless of where the maintenance should have taken place. Therefore, in addressing the issues in this report and others regarding deferred maintenance, all levels of maintenance must be considered. In comments on a draft of this report (see appendix I), the Department of Defense agreed that it must consider the key issues identified in this report as it implements deferred maintenance reporting requirements. 
We are sending copies of this letter to the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations, the House Committee on Appropriations, the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight. We are also sending copies to the Director of the Office of Management and Budget, the Secretary of Defense, the Assistant Secretaries for Financial Management of the Air Force and Army, and the Acting Director of the Defense Finance and Accounting Service. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you or your staffs have any questions concerning this letter. Cleggett Funkhouser, Merle Courtney, Chris Rice, Rebecca Beale, and John Wren were major contributors to this report.
GAO reviewed the Department of Defense's (DOD) implementation of the requirement to report information related to deferred maintenance on mission assets, focusing on Navy ships, including submarines. GAO noted that: (1) the development of DOD and Navy policy and implementing guidance for deferred maintenance is essential to ensure consistent reporting among the military services and to facilitate the preparation of accurate DOD-wide financial statements, particularly since the new accounting standard provides extensive management flexibility in implementing the disclosure requirement; (2) Navy officials stated that they were reluctant to develop procedures to implement the required accounting standard until DOD issues overall policy guidance; (3) DOD and Navy officials have expressed numerous views as to how to apply the deferred maintenance standard to ships; (4) this makes it even more important for clear guidance to be developed; (5) the opinions ranged from including only unfunded ship overhauls to including cost estimates of repairing all problems identified in each ship's maintenance log; (6) in formulating the DOD and Navy guidance, key issues need to be resolved to allow for meaningful and consistent reporting within the Navy and from year to year, including: (a) what maintenance is required to keep the ships in an acceptable operating condition; and (b) when to recognize as deferred needed maintenance which has not been done on a ship; and (7) in addition, DOD needs to address in its implementing guidance whether the: (a) deferred maintenance standard should be applied to all or only certain groups of assets, such as ships being deactivated in the near future; and (b) reported deferred maintenance should differentiate between critical and noncritical and, if so, what constitutes critical.
Futures contracts first appeared in the United States in the mid-1800s and were based on grains. They provided producers (farmers) and commodity users with a means of reducing the risk of financial loss arising from adverse fluctuations in commodity prices, called hedging. They also provided a more efficient and transparent means of determining commodity prices based on supply and demand factors, called price discovery. Because of concerns about price manipulation and other trading abuses in the futures market, including the operation of bucket shops, Congress passed the CEA in 1936 to amend the Grain Futures Act of 1922. Like its predecessor, the CEA required that futures trading in specified commodities—such as corn, rye, and wheat—be conducted only on federally designated markets. To receive such a designation, an exchange had to meet certain self-regulatory requirements that included providing for the prevention of manipulation and fraud. Congress periodically amended the act to bring futures trading in additional commodities under the CEA. For example, Congress amended the act in 1968 and brought futures trading in livestock, livestock products, and frozen concentrated orange juice under federal regulation. By the early 1970s, futures trading had expanded to include nonagricultural commodities, such as precious metals and foreign currencies. Although contracts on these commodities were traded on futures exchanges, they were not covered by the act and, thus, were not federally regulated. In 1974, Congress amended the CEA to ensure that all futures contracts—whatever their underlying commodity—would be federally regulated. It accomplished this goal by expanding the list of commodities covered by the act to include virtually anything, tangible or intangible. As a result, the class of instruments that could be defined as futures and subject to the act’s exchange-trading requirement was broadened. 
Any contract that was legally categorized as a futures contract could be traded only on federally designated exchanges, making the off-exchange trading of futures illegal. The 1974 amendments to the CEA also created CFTC to administer the CEA. The CEA gives CFTC exclusive jurisdiction over futures and establishes a comprehensive regulatory structure designed to protect the futures market and its participants. Historically, CFTC's regulatory structure was designed to assure that all futures contracts were traded on self-regulated exchanges and through regulated intermediaries, which were subject to capital, examination, recordkeeping, registration, reporting, and customer protection requirements. The CEA's exchange-trading requirement was intended to foster both market integrity and customer protection by creating a centralized market that could be protected against excessive speculation, price manipulation, and other abusive trade practices. According to the act, regulation of the futures market was necessary to protect the public interest, because futures prices were susceptible to excessive speculation and could be manipulated to the detriment of producers, consumers, and others. Moreover, the act's legislative history noted that the fundamental purposes of the act were to ensure fair practices and honest dealing in the futures market and to control those forms of speculative activity that demoralize the market to the detriment of producers, consumers, and the markets. While providing for their regulatory oversight, the CEA does not define the term futures contract. Instead, CFTC and the courts have identified certain elements as necessary, but not always sufficient, for defining a futures contract.
These elements are the obligation of each party to fulfill the contract at a specified price set at the contract’s initiation, the use of the contract to shift or assume the risk of price changes, and the ability to satisfy the contract by either delivering the underlying commodity or offsetting the original contract with another contract. CFTC and the courts have also identified additional elements of exchange-traded futures contracts, including standardized terms, margin requirements, use of clearinghouses, open and competitive trading in centralized markets (such as futures exchanges), and public price dissemination. These additional elements facilitate futures trading on exchanges but do not define what makes a contract a futures contract. Also, according to CFTC and the courts, the requirement that a futures contract be exchange-traded is what makes the contract legal, not what makes it a futures contract. Because CFTC and the courts have defined a futures contract in a way that reflects its risk-shifting function, the CEA potentially covers a broad range of risk-shifting products that are not exchange-traded. The CEA also provides CFTC with jurisdiction over commodity options, except options on securities and options on foreign currencies traded on a national securities exchange. CFTC’s options jurisdiction is further limited by the recent U.S. Supreme Court decision in CFTC v. Dunn. Commodity options include options to acquire futures contracts (called options on futures) and options to acquire the actual commodity, excluding securities. CFTC has issued regulations to allow futures exchanges, subject to its approval, to trade options on futures in any commodity and options on actual commodities other than domestic agricultural commodities. Futures exchanges have been trading options since 1982, and virtually all options traded on futures exchanges are options on futures. 
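The margin and clearinghouse elements mentioned above can be sketched numerically. The contract size and daily prices below are hypothetical; the point is only to show how daily settlement through a clearinghouse converts a position's cumulative price move into a series of small daily payments:

```python
# Hypothetical daily settlement ("marking to market") of one long futures
# contract through a clearinghouse. Each day the clearinghouse credits or
# debits the holder with that day's price change. Prices are in cents.

contract_size = 5_000                       # units per contract (illustrative)
entry_price = 400                           # settlement price at initiation
daily_settlements = [398, 401, 405, 403]    # hypothetical daily prices

margin_flows = []
previous = entry_price
for settle in daily_settlements:
    # A long position gains when the price rises and loses when it falls.
    margin_flows.append(contract_size * (settle - previous))
    previous = settle

# The flows sum to the total price move times the contract size, so no
# counterparty can accumulate a large unpaid loss between settlements.
total_flow = sum(margin_flows)
print(margin_flows, total_flow)
```

Because the clearinghouse stands between buyer and seller and collects these flows daily, neither original counterparty needs to monitor the other's creditworthiness, which is the credit-risk point the report develops below.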
CFTC has also issued regulations to allow certain options on commodities other than domestic agricultural commodities (called trade options) to be traded off-exchange. These OTC options are to be offered and sold to commercial counterparties who enter into transactions for purposes related solely to their business.

Since the 1974 amendments to the CEA and the creation of CFTC, the U.S. futures market has evolved far beyond its agricultural origins and is now dominated by futures based on financial products. In 1975, the largest commodity group was domestic agricultural commodities, accounting for nearly 80 percent of total trading volume. By 1996, the largest group was interest rate contracts, accounting for 54 percent of total trading volume. At the same time, agricultural commodities accounted for about 19 percent of total trading volume. According to the exchanges and others, the participants in the futures market have changed as the market evolved. They noted that the participants are now largely institutions and market professionals, with retail customers representing a smaller proportion of total market participants than they did when the act was amended in 1974. During this period, the CEA has remained the primary statute specifically created to regulate the trading of derivative products.

OTC derivatives and exchange-traded futures have similar characteristics and economic functions but differ in other ways. The market values of both products are determined by the value of an underlying asset, reference rate, or index. The economic uses of both products include hedging financial risk and investing with the intent of profiting from price changes, called speculating. OTC derivatives and exchange-traded futures differ in the way they are traded and cleared as well as in their degree of standardization. OTC derivatives, which include forwards, options, and swaps, are privately negotiated contracts. 
They are entered into between counterparties, also called principals, outside centralized trading facilities, such as futures exchanges. Counterparties negotiate contract terms—such as price, maturity, and quantity—to customize the contracts to meet their specific economic needs. Because OTC derivatives are entered into on a principal-to-principal basis, each counterparty is exposed to credit risk—the risk of loss resulting from the other party’s failure to meet its financial obligation. In contrast, futures traditionally have been traded on organized exchanges as well as cleared and settled through clearinghouses. Clearinghouses manage counterparty credit risk, in part by substituting themselves as the buyer to every seller and the seller to every buyer. They also guarantee daily settlement of price changes, thereby eliminating the need for the original counterparties to monitor each other’s creditworthiness. Exchange-traded futures generally have standardized terms—except for price, which the market determines.

The exchange-traded futures and OTC derivatives markets have followed similar evolutionary paths. Exchange-traded futures developed from forward grain contracts that were customized and traded on a principal-to-principal basis. They evolved into contracts that have standardized terms, except for price, and are traded on centralized exchanges. Similarly, OTC derivatives originated as customized contracts that involved brokers finding and matching counterparties. Today, almost all OTC derivatives are traded through dealers. An industry association has developed standardized documentation for certain OTC derivatives, including swaps. However, each contract, including its material terms, continues to be privately negotiated between the two counterparties. The less complex interest rate and foreign-exchange swaps, called plain vanilla swaps, have become more homogeneous in terms of underlying reference rates or indexes and maturities. 
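The plain vanilla interest rate swap just described can be sketched numerically. The notional amount, rates, and number of periods below are hypothetical; the sketch shows that the notional itself is never exchanged, only the net difference between the fixed and floating legs:

```python
# Hypothetical plain vanilla interest rate swap: one counterparty pays a
# fixed rate and receives a floating rate on a notional amount. Rates are
# in basis points (1/100 of a percent) to keep the arithmetic exact.

notional = 10_000_000                      # $10 million notional, never exchanged
fixed_rate_bp = 500                        # fixed leg: 5.00% per period
floating_rates_bp = [400, 500, 600, 700]   # floating rate observed each period

# Net payment to the fixed payer each period: positive means the fixed
# payer receives money (floating exceeded fixed); negative means it pays.
net_payments = [notional * (f - fixed_rate_bp) // 10_000
                for f in floating_rates_bp]
print(net_payments)  # [-100000, 0, 100000, 200000]
```

Settlement on a net basis, with no delivery of the underlying, is the feature the report notes next: most swaps, like most exchange-traded futures, are settled without delivery.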
The majority of both swaps and exchange-traded futures are settled without delivery of the underlying commodity or financial asset. Because OTC derivatives and exchange-traded futures serve similar economic functions, they can be used as substitutes for one another and thus may compete in the marketplace. However, they are not perfect substitutes because of potential differences in their contract terms as well as transaction costs, regulations, and other factors. OTC derivatives and exchange-traded futures can also complement each other. For example, swaps dealers use exchange-traded futures to hedge the residual risk resulting from unmatched positions in their swaps portfolios. Similarly, food processors, grain elevators, and other commercial firms use exchange-traded futures to hedge their forward positions.

To address our two objectives, we reviewed the CEA and its legislative history, Federal Register notices, comment letters, and other material related to CFTC’s exemptions for hybrid, OTC energy, swaps, and exchange-traded futures contracts. We also interviewed CFTC officials, including past commissioners, about the agency’s use of its exemptive authority for OTC derivatives and exchange-traded futures as well as the legal and regulatory issues raised by these markets. Furthermore, we interviewed officials of three futures exchanges (the Chicago Board of Trade, Chicago Mercantile Exchange, and New York Mercantile Exchange), the Federal Reserve Board, the Office of the Comptroller of the Currency, and the Securities and Exchange Commission (SEC) to obtain their views concerning legal and regulatory issues related to the exempted OTC derivatives. In addition, we attended conferences and congressional hearings as well as reviewed legal cases, journal articles, books, and reports pertaining to the CEA and the OTC derivatives and exchange-traded futures markets. 
Although OTC derivatives raise issues that extend beyond the CEA, we limited our review to the legal and regulatory issues raised within the context of the act. Given this focus, our discussion centered on futures, forwards, and swaps and generally did not cover other financial products, including securities options, asset-backed securities, and structured notes, which are regulated under the federal securities laws. We requested comments on a draft of this report from the heads, or their designees, of CFTC, the Department of the Treasury, the Federal Reserve Board, the Office of the Comptroller of the Currency, and SEC. We also requested comments from three futures exchanges (the Chicago Board of Trade, Chicago Mercantile Exchange, and New York Mercantile Exchange), the New York Stock Exchange, and four industry associations (the Futures Industry Association, International Swaps and Derivatives Association, Managed Futures Association, and National Futures Association). CFTC, the Department of the Treasury, the Federal Reserve Board, and SEC provided us with written comments under a joint response as members of the President’s Working Group on Financial Markets. We also obtained written comments from two futures exchanges (the Chicago Mercantile Exchange and Chicago Board of Trade) and the four industry associations. These comments are discussed at the end of this report and are reprinted in appendixes I through VII. We did not receive written comments from the Office of the Comptroller of the Currency, New York Mercantile Exchange, or New York Stock Exchange. In addition, officials from CFTC, the Department of the Treasury, the Federal Reserve Board, the Office of the Comptroller of the Currency, SEC, the International Swaps and Derivatives Association, and the Chicago Mercantile Exchange provided us with technical comments that were incorporated into the report as appropriate. 
We did our work in Chicago, New York, and Washington, D.C., between August 1994 and February 1997 in accordance with generally accepted government auditing standards.

Before 1993, swaps and other OTC derivatives contracts faced the legal risk of being deemed illegal off-exchange futures and thus unenforceable under the CEA. To reduce this risk and promote innovation and fair competition, Congress granted CFTC exemptive authority under the Futures Trading Practices Act of 1992. CFTC used its authority in 1993 to exempt swaps and other OTC derivatives from most CEA provisions (including the exchange-trading requirement), thereby reducing or eliminating their legal risk. However, a narrow group of swaps that are ineligible for the exemption continue to face the risk of being illegal futures. In addition, certain unregulated forwards have become increasingly difficult to distinguish from regulated futures, resulting in legal risk. The CEA excludes forwards and certain other OTC derivatives from its regulation, but many swaps and other OTC derivatives could not qualify for these exclusions. As a result, they faced the risk that CFTC or a court could find them to be illegal and, thus, unenforceable futures under the CEA. To reduce this legal risk, CFTC issued a policy statement in 1989 to clarify the conditions under which it would not regulate swaps as futures. CFTC’s policy statement, however, did not eliminate the risk of a court finding swaps to be futures. In 1990, a court found certain OTC derivatives that resembled unregulated forwards to be futures, which heightened the legal risk for swaps and other OTC derivatives. Following the court decision, CFTC issued a statutory interpretation holding that the OTC derivatives in question were forwards, not futures. Due to their similarities to futures, swaps and other OTC derivatives faced the legal risk of being deemed futures under the CEA, making them illegal and, thus, unenforceable. 
These contracts were developed in the 1980s to meet the risk-management, financing, and other needs of market participants. Swaps evolved from parallel loans that involved two parties making loans to each other in equal amounts but denominated in different currencies. Over time, swaps were developed based not only on foreign currencies but also on interest rates, commodities, and securities. These contracts, like forwards, were entered into between two counterparties outside an exchange and could be viewed as serving a similar economic function as a series of forwards. However, swaps differed from forwards in that they typically did not entail delivery of the specified underlying commodity, a hallmark of traditional forwards. As such, swaps generally were not considered forwards for regulatory purposes. Consequently, they did not fall under the CEA’s forward exclusion (discussed below), which would have excluded them from regulation under the act. Nor did many swaps fall under the CEA’s Treasury Amendment (discussed below), which excludes certain OTC transactions in foreign currencies and other financial instruments from regulation under the act. Swaps that could not qualify for an exclusion from the CEA under its forward exclusion or Treasury Amendment faced the possibility of falling within the judicially crafted definition of a futures contract, because they, like futures, served a risk-shifting function. This possibility resulted in legal risk for such swaps by bringing into question their enforceability as futures under the act. If such swaps were found to be futures, they would be illegal and unenforceable, because they would have been traded off-exchange in violation of the CEA’s exchange-trading requirement. Given the legal uncertainty surrounding the status of swaps as futures, swaps counterparties faced legal risk from two sources. First, CFTC could take enforcement action and find swaps to be illegal, off-exchange futures contracts. 
Second, counterparties on the losing side of swaps could try to have a court invalidate the contracts as illegal, off-exchange futures contracts. To reduce the legal risk of unenforceability in the swaps market, CFTC issued a swaps policy statement in 1989 that clarified the conditions under which it would not regulate certain swaps as futures. In part, CFTC predicated its swaps policy statement on the rationale that swaps lacked certain elements that facilitated futures trading on exchanges, such as standardized terms and a clearinghouse. As such, swaps were not suitable for exchange trading and, in turn, not appropriately regulated as exchange-traded futures contracts. In this regard, CFTC identified conditions (collectively called a safe harbor) that swaps settled in cash could meet to avoid regulation under the CEA. These conditions were that the swaps have individually tailored terms, be used in conjunction with the counterparty’s line of business, not be settled using exchange-style offset or a clearinghouse, and not be marketed to the general public. CFTC’s swaps policy statement did not eliminate all legal risk of unenforceability. It removed the legal risk that CFTC would take enforcement action against certain swaps, but it did not remove the legal risk that a swaps counterparty might try to have a court invalidate a swap as an illegal, off-exchange futures contract. A court finding that a swap was a futures contract could call into question the legality of other swaps—potentially threatening the market’s financial integrity and potentially presenting a source of systemic risk. Following the issuance of CFTC’s swaps policy statement, a federal district court found that certain OTC energy contracts were futures. This finding heightened the legal risk of unenforceability for swaps and other OTC derivatives because of the possibility that a court could also find them to be futures and subject to the CEA’s exchange-trading requirement. 
Judicial proceedings began in 1986 when commercial participants in the Brent oil market were sued for violating, among other laws, the CEA’s antimanipulation provisions. The participants responded by claiming that the contracts were forwards and excluded from the CEA because no contractual right existed to avoid delivery. In April 1990, a federal district court rejected the claim and found that the contracts were futures, not forwards. The court concluded that even though the contracts did not include a contractual right of offset for avoiding delivery, both the opportunity to offset the contracts and the common practice of doing so were sufficient to determine that the contracts were futures. Furthermore, the court found that the Brent oil contracts, like futures, were undertaken mainly to assume or shift price risk without transferring the underlying commodity. The contracts had highly standardized terms, which facilitated their settlement without delivery and reflected their use for risk-shifting or speculative purposes. On September 25, 1990, CFTC issued a statutory interpretation for forwards that adopted the view that the Brent oil contracts were forwards, not futures. CFTC did not dispute the court’s findings that these contracts were highly standardized and routinely settled by means other than delivery. Rather, it found that the contracts fell under the CEA’s forward exclusion because they required the commercial parties to make or take delivery, even though the parties did not routinely do so. CFTC noted that the contracts did not include any provisions that enabled the parties to settle their contractual obligations through means other than delivery, and the settlement of contracts without delivery was done through subsequent, separately negotiated contracts. 
In that regard, CFTC noted that these contracts served the same commercial function as forwards covered under the CEA exclusion, notwithstanding the fact that many of the individual contracts were settled routinely without delivery. One CFTC commissioner dissented from the agency’s statutory interpretation, which, he said, misinterpreted the CEA exclusion by broadening it to include transactions that were, among other things, generally standardized, used for noncommercial purposes, and offset. Following the court’s finding that certain OTC energy contracts were futures and recognizing the broader implications of that decision for other OTC derivatives, Congress granted CFTC exemptive authority under the Futures Trading Practices Act of 1992. The 1992 act granted CFTC the authority to exempt any contract from almost all CEA provisions (including the exchange-trading requirement), provided the exemption was consistent with the public interest and the contract was entered into solely between appropriate persons, as defined in the act. In granting an exemption, CFTC could impose any conditions on the exemption that it deemed appropriate. The only provision from which CFTC could not exempt a contract was section 2(a)(1)(B), which generally prohibits futures contracts on individual stocks and narrowly based stock indexes. According to the 1992 act’s legislative history, Congress expected CFTC to use its exemptive authority promptly to reduce legal risk for swaps, forwards, and hybrids. The legislative history noted that the goal of providing CFTC with broad exemptive authority was to give CFTC a means of providing certainty and stability to existing and emerging markets so that financial innovation and market development could proceed in an effective and competitive manner. It also noted that CFTC could exempt a contract without first determining that the contract was a futures contract and subject to the act. 
Using its exemptive authority, CFTC exempted a broad group of swaps as well as hybrids from virtually all CEA provisions—including the exchange-trading requirement—in January 1993. In response to a request by a group of commercial firms in the energy market, CFTC granted a similar exemption in April 1993 to specified OTC energy contracts, which included Brent oil contracts. These exemptions eliminated the legal risk that the qualifying contracts could be deemed illegal, off-exchange futures contracts. If CFTC or a court found an exempted contract to be a futures contract, the contract would still be legal, because it would no longer need to be traded on a designated market, or exchange. As a result, uncertainty was reduced and with it, the potential for any related systemic risk. At that time, CFTC noted that the exemptions should enhance U.S. market participants’ ability to innovate by enabling them to structure OTC contracts to best meet their economic needs, which should enable market participants to compete more effectively in international markets. In granting its exemptions, CFTC did not determine that the OTC derivatives covered by the exemptions were or were not futures or otherwise excluded from the act’s jurisdiction. CFTC noted that it had not made and was not obligated to make such a determination. CFTC’s swaps exemption does not extend to a narrow group of swaps, so-called equity swaps. Because of the possibility that swaps are futures, these nonexempted swaps continue to face the legal risk of being deemed illegal and, thus, unenforceable futures. CFTC enforcement actions involving OTC derivatives can increase such legal risk for these swaps. CFTC’s swaps exemption does not extend to equity swaps, whose returns are based on stocks or stock indexes. Even if these swaps met all of the conditions of CFTC’s swaps exemption, they would not be exempt from CEA section 2(a)(1)(B), which codified the Shad-Johnson Jurisdictional Accord. 
Under the 1992 act, CFTC is allowed to exempt swaps from any CEA provision, except section 2(a)(1)(B), which divides jurisdiction on exchange-traded securities-related futures and options contracts between CFTC and SEC and prohibits futures on individual stocks or narrowly based stock indexes. Futures on broadly based stock indexes may be traded only on CFTC-designated markets, provided CFTC determines that the contracts are not settled through the delivery of the underlying stocks and are not readily susceptible to manipulation. SEC must also agree with CFTC’s determinations. According to market observers, if equity swaps were found to be futures contracts, they could be in violation of section 2(a)(1)(B) and thus be illegal and unenforceable. As long as the issue of whether swaps are futures is not definitively addressed by CFTC, the courts, or Congress, the possibility exists that equity swaps could be found to be futures and, thus, subject to the CEA. CFTC has noted, however, that market participants using equity swaps may continue to rely on its 1989 swaps policy statement. As discussed earlier, the policy statement removed the legal risk that CFTC would take enforcement action against certain swaps, but it did not remove the risk that a court could invalidate such contracts by deeming them to be illegal futures. In addition, the legal enforceability of equity swaps could be jeopardized indirectly through a finding that an exempted swap is a futures contract. For example, CFTC had proposed amending its swaps exemption to include a stand-alone, antifraud rule that would apply to exempted swaps. According to other federal regulators and market participants commenting on the proposal, the rule would have suggested that the exempted swaps were futures. This, in turn, would have suggested that equity swaps were also futures. Following the comment period, CFTC did not amend its swaps exemption to include the proposed change. 
According to the International Swaps and Derivatives Association, a finding that an exempted swap is a futures contract could increase legal risk by prompting losing counterparties to equity swaps to rely on the resulting legal uncertainty to avoid their performance obligations under such contracts. It noted that this could result in substantial losses and a market disruption. At a June 1996 hearing held by the Senate Committee on Agriculture, Nutrition and Forestry, the association testified that the legal risk surrounding equity swaps has inhibited their evolution and that this uncertainty needs to be addressed. The Bank for International Settlements estimated that the worldwide market for equity swaps and forwards had a total notional value of $52 billion, as of March 31, 1995, which accounted for less than 1 percent of the total notional value of the OTC derivatives market. CFTC’s enforcement actions involving OTC derivatives have highlighted the potential for such action to increase legal risk in the equity swaps market. In December 1994, CFTC and SEC cooperated in an enforcement action against BT Securities, a swaps dealer, for violating antifraud provisions of futures and securities laws in connection with swaps it sold. CFTC officials told us that swaps market participants did not want the agency to take any action against the swaps dealer that would suggest swaps were futures for fear of increasing legal risk for equity swaps. In its enforcement order, CFTC did not identify any of the swaps as futures. Rather, it found that BT Securities violated the CEA’s antifraud provisions in its role as a commodity trading advisor by providing the counterparty with misleading information about the swaps. According to market participants and observers, the finding implied that certain of the swaps sold by BT Securities were futures or commodity options, which raised questions regarding the status of swaps under the CEA. 
Recognizing the potential legal and regulatory implications, CFTC issued a news release stating that its actions did not affect the legal enforceability of swaps or signal an intent to regulate them. According to some market participants and observers, CFTC’s enforcement order against MG Refining and Marketing—a commercial firm—resulted in greater legal risk for forwards and equity swaps. In 1995, CFTC took enforcement action against MG Refining and Marketing for selling illegal, off-exchange futures to commercial counterparties. The firm sold contracts that purportedly required the delivery of energy commodities in the future at a price established by the parties at initiation. These contracts provided counterparties with a contractual right to settle the contracts in cash without delivery of the underlying commodity. This right could be invoked if the price of the underlying commodity reached a preestablished level. Based largely on this provision, CFTC found these contracts to be illegal, off-exchange futures. CFTC’s conclusion was consistent with prior court and CFTC decisions, which identified the contractual right to offset as a critical feature distinguishing forwards from futures. Nonetheless, some market participants and observers asserted that CFTC’s order broadened the definition of a futures contract, creating legal uncertainty over whether swaps and other OTC derivatives are futures and resulting in greater legal risk for forwards and equity swaps. In a letter sent to CFTC, two U.S. congressmen expressed their concern about the potential for CFTC’s enforcement order to bring into question the status of swaps as futures and to reflect a change in CFTC’s regulatory position on swaps. In response to the congressional inquiry, the then CFTC chairman wrote that the case had nothing to do with swaps. 
She noted that, with regard to swaps generally, CFTC had not taken a position on whether swaps were futures and continued to adhere to its 1989 swaps policy statement. She also noted that in this case CFTC did not deviate from its historical practice of looking at the totality of the circumstances—including the nature of the contract and market—in determining whether a particular transaction involved a futures contract. On February 4, 1997, Senator Lugar, Chairman of the Senate Agriculture Committee, Senator Harkin, Ranking Minority Member, and Senator Leahy introduced a bill to amend the CEA. The bill is similar to the one that Senators Lugar and Leahy introduced in the Fall of 1996, following the June 1996 hearing. As noted in a discussion document prepared by Senators Lugar and Harkin, the bill would provide greater legal certainty for equity swaps by codifying the existing swaps exemption and extending the exemption’s scope to include equity swaps. Forwards have been distinguished from futures based on whether the parties intended to make or take delivery of the underlying commodity when they entered into the contract. However, certain unregulated forwards have evolved to where delivery of the underlying commodity may not routinely occur, making it increasingly difficult to distinguish them from regulated futures and resulting in the legal risk that they could be unenforceable. The CEA does not provide clear criteria for distinguishing forwards from futures, but CFTC’s exemptions reduce the need to do so for the purpose of addressing legal risk. As discussed above, since its enactment in 1936, the CEA has excluded forward contracts from its regulation to facilitate the movement of commodities through the merchandizing chain. Absent a definition of a forward contract in the CEA, CFTC and the courts have generally defined these contracts in reference to futures contracts. 
Traditionally, they distinguished forwards from futures based on whether the parties intended to make or take delivery of the underlying commodity when they entered into the contract. Forwards served primarily a commercial function and, as such, entailed delivery of the underlying commodity in normal commercial channels, but delivery was to occur at a later date. In contrast, futures were used primarily to shift or assume price risk without transferring the underlying commodity; thus, actual delivery was not expected to occur. In short, CFTC and the courts defined a forward as a contract that bound one party to make delivery and the other to take delivery of the contract’s underlying physical commodity. Since forwards were commercial transactions that resulted in delivery, CFTC and the courts looked for evidence of the contracts’ use in commerce. In particular, they examined whether the parties were commercial entities that could make or take delivery and whether delivery routinely occurred. Besides the Brent oil market, other forward markets are evolving in response to the risk-management and commercial needs of their participants. For example, changes in U.S. farm policy, increased globalization of the agricultural markets, and other factors may have increased price volatility in the agricultural markets and created a demand for more innovative risk-management contracts. According to agricultural market participants, traditional forwards do not provide producers with sufficient flexibility because of their delivery requirement. In response to participants’ needs, the forward market for agricultural commodities has evolved to include variations of forwards that may not routinely result in delivery. Contracts that routinely allow parties to offset, cancel, or void delivery obligations rather than transfer the underlying commodity may be viewed as futures contracts or trade options, depending on their pricing structure. 
CFTC permits the sale of trade options on nonagricultural commodities, but prohibits the sale of such options on domestic agricultural commodities. This prohibition was intended, in part, to protect producers from unscrupulous parties who might try to take advantage of their lack of knowledge about these options. One variation of a forward experiencing increased use is the hedge-to-arrive contract. Although varying in design, these are privately negotiated contracts in which a producer agrees with an elevator to deliver grain on a future date at an agreed-upon price, and the elevator uses exchange-traded futures to hedge the sale on behalf of the producer. Some of these contracts have allowed producers to defer the delivery dates on their contracts beyond the current crop year, which has exposed producers to significant price risk because their contracts were no longer tied to the current crop year. According to market observers, unusual factors, such as high grain prices and poor weather conditions, have resulted in financial problems for some parties that deferred delivery into future crop years. In May 1996, CFTC staff issued a policy statement for hedge-to-arrive contracts to allow counterparties experiencing losses to settle their contracts without delivery by entering into subsequent, separately negotiated contracts. CFTC noted that it would not find hedge-to-arrive contracts existing as of May 15, 1996, to be illegal based solely on the cash settlement of such contracts for the purpose of unwinding them, but may find them to be illegal based on other factors. CFTC or a court could find some hedge-to-arrive contracts or other variations on agricultural forwards to be futures or agricultural trade options. Either finding would make them illegal and unenforceable, provided the contracts did not qualify for the swaps exemption. 
For example, in November 1996, CFTC filed three administrative complaints, two of which alleged, among other things, that two elevators had offered and sold hedge-to-arrive contracts that were illegal, off-exchange futures. In these two complaints, CFTC noted that the elevators sold the hedge-to-arrive contracts to some producers who lacked the intent or capacity to make delivery of the grain. CFTC also noted that some producers did not qualify as eligible participants under the swaps exemption. CFTC further noted that the contracts contained a cancellation provision that permitted producers to effect an offset of their contracts. While the CEA excludes forwards from its regulation because of their commercial merchandising purpose, it does not provide clear criteria for distinguishing forwards from futures. In particular, the CEA does not specify what constitutes delivery under the forward exclusion and, thus, when a forward becomes a futures contract. Given the lack of clear criteria, the evolution of certain forwards to the point where delivery may not routinely occur has made it increasingly difficult to distinguish unregulated forwards from regulated futures. As illustrated by the Brent oil and hedge-to-arrive contracts, the difficulty in distinguishing between forwards and futures can result in legal risk. Under its 1990 statutory interpretation for forwards (discussed above), CFTC tried to reduce the legal risk and regulatory constraints that forwards face because of the delivery requirement, thereby permitting them to evolve to better meet the economic needs of end-users. However, its interpretation does not provide a clear basis for distinguishing forwards from futures in terms of their economic purpose. For example, it does not preclude forwards from being settled routinely without delivery and, in the process, being used primarily for risk-shifting or speculative purposes instead of a commercial merchandising purpose. 
CFTC’s exemptions for OTC energy and swaps contracts reduce the need to distinguish unregulated forwards from regulated futures for the purpose of addressing the legal risk of being unenforceable. CFTC’s OTC energy contract exemption reduces legal risk for certain forwards that routinely settle without delivery, but it is limited to OTC derivatives based on specified energy products. Although the exemption covers Brent oil contracts that CFTC determined earlier to be forwards under its 1990 interpretation, CFTC noted that the exemption does not affect its interpretation. However, as with its 1989 swaps policy statement, CFTC’s forward interpretation does not eliminate all legal risk. It removes the legal risk of CFTC taking enforcement action against a contract that is consistent with its interpretation, but it does not eliminate the risk of a counterparty trying to have a court invalidate the contract as an illegal, off-exchange futures contract. CFTC’s swaps exemption further reduces the need to distinguish unregulated forwards from regulated futures to address legal risk. Contracts that resemble forwards but do not entail delivery may qualify for the swaps exemption. Qualifying contracts would not be illegal and unenforceable, even if CFTC or a court found them to be futures, because they would be exempt from the exchange-trading requirement. The swaps exemption is limited to “eligible” participants, which are largely institutional and other sophisticated market participants. Consequently, the exemption generally does not extend to contracts that involve unsophisticated market participants. Notwithstanding CFTC’s success in reducing or eliminating the legal risk of unenforceability that most OTC derivatives faced, issues remain that raise a broader policy question about the appropriate regulation for OTC derivatives and exchange-traded futures, including their markets and market participants. 
Congress alluded to this topic in the legislative history of the Futures Trading Practices Act of 1992 by noting that the growth and proliferation of OTC derivatives raises questions of how best to regulate the new market, adding that studies by us and others would be useful when Congress considers the broader question of regulatory policy. To that end, we discuss, but do not attempt to resolve, three issues that are related to the question of how best to regulate the OTC derivatives and exchange-traded futures markets. These issues concern the (1) appropriate regulation for the OTC foreign-currency market under the Treasury Amendment, (2) appropriate regulation for the evolving swaps market, and (3) rationalization of regulatory differences between the OTC derivatives and exchange-traded futures markets. The CEA excludes, among other things, certain OTC foreign-currency transactions from CFTC regulation under its Treasury Amendment. However, the scope of the amendment has been difficult to interpret and the subject of considerable debate and litigation. CFTC has interpreted the amendment to exclude from the act’s regulation certain OTC foreign-currency transactions between sophisticated participants, but not similar transactions involving unsophisticated participants. The Treasury Department has disagreed with CFTC’s interpretation. While the federal courts have differed in their interpretation of the Treasury Amendment, they have recognized congressional intent to exclude the interdealer OTC foreign-currency market from regulation under the CEA. The Treasury Amendment excludes from CFTC regulation certain OTC transactions in, among other things, foreign currencies and government securities. 
During the debate over the 1974 amendments to the CEA, the Treasury Department expressed concern that the proposed changes—namely the expansion of the commodities covered under the act coupled with the exchange-trading requirement—would prohibit banks and other financial institutions from trading among themselves in foreign currencies and certain financial instruments, including government securities. The Treasury Department noted that futures trading in foreign currencies was done through an informal network of banks and dealers (called the interbank market), which serves the needs of international business to hedge risk stemming from foreign-exchange rate movements. The Treasury Department proposed the Treasury Amendment as a means of clarifying that the CEA did not cover this market, and Congress adopted the proposed amendment. According to the act’s legislative history, Congress noted that the interbank market was more properly supervised by the bank regulators and, therefore, regulation under the CEA was unnecessary. The Treasury Amendment has been difficult to interpret because its language is ambiguous. Although the amendment was motivated primarily by concern that the interbank foreign-currency market should be excluded from regulation under the act, its language is not limited to the interbank market. Rather, it excludes any transaction in, among other things, foreign currencies, unless the transaction involves sale for future delivery conducted on a board of trade. Before the recent U.S. Supreme Court decision in Dunn v. CFTC, considerable debate occurred over the meaning of the phrase “transactions in,” which defines the scope of the exclusion. Arguments were made that the phrase could be interpreted narrowly to mean only cash transactions in the subject commodity or broadly to encompass derivatives transactions such as futures or option contracts. In Dunn, the U.S. Supreme Court endorsed the broader interpretation. 
Furthermore, the CEA defines the term “board of trade,” which is used in the “unless” clause, to “mean any exchange or association, whether incorporated or unincorporated, of persons who shall be engaged in the business of buying or selling any commodity.” Consequently, this clause could be interpreted to save from the exclusion virtually any futures or option contract sold by a dealer, a construction that would render the amendment meaningless. The ambiguity of the statutory language has led to disagreements among regulators and courts over how the amendment ought to be interpreted. Because of its significant market impact, the activity that the Treasury Amendment excludes from regulation under the CEA has been the subject of considerable debate among federal regulators. Since at least 1985, CFTC has interpreted the Treasury Amendment to exclude from the act’s regulation certain OTC transactions between banks and other sophisticated institutions, drawing a distinction between sophisticated market participants and unsophisticated market participants who may need to be protected by government regulation. An OTC foreign-currency transaction, such as a foreign-exchange swap, sold to a financial institution would be excluded from the act’s regulation; a similar contract sold to the general public would not be excluded. CFTC drew this distinction to preserve its ability to protect the general public from, among other things, bucket shops engaging in fraudulent futures transactions—one of its missions under the CEA. According to CFTC, since 1990, the agency has brought 19 cases involving the sale of foreign-currency futures or options contracts to the general public; in those cases, more than 3,200 customers invested over $250 million, much of which was lost. Whether foreign-currency contracts sold to the general public are excluded by the Treasury Amendment, however, has remained a source of legal uncertainty. 
According to CFTC, if the amendment were interpreted to cover contracts sold to the general public, the agency’s ability to prohibit the fraudulent activities of bucket shops dealing in foreign-currency contracts would be effectively eliminated, creating a regulatory gap. The Treasury Department, however, has objected that CFTC’s approach to the Treasury Amendment lacks a foundation in the language of the statute. It has advocated the reading of the Treasury Amendment adopted by the U.S. Supreme Court in Dunn—that is, the Treasury Amendment excludes from CFTC jurisdiction any transaction in which foreign currency is the subject matter, including foreign-currency options, unless conducted on a board of trade. Nevertheless, it has expressed sympathy with CFTC’s concerns over fraudulent foreign-currency contracts marketed to the general public. The Treasury Department has suggested that CFTC may be able to interpret the term “board of trade” in a carefully circumscribed manner that would allow appropriate enforcement action against fraud without raising questions about the validity of established market practices. The federal courts have differed in their interpretation of what activity the Treasury Amendment excludes from regulation under the CEA. In spite of these differences, the courts have recognized congressional intent to exclude the interdealer foreign-currency market from regulation. However, past court cases have highlighted the legal confusion over whether the Treasury Amendment excludes from the act’s regulation transactions in foreign currencies that involve the general public. The Second Circuit Court of Appeals held in Dunn that option contracts are not covered by the Treasury Amendment and, therefore, are subject to CFTC jurisdiction. In doing so, it followed a precedent that it had established in a case involving the sale of currency options to private individuals. 
In that case, it reasoned that an option contract does not become a transaction in foreign currency that is excluded under the Treasury Amendment until the option holder exercises the contract. In February 1997, the U.S. Supreme Court reversed the Second Circuit’s decision in Dunn. The Court interpreted the “transactions in” language of the Treasury Amendment to exclude from CFTC regulation all transactions relating to foreign currency, including foreign-currency options, unless conducted on a board of trade. The Court noted that the public policy issues raised by the various parties affected by the decision were best addressed by Congress. The Fourth Circuit Court, in Salomon Forex, Inc. v. Tauber, held that sales of currency futures and options to a very wealthy individual are transactions in foreign currency that the Treasury Amendment excludes from regulation. The buyer of the contracts brought the action to avoid payment on transactions in which he had lost money. The court interpreted the amendment to exclude from the CEA individually negotiated foreign-currency option and futures transactions between sophisticated, large-scale currency traders. The court observed that the case did not involve mass marketing of contracts to small investors and stated that its holding did not imply that such marketing was exempt from the CEA. The Ninth Circuit Court, in CFTC v. Frankwell Bullion Ltd., affirmed a lower court holding that the Treasury Amendment excludes the sale of off-exchange foreign-currency futures and options from the CEA without regard to whom the contracts are sold. CFTC brought action to stop the seller of the contracts from allegedly selling illegal, off-exchange futures contracts to the general public. The Ninth Circuit Court’s review focused on the meaning of the clause “unless . . . conducted on a board of trade.” The court interpreted the clause to carve out of the exclusion only contracts sold on an organized exchange. 
The court acknowledged that the plain meaning of a board of trade as defined by the act would include more than exchanges. But the court rejected this interpretation in the context of the Treasury Amendment because it would cause the “unless” clause to encompass the entire exclusion and thereby render the amendment meaningless. Turning to congressional reports accompanying the 1974 legislation to explain the purpose of the Treasury Amendment, the court concluded that Congress intended to exclude from the CEA all transactions in the listed commodities except those conducted on an organized exchange. In December 1996, CFTC filed a petition with the Ninth Circuit Court requesting a rehearing, which was denied. At the June 1996 hearing held by the Senate Committee on Agriculture, Nutrition and Forestry, the then acting CFTC chairman testified that the agency and Treasury Department were working to clarify the treatment of foreign-currency transactions under the Treasury Amendment, but that reaching an accord would take time. At the hearing, two futures exchanges testified that congressional action was needed to clarify the Treasury Amendment’s scope, particularly in view of the U.S. Supreme Court’s decision to review the Dunn case. They said that a court finding that the amendment excludes all off-exchange futures and options on foreign currencies could shift such business away from the exchanges to the less regulated OTC market and adversely affect their competitiveness. As mentioned earlier, Senators Lugar, Harkin, and Leahy introduced a bill in February 1997 to amend the CEA. The bill includes a provision to clarify the scope of the Treasury Amendment. According to a discussion document prepared by Senators Lugar and Harkin, the bill reflects the view that a federal role is needed in the market to protect retail investors from abusive or fraudulent activity in connection with the sale of foreign currency futures and options by unregulated entities. 
The discussion document further notes that under the bill CFTC has no jurisdiction over retail transactions that are subject to oversight by other federal regulators or nonretail transactions. On January 21, 1997, Congressman Ewing, Chairman of the House Subcommittee on Risk Management and Specialty Crops, introduced a bill to amend the CEA. The bill is identical to the one that he introduced in the Fall of 1996. It proposes, among other things, to amend the Treasury Amendment to clarify that CFTC has regulatory authority only over standardized contracts sold to the general public and conducted on a board of trade. The bill defines board of trade in the context of the Treasury Amendment as “any facility whereby standardized contracts are systematically marketed to retail investors.” The potential for the exempted swaps market to evolve beyond the conditions of the swaps exemption raises the issue of how to accommodate market developments and address attendant risks and other regulatory concerns. CFTC imposed conditions on exempted swaps that prohibited them from being traded and cleared in the same ways as exchange-traded futures—on a centralized trading facility and through a clearinghouse. Since then, the swaps market has continued to develop, becoming more liquid and transparent. Among other alternatives, CFTC could use its exemptive authority to accommodate any development that is inconsistent with the conditions of the existing exemption—for example, the development of a clearinghouse—and address any attendant risks to the market. However, such an approach could prompt legal challenges and raise jurisdictional questions. CFTC’s swaps exemption allows exempted swaps to trade legally outside regulated exchanges—free from all CEA provisions, except certain antifraud and antimanipulation provisions, and free from all CFTC regulations. 
In granting the swaps exemption, CFTC did not take a position on whether exempted swaps were futures contracts and subject to the CEA’s jurisdiction. CFTC noted that it had not made and was not obligated to make such a determination. CFTC specified four conditions that swaps had to meet to qualify for an exemption. First, they had to be entered into solely by eligible participants, namely institutional and other sophisticated market participants. Eligible participants include banks, securities firms, insurance companies, commercial firms meeting minimum net worth requirements, and individuals meeting minimum total asset requirements. Second, they could not be fungible with standardized, material economic terms. Third, the creditworthiness of the counterparties had to be a material consideration. With this condition, exempted swaps could not be cleared, like exchange-traded futures, through a clearinghouse. Fourth, they could not be entered into and traded on or through a multilateral execution facility, such as a futures exchange. According to CFTC, these four conditions were intended to reflect the way that swaps transactions occurred in 1993 when the exemption was granted and to draw a line at which such transactions would not raise significant regulatory concerns under the CEA. CFTC officials told us that Congress directed the agency to exempt swaps as they were then transacted to provide them with legal certainty. In addition, the four conditions distinguished the exempted swaps from exchange-traded futures for regulatory—not legal—purposes. That is, the exemption excluded from regulation under the CEA swaps that did not possess certain characteristics common to exchange-traded futures; it did not establish that exempted swaps were not futures or otherwise excluded from the act’s jurisdiction. 
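The four conditions above function as a checklist that a swap must satisfy in full. As a rough illustration only, the sketch below encodes them as boolean checks on a hypothetical contract record; the field names and the representation of an "eligible" participant are assumptions made for this example, since the actual eligibility tests (minimum net worth, minimum total assets, and so on) are spelled out in CFTC's exemptive rule, not here.

```python
# Illustrative sketch only: the four swaps-exemption conditions described
# above, encoded as boolean checks. All field names are hypothetical; the
# real eligibility tests are specified in CFTC's exemptive rule.

def qualifies_for_swaps_exemption(swap):
    return (
        # 1. Entered into solely by eligible (sophisticated) participants
        all(p["eligible"] for p in swap["participants"])
        # 2. Not fungible, i.e., no standardized material economic terms
        and not swap["standardized_material_terms"]
        # 3. Counterparty creditworthiness is a material consideration
        and swap["creditworthiness_material"]
        # 4. Not entered into or traded on a multilateral execution facility
        and not swap["multilateral_execution_facility"]
    )

example = {
    "participants": [{"eligible": True}, {"eligible": True}],
    "standardized_material_terms": False,
    "creditworthiness_material": True,
    "multilateral_execution_facility": False,
}
print(qualifies_for_swaps_exemption(example))  # True
```

Because every condition must hold, flipping any one field (for example, routing the trade through a multilateral execution facility) makes the contract fail the checklist, which mirrors how such a development would take a swap outside the exemption.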
The conditions generally reflected the elements that facilitate futures trading on an exchange, including standardized units, a clearinghouse, and open and competitive trading in a centralized market. As CFTC and the courts have noted, these elements developed in conjunction with the growth of the futures market to facilitate futures trading on exchanges; however, their presence or absence does not necessarily determine whether a contract is a futures contract. CFTC and others (including federal regulators and market observers) have acknowledged that a centralized trading facility and/or clearinghouse could benefit the swaps market and general public. For example, such facilities could increase the market’s liquidity and transparency and enhance the market’s financial integrity. In its 1993 exemptive release for swaps, CFTC noted that such facilities did not yet exist and their existence would present different regulatory issues than are raised under the current swaps exemption. Recognizing the potential benefits of such facilities, CFTC left open the opportunity for market participants to develop and use such facilities, provided that such facilities receive CFTC’s prior approval. As discussed, Senators Lugar, Harkin, and Leahy recently introduced a bill to amend the CEA that includes a provision to codify the existing swaps exemption. As noted in the discussion document prepared by Senators Lugar and Harkin, the provision would not affect CFTC’s power to grant additional exemptions or to amend the existing exemption to make it less restrictive. However, the provision would require a statutory change to make the existing swaps exemption more restrictive. According to market observers, the provision addresses the concern of OTC market participants that CFTC could modify the swaps exemption in a way that could disrupt the market. 
At a February 11, 1997, hearing held by the Senate Committee on Agriculture, Nutrition and Forestry, CFTC testified against the provision, noting that it would eliminate the agency’s ability to modify the existing swaps exemption in response to market developments. Under the swaps exemption, the swaps market has become more liquid and transparent. Swaps are traded primarily through dealers, some of whom are linked through electronic communication networks that allow them to exchange price information and negotiate transactions. Swaps are commonly executed using standardized documentation, but each contract—including its material terms—continues to be privately negotiated between two counterparties. As mentioned above, plain vanilla interest rate and foreign-exchange swaps have become more homogeneous, with dealers providing “indicative” (nonbinding) quotes for such swaps. Market participants have noted that the market for plain vanilla interest rate swaps has become very liquid and transparent, with pricing information readily available from independent sources. Increased liquidity and transparency can facilitate the use of offsetting contracts to terminate open contracts. Some swaps market participants are increasingly using practices that are similar, but not identical, to those used in the exchange-traded futures market to reduce credit and other risks. These practices may reduce systemic risk and encourage greater market efficiency. Some swaps participants are using bilateral netting, which is the combining of payment obligations arising from multiple transactions with one counterparty into one net payment. In addition, some are periodically determining the value of their swaps using market values, called marking-to-market. This practice facilitates the movement of collateral, such as cash or U.S. government securities, to reduce the financial exposure of counterparties from open contracts. 
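The arithmetic behind bilateral netting and mark-to-market collateralization is straightforward, and a minimal sketch may make the two practices concrete. The figures and function names below are illustrative assumptions, not drawn from the report; real collateral agreements add thresholds, minimum transfer amounts, and haircuts that are omitted here.

```python
# Minimal sketch of the two risk-reduction practices described above.
# All amounts are hypothetical.

def net_payment(obligations):
    """Bilateral netting: combine payment obligations arising from multiple
    transactions with one counterparty into one net payment. Positive
    amounts are owed to us; negative amounts are owed by us."""
    return sum(obligations)

def collateral_call(mark_to_market_value, posted_collateral, threshold=0.0):
    """Marking-to-market: given the current market value of open contracts
    with a counterparty and the collateral already posted, compute the
    additional collateral needed to cover the remaining exposure."""
    exposure = max(mark_to_market_value - threshold, 0.0)
    return max(exposure - posted_collateral, 0.0)

# Three obligations with one counterparty collapse into a single payment:
print(net_payment([5.0, -2.0, 1.5]))  # 4.5 owed to us
# Contracts marked at 4.5 against 3.0 of posted collateral:
print(collateral_call(4.5, 3.0))      # 1.5 of additional collateral due
```

The netting step shrinks the gross payment flows between the two counterparties, and the mark-to-market step keeps the uncollateralized exposure from any one counterparty bounded, which is how these practices can reduce credit risk without a clearinghouse.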
In comparison, exchanges reduce credit risk by collecting margin (payment required on open contracts that decline in value) on at least a daily basis and by interposing a clearinghouse as the guarantor of all contracts. As discussed above, in each exchange-traded futures transaction, the clearinghouse is substituted for the original parties, becoming the buyer to every seller and the seller to every buyer. Through this process, the clearinghouse assumes the credit risk of each transaction and mutualizes it among all clearing members. While swaps market participants do not use clearinghouses, two futures exchanges are developing collateral depositories to help manage swaps positions and collateral for OTC market participants. Unlike a clearinghouse, they would not guarantee contract performance. One exchange has reported that it is developing exchange-traded swaps and plans for its depository to ultimately guarantee their performance. Although difficult to predict, the swaps market might develop in ways that are inconsistent with the conditions of the existing swaps exemption. Such developments could present risks to the market that warrant greater federal regulation to protect the public interest. An example of such a development would be the creation of a swaps clearinghouse. A clearinghouse could provide benefits, such as reducing credit risk and increasing market access, but it could also increase systemic risk by concentrating credit risk in a single entity and thus might require federal oversight. CFTC’s swaps exemption does not bar a clearinghouse, but it does require that a proposal for such a facility be submitted to CFTC for review. As noted above, CFTC’s swaps exemption includes a condition that requires each counterparty to consider the other’s creditworthiness. Because of this requirement, swaps market participants may not be able to use a clearinghouse without jeopardizing their exempt status and becoming subject to the CEA’s regulatory requirements. 
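The clearinghouse mechanics described above, in which margin is collected at least daily on positions that decline in value and the clearinghouse stands as buyer to every seller, can be sketched with a simple variation-margin calculation. The prices and position sizes are illustrative assumptions only.

```python
# Illustrative sketch of daily variation margin on an exchange-traded
# futures position. Prices and position sizes are hypothetical.

def variation_margin(prev_settle, new_settle, position):
    """Daily cash flow on an open position: a long (position > 0) pays
    when the settlement price falls and collects when it rises; the
    reverse holds for a short (position < 0)."""
    return (new_settle - prev_settle) * position

# A long of 10 contracts and the matching short of 10 contracts, with the
# settlement price falling from 100 to 98:
long_flow = variation_margin(100.0, 98.0, 10)    # -20.0: the long pays
short_flow = variation_margin(100.0, 98.0, -10)  # +20.0: the short collects
```

Because the flows offset exactly, the clearinghouse that stands between the two parties has no net market exposure; what it assumes, and mutualizes among clearing members, is the credit risk that a losing party fails to pay its margin.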
According to CFTC, the development of a swaps clearinghouse would not necessarily require CFTC to amend the exemption. Instead, CFTC could exempt a swaps clearinghouse from the CEA’s provisions (except section 2(a)(1)(B)) on such conditions as it deemed appropriate. According to CFTC officials, the extent to which CFTC would need to impose conditions on a clearinghouse would depend on the facility’s design, the applicability of other regulatory regimes, and other factors. Among other alternatives, CFTC could use its exemptive authority to accommodate a swaps clearinghouse or any other market development that is inconsistent with the conditions of the existing swaps exemption. In accommodating such a development, CFTC may need to include conditions in the exemption to ensure that its risks and other regulatory concerns are appropriately addressed. Depending on the risks and concerns, such conditions may include reporting, recordkeeping, disclosure, or other regulatory requirements that are similar to the regulations that CFTC has imposed on the OTC derivatives under its oversight—trade options, dealer options, and leverage contracts. Imposing regulatory conditions on swaps participants might be an effective way to address potential risks to the market that could result from a swaps market development. However, such an approach could prompt legal challenges and raise jurisdictional questions. First, as long as the issue of whether swaps are futures is not definitively addressed, the possibility remains that a court could find swaps to be outside the jurisdiction of the CEA if CFTC tried to use its exemptive authority to impose affirmative requirements on swaps. Second, imposing affirmative requirements on swaps might suggest that swaps are futures and subject to regulation under the CEA, even if CFTC did not explicitly make that determination. 
Any suggestion that swaps are futures and subject to regulation under the CEA could have policy ramifications for the swaps market because of CFTC’s exclusive jurisdiction over futures. Any such suggestion could also raise jurisdictional questions involving federal bank regulators and SEC because of their oversight or regulation of swaps participants or swaps. Tasked with considering new developments in the financial markets, including the increasing importance of the OTC derivatives market, the President’s Working Group on Financial Markets provides one forum through which CFTC and other federal regulators could address such issues. The development of the swaps and exchange-traded futures markets has raised questions about the rationale for their regulatory differences—recognizing that each market may not raise the same risks and, thus, warrant the same regulations. Swaps and exchange-traded futures are similar in their characteristics and economic functions, but differ in, among other ways, their trading environment and regulations. As discussed above, CFTC exempted swaps and other OTC contracts from regulation under the CEA. In 1995, CFTC also granted the exchanges an exemption from certain regulations to enable them to compete more effectively against the less regulated OTC derivatives market. Notwithstanding the exemption, OTC derivatives and exchange-traded futures market regulations continue to differ substantially. The exchange exemption represents one approach to rationalizing regulations between the two markets but also illustrates some of the challenges in doing so. Swaps and exchange-traded futures are similar in their characteristics and economic functions but differ in other ways, including the scope and focus of their regulation. Swaps and exchange-traded futures have market values that are determined by the value of an underlying asset, reference rate, or index. 
They also are used for hedging financial risk and investing with the intent of profiting from price changes by some of the same general types of market participants, such as financial institutions, commercial firms, and governmental entities. Given their similar economic functions, OTC derivatives and exchange-traded futures can be used as substitutes for one another, but they are not perfect substitutes because of differences in their contract terms, transaction costs, regulations, and other factors. They also can be used to complement each other. Some market participants—primarily banks and other financial firms acting as dealers—use exchange-traded futures to hedge the risk related to their OTC derivatives positions. As a former CFTC chairman noted, the exchange-traded futures market has grown closer to the swaps market as it has expanded to remain competitive. The exchanges are offering more flexible option contracts, whose terms can be customized to meet an end-user’s particular risk-management needs. Moreover, they are working on other proposals, such as collateral depositories, to address the needs of participants using swaps and other OTC derivatives. Notwithstanding their similar characteristics and economic functions, differences between swaps and exchange-traded futures may result in different risks that lead to differences in the types and/or levels of oversight needed for each market. Swaps and exchange-traded futures differ in ways that are reflected in CFTC’s swaps exemption. As discussed above, unlike exchange-traded futures, swaps are not traded on a multilateral execution facility, such as an exchange, or cleared through a multilateral clearing facility, such as a clearinghouse. Rather, swaps are entered into between two counterparties in consideration of each other’s creditworthiness. 
Although plain vanilla swaps have become more homogeneous in terms such as their underlying reference rates or indexes and maturities, each contract continues to be privately negotiated. Unlike exchange-traded futures, swaps and other OTC derivatives are not regulated under a single, market-oriented structure or subject to a contract approval process, because they are privately negotiated contracts. They are regulated only to the extent that the institutions using or dealing in them are regulated. As we noted in our May 1994 report, banks are major OTC derivatives dealers. They are overseen by federal bank regulators and subject to supervision and regulations—including minimum capital, reporting, and examination requirements. These regulations are designed to ensure the safety and soundness of banks but are not directly concerned with protecting those doing business with them. Other major dealers include affiliates of securities and insurance firms that are subject to limited or no federal oversight. Since our 1994 report, CFTC, federal bank regulators, and SEC have taken several steps to improve their oversight of the major OTC derivatives dealers, including affiliates of securities firms. Also, a group of derivatives dealers, in coordination with SEC and CFTC, has developed a voluntary oversight framework for the OTC derivatives activities of unregulated affiliates of securities and futures firms. We discuss these and other actions taken by federal regulators and derivatives market participants in the November 1996 update to our 1994 report on financial derivatives. Traditionally, exchange-traded futures have been regulated as a market under a comprehensive regulatory structure, which is designed to protect customers and the market—including its efficiency, fairness, and financial integrity. This regulatory structure covers not only certain market participants but also the products and markets on which they trade. 
Unless exempted, futures must be traded on designated exchanges and through regulated intermediaries, subject to minimum capital, reporting, examination, and customer protection requirements. The CEA and CFTC specify certain self-regulatory duties—including providing for the prevention of manipulation, making reports and records on market activities, and enforcing exchange rules—that an exchange must perform to become and remain a designated exchange. The CEA also requires CFTC to review and approve products traded on a designated exchange. In 1993, two futures exchanges separately requested that CFTC exempt from most of the CEA’s regulatory requirements certain exchange-traded futures that are traded solely by institutional and other sophisticated market participants. The exchanges indicated that they needed regulatory relief to compete fairly with the less regulated OTC market. In response to the exchange requests, CFTC provided the exchanges with regulatory relief under an exemption issued in November 1995. CFTC, however, did not provide the exchanges with the broad regulatory relief they requested. CFTC based its position, in part, on comments it received on the exchange requests from various government agencies, members of Congress, and the public, as well as on the 1992 act’s legislative history. In the latter, Congress cautioned CFTC to use its exemptive authority sparingly and not to prompt a wide-scale deregulation of markets falling under the act. The exchange exemption is to be implemented under a 3-year pilot program. It is intended to enable qualifying exchanges to list new contracts with greater ease and construct OTC-like trading procedures, permitting market participants to negotiate prices privately and execute trades off of the exchange floor. The exchange exemption limits access to the exempted futures market to specified participants, which are generally the same institutional and sophisticated participants that may use exempted swaps. 
In addition, the exemption is intended to streamline requirements for registering brokers and disclosing risks when opening new customer accounts. However, with the exception of these regulatory changes, all other CEA provisions and CFTC regulations would continue to apply to the exempted futures market. For example, the requirements related to recordkeeping and audit trails as well as transaction reporting would continue to apply. According to CFTC, the exchange exemption would enable the exchanges to compete more effectively with the OTC derivatives market, while maintaining basic customer protection, financial integrity, and other protections needed for trading in an exchange environment. Furthermore, CFTC noted that the pilot program would provide it with an opportunity to (1) test the operation of the exemption, (2) determine the effect of exempted transactions on the integrity of the market as a whole, and (3) determine whether continued trading under the exemption would be in the public interest. To date, CFTC has not received any proposals under the exchange exemption. In a joint statement released at the June 1996 Senate Agriculture hearing (discussed above), 10 futures exchanges noted that the exchange exemption does not provide a level playing field for exempted exchange-traded and OTC derivatives contracts. They noted that exempted exchange-traded contracts would continue to be subject to the bulk of CFTC regulations, even though such contracts, like exempted OTC derivatives, would not be traded by public customers. The exchanges also maintained that CFTC’s exchange exemption is not consistent with the 1992 act’s legislative history—noting that, among other things, Congress intended CFTC, in consideration of fair competition, to use its exemptive authority in a fair and even-handed manner to products and systems sponsored by exchanges and nonexchanges. 
As mentioned earlier, Senators Lugar, Harkin, and Leahy as well as Congressman Ewing recently introduced bills to amend the CEA. Each bill includes a provision that would largely exempt from regulation under the act certain exchange-traded futures that are traded solely by institutional and sophisticated market participants. In a joint statement released at the February 11, 1997, hearing on reforming the CEA, 10 futures exchanges noted that the Senate bill “moves exchanges a long way toward achieving a regulatory balance with the OTC markets.” They noted that the exempted market would rely on market discipline and self-regulation, with the exchanges having a business incentive to operate a fair, financially sound, and competitive market. At the same hearing, CFTC testified that, if enacted, the bill would likely cause a broad elimination of federal regulation of the exchange-traded futures market and create significant risks by doing so. CFTC’s exchange exemption represents one approach to rationalizing regulatory differences between the exchange-traded futures and swaps markets but illustrates some of the challenges in doing so. The exchange and swaps exemptions raised similar policy questions that CFTC approached from opposite viewpoints, in part because of the existence of a regulatory structure for one but not the other. For futures, the basic question was: “What is the appropriate regulation for futures traded on exchanges solely by institutional and other sophisticated market participants?” In this regard, CFTC’s approach to exempting exchange-traded futures focused on determining which CEA requirements could be eliminated without compromising the public interest, as defined in the CEA. Under this approach, the exchanges were tasked, in part, with demonstrating which existing regulations were unnecessary. 
In comparison, the basic question for swaps was: “Are swaps appropriately regulated under the CEA?” In this regard, CFTC’s approach to exempting swaps focused on determining whether CEA requirements needed to be imposed on the market. Another related challenge in rationalizing regulations between the two markets arose from the similar nature of the participants. As required under its exemptive authority, CFTC considered the nature of the market participants in exempting swaps. It limited the swaps exemption to participants it deemed sophisticated or financially able to bear the risks associated with these transactions. Likewise, it considered the exclusion of unsophisticated participants from the exempted exchange-traded futures market as the most important factor supporting its exchange exemption. However, CFTC noted that, unlike a dealer market, a centralized market composed solely of sophisticated market participants did not obviate the need to ensure market integrity, price dissemination, and adequate protections against fraud, manipulation, and other trading abuses. It further noted that CFTC regulations serve other vital functions, even where such markets include only sophisticated participants, in that the regulations substitute for individualized credit determinations and increase market access. The exchanges have disagreed with CFTC’s conclusions. They have stated that their safeguards—including clearinghouse guarantees and price transparency—provide greater protections than available in the OTC market but, at the same time, prevent them from obtaining regulatory relief comparable to that which CFTC provided to the OTC market. CFTC has used its exemptive authority to reduce or eliminate legal risk in the OTC derivatives market arising from the combination of the CEA’s judicially crafted futures definition and exchange-trading requirement. 
Through its efforts, CFTC has enhanced the legal enforceability of most OTC derivatives contracts and, in doing so, has enabled the OTC derivatives market to continue to grow and develop. Nonetheless, several legal and regulatory issues involving the CEA remain unresolved. These include the legal uncertainty facing equity swaps, the CEA’s lack of clear criteria for distinguishing unregulated forwards from regulated futures, the uncertainty surrounding the scope of the Treasury Amendment, and the extent to which CFTC should use its exemptive authority to provide greater regulatory relief to the futures exchanges. Ongoing congressional efforts to amend the CEA could provide specific solutions to these unresolved issues. Further, such efforts could provide a forum for addressing the broader policy question of what the appropriate regulation is for exchange-traded futures and OTC derivatives contracts, including their markets and market participants. The appropriate regulation for the exchange-traded and OTC derivatives markets should flow from the need to protect the public interest in these markets. The CEA identifies the public interest in the futures market as the need to protect the market’s price discovery and risk-shifting functions from market abuses, such as excessive speculation, manipulation, and fraud. However, articulating the public interest in this way may no longer provide a sufficient basis for regulating all aspects of the futures market, given market developments and regulatory changes. As discussed, the exchange-traded futures market is now dominated by financially based futures and institutional participants. Because of the greater liquidity of the underlying cash markets for financial products, the exchange-traded futures markets for these products may not serve the same price discovery function as exchange-traded futures based on agricultural and other physical commodities. 
Accordingly, they may not serve the price discovery function that Congress intended to protect when crafting the CEA. In addition, CFTC now has the authority to allow futures to be traded off-exchange and free from the comprehensive regulatory structure applicable to exchange-traded futures. Because of the way they would be traded and other factors, off-exchange futures may not raise the same risks or regulatory concerns that exchange-traded futures raise and for which regulation under the CEA was deemed necessary to protect the public interest. Nonetheless, off-exchange futures may raise other risks, such as systemic risk, or regulatory concerns that warrant federal regulation. To address the broader policy question of the appropriate regulation for the exchange-traded futures and OTC derivatives markets, more fundamental questions concerning the goals of federal regulatory policy need to be answered. These questions include: What is the current public interest in the exchange-traded futures and OTC derivatives markets that needs to be protected? What type of regulations are needed, if any, and what is the most efficient and effective way to implement and enforce any needed regulations? To what extent are the answers to these questions affected by the nature of the market participants; trading environment; and products, including their function, type of underlying commodity, and degree of standardization? These fundamental questions provide a framework for systematically determining the appropriate regulation for exchange-traded futures and OTC derivatives, including their markets and market participants. Moreover, answers to these questions would also provide a basis for considering an array of options for amending the CEA. 
These options include (1) expanding the act’s jurisdiction to cover specified swaps and other OTC derivatives but tailoring their regulation to the circumstances under which they trade and other appropriate factors; (2) excluding swaps and other specified OTC derivatives from the act’s jurisdiction and providing for their oversight, as appropriate, by other federal regulators; and (3) tailoring the level of regulation for exchange-traded futures to the nature of the market participants and/or other appropriate factors. Swaps and other OTC derivatives involve institutions and activities in which federal bank regulators and SEC have traditionally had a supervisory or oversight role, while futures trading and futures market regulation have fallen under the CFTC’s exclusive jurisdiction. As a result, any policy questions raised by the ongoing development of the OTC derivatives and exchange-traded futures markets cross traditional jurisdictional lines and involve not only CFTC but also federal bank regulators and SEC. The cooperative efforts of these agencies, working with the Department of the Treasury and the financial industry, will be required to address such questions. As discussed, the President’s Working Group on Financial Markets provides one forum through which to coordinate interagency activities and address policy questions that cross jurisdictional lines. As we concluded in our May 1994 OTC derivatives report, the U.S. financial regulatory structure has not kept pace with the dramatic and rapid changes in the domestic and global financial markets. We noted that one issue needing to be addressed is how the U.S. regulatory system should be restructured to better reflect the realities of today’s rapidly evolving global financial markets. 
Our conclusion was based partly on the finding that the development of new types of financial derivatives and their use by a variety of once separate industries, such as banking, futures, insurance, and securities, have made it more difficult to regulate them effectively under the current U.S. regulatory structure. The potential legal and regulatory issues raised by the evolving OTC derivatives and exchange-traded futures markets under the CEA further illustrate such difficulty and reinforce the need to examine the existing U.S. regulatory structure. Ultimately, maintaining a globally competitive U.S. derivatives market will require balancing the goal of allowing the U.S. financial services industry to innovate and grow with the goal of protecting customers and the market, including its efficiency, fairness, and financial integrity. We requested comments on a draft of this report from the heads, or their designees, of CFTC, the Department of the Treasury, the Federal Reserve Board, the Office of the Comptroller of the Currency, and SEC. We also requested comments from three futures exchanges (the Chicago Board of Trade, Chicago Mercantile Exchange, and New York Mercantile Exchange), the New York Stock Exchange, and four industry associations (the Futures Industry Association, International Swaps and Derivatives Association, Managed Futures Association, and National Futures Association). CFTC, the Department of the Treasury, the Federal Reserve Board, and SEC provided us with written comments under a joint response as members of the President’s Working Group on Financial Markets. We also obtained written comments from two futures exchanges (the Chicago Mercantile Exchange and Chicago Board of Trade) and the four industry associations. The written comments and our additional responses are contained in appendixes I through VII. 
We did not receive written comments from the Office of the Comptroller of the Currency, New York Mercantile Exchange, or New York Stock Exchange. In addition, officials from CFTC, the Department of the Treasury, the Federal Reserve Board, the Office of the Comptroller of the Currency, SEC, the International Swaps and Derivatives Association, and the Chicago Mercantile Exchange provided us with technical comments that were incorporated into the report as appropriate. The President’s Working Group on Financial Markets commented that it agreed with our conclusion that maintaining a globally competitive U.S. derivatives market requires properly balancing the need to allow the U.S. financial services industry to innovate and grow with the need to protect the financial integrity of our markets. The Working Group noted that it is effectively addressing intermarket financial coordination issues and that further discussion in that forum of issues we identify would be useful. The Futures Industry Association commented that the draft did not adequately address the question of whether or to what extent additional regulation of the OTC derivatives markets is warranted, and to the extent warranted, whether the CEA is the appropriate vehicle for such regulation. Similarly, the International Swaps and Derivatives Association stated that there has been no demonstration that participants would benefit from subjecting swaps to any form of regulation under the CEA. Our overall objective was to provide Congress with information on the legal and regulatory issues involving the CEA, not to determine the appropriate level of regulation for the OTC derivatives market or the specific vehicle for any such regulation. We identified regulatory gaps in this market in our May 1994 report on OTC derivatives and recently issued a report that discusses the actions taken by federal regulators and the industry since that time. 
Nonetheless, the issues that we discuss lead to the broader policy question of what the appropriate regulation is for the OTC derivatives and exchange-traded futures markets. In our conclusions, we provide a framework for addressing this policy question and, in turn, related questions, such as whether the CEA is the appropriate vehicle for regulating swaps and other OTC derivatives. In a related comment, the Futures Industry Association noted that our draft asserts that financial products serving a risk-shifting function should be subject to similar regulatory treatment, even though the CEA has recognized through its statutory exclusions that the regulation of such products may appropriately differ depending on their nature and underlying market. Correspondingly, the International Swaps and Derivatives Association commented that risk-shifting activities related to foreign exchange and other transactions were specifically excluded from the CEA pursuant to the Treasury Amendment, demonstrating that Congress did not intend for the CEA to govern all financial transactions involving the transfer of risk. We do not assert that risk-shifting contracts should be subject to similar regulation. Rather, we note that the CEA covers futures contracts, which have been defined in a way that reflects their risk-shifting function. As a result, OTC derivatives serving a similar risk-shifting function as futures may fall within the definition of a futures contract and be subject to the CEA. We agree that the CEA’s statutory exclusions demonstrate that Congress did not intend for the CEA to govern all risk-shifting contracts. However, these exclusions are not broad enough to provide similar treatment for all OTC derivatives, many of which, including swaps, did not exist when the exclusions were created. 
CFTC has exempted most swaps and other OTC derivatives from virtually all the CEA’s requirements to provide them with greater legal certainty, but a question remains about whether swaps are futures and subject to the CEA. As we discuss, the possibility that swaps are futures continues to be a source of legal risk for equity swaps. In another related comment, the Chicago Board of Trade, Futures Industry Association, and Managed Futures Association noted that the CEA provides CFTC with the authority and flexibility to address issues raised by the evolving OTC derivatives and futures markets. We agree that the CEA, with its exemptive authority provision, does not prevent CFTC from addressing regulatory concerns raised by the OTC derivatives market, as needed. In our report, we state that CFTC could use its exemptive authority to address regulatory concerns raised by a swaps market development that is inconsistent with the conditions of the existing swaps exemption. However, we note that this approach could suggest that swaps are futures and introduce jurisdictional questions. We also note that the President’s Working Group on Financial Markets provides one forum through which to address such questions. The Chicago Board of Trade commented that, contrary to the impression created in the draft, jurisdictional ambiguities in the act (the definition of a futures contract and the Treasury Amendment) are not solely responsible for the disparate regulatory treatment of exchange and OTC markets. Instead, it cites the manner in which CFTC has chosen to use its authority as leading to this disparity. According to the exchange, CFTC did not use its exemptive authority in a way that is consistent with the 1992 act’s legislative history—that is, it did not use its authority in a fair and even-handed manner to products and systems sponsored by exchanges and nonexchanges. 
In a related comment, the Futures Industry Association noted that it agrees with the draft report’s implicit assumption that CFTC’s exchange exemption could be broadened. However, it stated that the exchanges must provide CFTC with greater specificity as to the nature of the products, trading mechanisms, and clearing structure that would be subject to exemptive relief. We agree with the Chicago Board of Trade that the act’s jurisdictional ambiguities are not solely responsible for the regulatory differences between the OTC derivatives and futures markets. Our report states that CFTC provided less regulatory relief under its exchange exemption than it did under its OTC derivatives exemptions. We also agree with the Futures Industry Association that greater specificity could aid CFTC in the use of its exemptive authority to provide additional regulatory relief to the exchanges. However, in granting the exchange exemption, CFTC followed the congressional admonition to use its exemptive authority sparingly and not to cause a wide-scale deregulation of markets falling under the act. Given the different ways of interpreting the 1992 act’s legislative history, we note in our conclusions that one of the unresolved issues involving the CEA is the extent to which CFTC should use its exemptive authority to provide greater regulatory relief to the futures exchanges. We are sending copies of this report to the Chairperson of CFTC, the Comptroller of the Currency, the Chairman of the Federal Reserve Board, the Chairman of SEC, the Secretary of the Treasury, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-8678 or Cecile O. Trop, Assistant Director, at (312) 220-7600 if you or your staff have any questions. Major contributors to this report are listed in appendix VIII. The following are GAO’s comments on the Chicago Mercantile Exchange’s August 30, 1996, letter. 1. 
The Chicago Mercantile Exchange commented that our use of the way that derivatives are traded (off-exchange versus on-exchange) as the basis for distinguishing OTC derivatives from futures for regulatory purposes is not an appropriate dichotomy. Rather, the exchange commented that the nature of the market participant (professional versus retail) is a better basis to use in determining the appropriate level of regulation needed for derivatives markets. We revised our report, and the referenced text no longer appears. In our conclusions, we provide a framework for determining the appropriate regulation for the OTC derivatives and exchange-traded futures markets, focusing on the current public interest in these markets that needs to be protected. As part of that framework, we note that the nature of the market participant, trading environment, and other factors should be considered in determining the regulations that are needed to protect the public interest. The following are GAO’s comments on the Futures Industry Association’s September 18, 1996, letter. 1. The association commented that the draft focused on the similar economic function served by OTC derivatives and exchange-traded futures but did not adequately address the policy implications arising from the important distinctions that exist between the two types of products. We focus on the similar risk-shifting function served by OTC derivatives and exchange-traded futures because the CEA covers futures, which have been defined in a way that reflects their risk-shifting function. As we discuss in our conclusions, Congress and federal regulators will need to consider the similarities and differences between the OTC derivatives and exchange-traded futures markets in addressing the broader policy question concerning the appropriate regulation for these markets. 
We agree that important distinctions exist between OTC derivatives and exchange-traded futures that have policy implications, and we amplified our discussion of these distinctions. 2. The association commented that our draft cited the CEA as embracing the principle of functional regulation. We eliminated the term functional regulation because of the confusion over its meaning, but our message has not changed. That is, the CEA covers futures, which CFTC and the courts have defined in a way that reflects their risk-shifting function. As a result, contracts serving a similar risk-shifting function as futures may fall within the definition of a futures contract and be subject to the CEA. 3. The association commented that our draft report listed the necessary elements of a futures contract without mentioning that such elements are not necessarily sufficient to define a futures contract. We modified the report accordingly. 4. The association commented that section 3 of the CEA specifically identifies transactions in contracts for future delivery “commonly conducted on a board of trade” as the type of activity requiring regulation under the CEA. It further noted that this statement reflects a sensitivity to the regulatory significance of distinctions between exchange trading and private negotiation of contracts that is equally relevant today. We agree that the exchange-trading requirement is central to the CEA’s regulatory structure and recognize that differences exist between OTC derivatives and exchange-traded futures that may warrant differences in their regulation. In that regard, our conclusions provide a framework for determining the appropriate regulation for the OTC derivatives and exchange-traded futures markets, focusing on the public interest in these markets that needs to be protected. 
As part of that framework, we note that the nature of the market participant, trading environment, and other factors should be considered in determining the regulations needed to protect the public interest. 5. The association noted that the draft report overstated the current level of convergence between the OTC derivatives and futures markets. We revised the report to amplify our discussion of the similarities and differences between the OTC derivatives and futures markets. 6. The association disagreed with the draft report's observation that participation of dealers in the OTC derivatives markets implies that such markets are centralized. We did not intend to imply that the swaps market is centralized and have revised the draft accordingly. We recognize that swaps continue to be privately negotiated between counterparties and are neither traded on a centralized facility nor cleared through a clearinghouse. We note that swaps have followed an evolutionary path similar to that of exchange-traded futures. However, we recognize that the extent to which the swaps market, or some part thereof, will continue to evolve in the same way as the exchange-traded futures market is unknown. 7. The association commented that, with respect to the distinction between futures and forwards, our draft report sometimes fails to distinguish between the nature of the obligation to make delivery and what constitutes delivery. Our discussion of the disagreement between CFTC and the federal district court on where to draw the line regarding the delivery requirement for Brent Oil contracts was meant to illustrate this difference. We also note in our conclusions that one of the unresolved issues is the CEA's lack of criteria for distinguishing unregulated forwards from regulated futures. 8. The association commented that the losses associated with hedge-to-arrive contracts do not appear to arise from the character of the delivery obligations.
We note that unusual factors, such as high grain prices and poor weather conditions, have resulted in financial problems for parties to these contracts. However, we also note that the legal risk facing some hedge-to-arrive contracts due to the possibility that they could be illegal futures or trade options has complicated matters. This legal risk may persist, even in the absence of the factors contributing to financial risk. 9. The association commented that the Treasury Amendment’s scope was broader than the restricted view presented in the draft report. Our discussion of the Treasury Amendment was not intended to provide an interpretation of the amendment’s scope but rather to describe the legal confusion created by how others have interpreted its scope. We modified the report accordingly. The following are GAO’s comments on the International Swaps and Derivatives Association’s September 10, 1996, letter. 1. The association commented that the draft painted a misleading picture of the similarities between exchange-traded futures and swaps by focusing on their risk-shifting function and failed to properly address the important differences between them that justify their disparate regulatory treatment. We focus on the similar risk-shifting function served by OTC derivatives and exchange-traded futures because the CEA covers futures, which have been defined in a way that reflects their risk-shifting function. As we discuss in our conclusions, Congress and federal regulators will need to consider the similarities and differences between the OTC derivatives and exchange-traded futures markets in addressing the broader policy question concerning the appropriate regulation for these markets. We agree that important distinctions exist between OTC derivatives and exchange-traded futures that have policy implications, and we amplified our discussion of these distinctions. 2. 
The association commented that, although OTC derivatives and exchange-traded futures serve a similar risk-shifting function, many other financial transactions, including those involving securities, loans, guarantees, and various types of insurance contracts, can serve such a function. It further noted that attempting to implement a regulatory framework that would subject every form of financial or commercial activity that involves the transfer of risk to regulation under the CEA would clearly be inappropriate. We agree that it would be inappropriate to subject all instruments that can serve a risk-shifting function to the CEA. However, as CFTC and others have recognized, swaps and other OTC derivatives resemble futures not only in terms of their economic function but also in terms of their design. Given the market's continued growth and development, questions remain about the extent to which additional regulation of the OTC derivatives market is needed. In our conclusions, we provide a framework for determining the appropriate regulation for the OTC derivatives and exchange-traded futures markets, focusing on the public interest in these markets that needs to be protected. 3. The association commented that the draft report's assertion that swaps are a centralized market is not true. We did not intend to imply that the swaps market is currently centralized and have revised the draft accordingly. We recognize that swaps continue to be privately negotiated between counterparties and are neither traded on a centralized facility nor cleared through a clearinghouse. We note that swaps have followed an evolutionary path similar to that of exchange-traded futures. However, we recognize that the extent to which the swaps market, or some part thereof, will continue to evolve in the same way as the exchange-traded futures market is unknown. 4.
The association commented that the draft report portrayed swaps as a centralized market by incorrectly asserting that swaps participants are actively discussing the possibility of establishing a swaps clearinghouse. We discuss the potential for a swaps clearinghouse to illustrate an example of a development that could trigger a greater federal interest in the market. It was not intended to suggest that the swaps market has evolved into a centralized market, and we revised the draft accordingly. 5. The association noted that more corporations use swaps than exchange-traded futures to meet their risk-management needs, disproving the draft report’s assertion that swaps and exchange-traded futures share the same general market participants. Our point was that swaps and exchange-traded futures are used by many of the same general types of market participants, not that swaps and exchange-traded futures are used by all of the same market participants. We revised the report to clarify this point. We still note that some of the same firms, namely banks and other financial firms acting as dealers, use both swaps and exchange-traded futures because of the complementary relationship of the contracts. 6. The association commented that few incidents exist where swaps participants believed that they were treated unfairly by their counterparties, which demonstrated both the ability of swaps participants to protect their rights and the fact that such incidents represent bilateral disputes with no implications for third parties. It added that the draft report offers no evidence that additional regulatory protection is needed or desired by swaps participants. We are currently reviewing OTC derivatives sales practices and will report our findings separately. 7. The association commented that the swaps activities of institutions that are thought to be subject to systemic risk and/or are supported by public insurance are closely supervised by various regulatory agencies. 
As we discussed in our May 1994 report on OTC derivatives, regulatory gaps existed in the OTC derivatives market that could heighten the potential for systemic risk. We have issued a report that updates our 1994 report and discusses actions taken by federal regulators and the industry since that time. 8. The association disagreed with the draft report’s assertion that the CEA, as amended in 1974, embraced the principle of functional regulation. While we eliminated the term functional regulation because of the confusion over its meaning, our message has not changed. That is, the CEA covers futures, which CFTC and the courts have defined in a way that reflects their risk-shifting function. As a result, contracts serving a similar risk-shifting function as futures may fall within the definition of a futures contract and be subject to the CEA. 9. The association commented that our draft report asserts “uncategorically” and without direct evidence that the legislative history surrounding the Treasury Amendment indicates that it was intended to apply solely to the interbank market. Our discussion of the Treasury Amendment was not intended to provide an interpretation of the amendment’s scope but rather to describe the legal confusion created by how others have interpreted its scope. We revised the report accordingly. 10. The association noted that our draft report listed the necessary elements of a futures contract without mentioning that such elements are not necessarily sufficient to define a futures contract. We modified the report accordingly. 11. The association commented that our definition of offset is broader than has been defined in regulatory and judicial contexts. We amended the offset definition to make it consistent with CFTC’s definition and discussed the way that OTC derivatives are terminated in a later section of the report. The following are GAO’s comments on the Managed Futures Association’s October 15, 1996, letter. 1. 
The association commented that it does not share some of our findings regarding the limitation of the Treasury Amendment’s carve-out of the interbank market, definition of a futures contract, and failure to recognize the continued noncentralized nature of the swaps market. We revised the report to clarify that we were not providing an interpretation of the Treasury Amendment’s scope, but rather were describing the legal confusion created by how others have interpreted its scope. We modified the report to clarify that no definitive list exists of all the elements of a futures contract. We also amplified our discussion of the differences between the OTC derivatives and exchange-traded futures markets. The following are GAO’s comments on the National Futures Association’s September 10, 1996, letter. 1. The association commented that the draft report minimized the inherent tension between the equally important goals of limiting legal uncertainty while maximizing regulatory flexibility. We agree that tradeoffs exist in addressing the legal and regulatory issues raised by the ongoing development of the OTC derivatives market under the CEA. Such tradeoffs raise difficult and often competing policy concerns that can lead to more fundamental questions concerning the goals of federal regulatory policy. In our conclusions, we provide a framework for determining the appropriate regulation for the OTC derivatives and exchange-traded futures markets, focusing on the public interest in the markets that needs to be protected. 2. The association noted that the swaps exemption must be periodically revisited to make sure that the conditions set by CFTC for the exemption continue to make sense. It also noted that CFTC must reexamine the exemption granted to exchange-traded products.
We agree that one alternative is to have CFTC revisit the exemptions, as needed, to address regulatory concerns raised by market changes and to ensure regulations do not impede market innovation and competition. However, we note that using such an approach for exempted swaps could suggest swaps are futures and introduce jurisdictional questions. Moreover, in our conclusions, we note that a remaining unresolved issue is the extent to which CFTC should use its exemptive authority to provide greater regulatory relief to the futures exchanges. 3. The association commented that, with respect to the Treasury Amendment, it is inconceivable that Congress intended for futures contracts in foreign currencies to be mass marketed to the retail public without any of the protections afforded under the CEA. As we discuss, confusion exists as to the scope of the Treasury Amendment, and we note in our conclusions that such confusion remains an unresolved issue under the CEA. Desiree W. Whipple, Reports Analyst
GAO reviewed the legal and regulatory issues surrounding the Commodity Exchange Act (CEA), focusing on: (1) the extent to which the Commodity Futures Trading Commission (CFTC) has reduced the legal risk surrounding the enforceability of over-the-counter (OTC) derivatives under the CEA; and (2) issues related to the appropriate regulation for exchange-traded futures and OTC derivatives contracts, including their markets and market participants. GAO noted that: (1) under the authority provided by the Futures Trading Practices Act of 1992, CFTC exempted most swaps and other OTC derivatives contracts from the CEA's exchange-trading requirement and thus reduced or eliminated the legal risk that they could be unenforceable; (2) in granting the exemptions, CFTC was not required to, and did not, determine that OTC derivatives were futures; (3) as a result, a question has remained about whether OTC derivatives are futures and can be regulated under the act; (4) the possibility that swaps are futures continues to be a source of legal risk for so-called equity swaps that are ineligible for exemption from the act's requirements; (5) legal risk also remains for certain agricultural forwards that are becoming increasingly difficult to distinguish from futures and that may not be eligible for the swaps exemption; (6) although CFTC reduced or eliminated the legal risk of being unenforceable for most swaps and other OTC derivatives, a broader policy question remains about the appropriate regulation for OTC derivatives and exchange-traded futures, including their markets and market participants; (7) the first issue concerns regulation for the OTC foreign-currency market under CEA; (8) the act excludes from its regulation certain OTC foreign-currency transactions, but the scope of the exclusion, called the Treasury Amendment, has been the subject of disagreement among federal regulators and the courts; (9) a recent U.S.
Supreme Court decision resolved that the exclusion covers all transactions in foreign currency, including foreign-currency options and futures; (10) as a result, the extent to which the Treasury Amendment excludes transactions involving unsophisticated market participants may still be subject to debate; (11) the second issue concerns the potential for the swaps market to evolve beyond its exemption and raise additional regulatory concerns; (12) CFTC exempted swaps from virtually all CEA requirements, but imposed conditions on the exemption that restricted their design and trading procedures; (13) the swaps market might develop in ways that are inconsistent with these conditions; (14) the third issue concerns the rationale for the regulatory differences between the OTC derivatives and exchange-traded futures markets; and (15) CFTC recently granted the exchanges an exemption to enable them to better compete against the less regulated OTC derivatives market; however, under the exemption, regulation of the two markets will continue to differ substantially.
In 1988, NASD established a Public Disclosure Program to respond to written inquiries about brokers’ disciplinary histories. Two years later, in October 1990, Congress amended the Securities Exchange Act of 1934, Section 15A(i), to require that NASD establish and maintain a toll-free telephone number for the public to inquire about the disciplinary backgrounds of NASD-member brokers and their associated persons. The act also requires that NASD promptly respond to such inquiries in writing. In October 1991, NASD established its hotline, which is operated by NASD Regulation’s Public Disclosure Program. NASD initially provided hotline callers with information on final disciplinary actions of self-regulatory organizations (SRO) and federal and state securities regulators, as well as criminal convictions. In 1993, NASD expanded the types of information provided, partly in response to a recommendation in our 1993 report on penny stock regulation. The NASD Regulation Public Disclosure Program now is to provide callers with information on pending and final disciplinary actions taken by SROs or federal and state securities regulators that relate to securities or commodities transactions, including censures and fines, bars, revocations, expulsions, suspensions, orders of permanent injunction, orders of preliminary injunction, orders of prohibition, some special stipulation orders, cease and desist orders, and denial of registration orders; pending NASD Regulation and other SRO complaints and dismissed NASD securities arbitration decisions involving public customers and their brokers and Commodity Futures Trading Commission reparation orders; securities-related civil judgments; and criminal convictions and indictments. The information disclosed by the program is derived from the Central Registration Depository (CRD). 
CRD is a database maintained by NASD Regulation that contains employment and disciplinary histories of individual brokers as well as disciplinary actions taken against member broker-dealer firms. NASD and state securities regulators established CRD as a centralized licensing and registration system. Brokers are required to report to CRD formal disciplinary actions taken against them by the Securities and Exchange Commission (SEC), state securities regulators, SROs, or courts, including foreign entities, for violations related to the securities business, as well as certain customer complaint and arbitration information. In addition to providing information on formal disciplinary actions, brokers are required to provide CRD with written notice of employment terminations. All required CRD information is to be reported within 30 days of the action’s occurrence. Federal and state securities regulators and SROs also are to report disciplinary information to CRD and can use CRD information to determine whether a broker has violated securities laws or SRO rules. State securities regulators also have programs through which CRD information can be disclosed to the public upon request. The public can obtain information either by submitting a written request on a NASD Information Request Form (NIRF) or by calling the toll-free hotline at 1-800-289-9999. The bulk of requests, over 90 percent as of November 1995, have been made through the hotline. NASD does not charge a fee when individuals request information to assist them in their personal investments. Business requests for information, such as those from attorneys or banks, must include a processing fee of $30. The hotline currently operates from 8:00 a.m. to 6:00 p.m. (eastern time). NASD Regulation officials said that they are considering extending the hotline’s hours to 8:00 p.m. (eastern time) to better accommodate west coast callers.
As of January 1996, one and one-half full-time equivalent staff are dedicated to answering hotline calls. However, if call volume necessitates, 12 operators who normally answer calls to NASD’s general number are also available to answer hotline calls. In addition to the staff who answer calls, NASD Regulation’s Public Disclosure Program also employs specialist staff to research disciplinary files and determine whether the information is disclosable or nondisclosable. The specialists are to respond to written requests for information, which the public makes by using a NIRF. They also are to prepare written summaries of the disclosable information that is included in a computerized system called the NIRF database. As a result, the NIRF database contains disciplinary histories from CRD records that the specialists have reviewed and determined to be disclosable. As of January 1996, NASD Regulation officials said that they had two full-time specialists. When a call is made to the hotline, NASD Regulation staff are to ask the caller for information to identify the subject of the inquiry, such as name, address, or registration number. If the staff cannot identify the subject, they are to tell the caller and terminate the call. When the staff identify the subject, an automated search of the NIRF database determines if disclosable information exists. The staff are to send any disclosable information to the caller upon request. When the subject is identified in the NIRF database, but no disclosable information exists, the staff are to tell the caller and terminate the call. In addition, if the caller requests, staff are to send a letter stating that no disclosable information exists. If the subject is not identified in the NIRF database, an automated search of CRD determines if a record exists on the subject. When a record exists, the staff are to tell the caller that the file has to be reviewed to determine if disclosable information exists.
The specialist staff are to review the file on the subject broker to determine whether the information in the file is disclosable, create a NIRF database file on the subject, and send a copy of any disclosable information to the caller. When disciplinary history information is sent to a caller about individual brokers who are employed with NASD member firms, NASD Regulation also is to send the brokers a copy of this information, without the requesters’ names. To obtain information on the accessibility of the NASD Regulation hotline, we interviewed NASD Regulation officials; reviewed NASD Regulation Public Disclosure Program policies and procedures, and related documents; reviewed the results of calls to the hotline requesting disciplinary information; and conducted surveys of hotline callers and state securities regulators. To obtain information on users’ perceptions of the hotline’s accessibility and usefulness, we surveyed a random sample of nearly 500 of the more than 7,100 callers to the hotline during December 1994 and January 1995 to whom NASD Regulation sent disciplinary information. From this sample, we randomly selected a subsample of 100 callers for further review to determine whether the information NASD Regulation provided met its disclosure policies. We also surveyed securities regulators of all 50 states, the District of Columbia, and Puerto Rico to determine what information those regulators disclosed to the public and how they informed the public of the existence of their disclosure programs. For detailed technical information on our surveys, see appendix I. The questionnaires used and the results of our surveys of NASD Regulation hotline callers and state securities regulators are shown, respectively, in appendixes II and III. We also discussed with NASD Regulation officials the status of its CRD redesign effort. We did our work in accordance with generally accepted government auditing standards between November 1994 and April 1996. 
We performed our work in New York, NY; the Washington, D.C., metropolitan area; and at NASD Regulation’s Public Disclosure Program in Rockville, MD. We obtained written comments on a draft of this report from NASD Regulation and oral comments from SEC, which are discussed and evaluated at the end of this report. NASD Regulation’s written comments appear in appendix IV. Many investors have called the NASD Regulation hotline since its inception in October 1991. From year to year, the number of calls that NASD Regulation hotline staff handle has increased. Callers have been informed about the hotline by newspaper and magazine articles, brokers, securities regulators, friends, or business associates. However, these indirect methods of publicizing the hotline may not be successful in reaching large numbers of investors and, as a result, many investors may not know the hotline exists. More direct methods, such as including the hotline number on account documents, could help ensure that more investors are informed of the hotline. Most of our survey respondents found the NASD Regulation hotline accessible—about 84 percent said they reached the hotline on the first call. Also, most of these callers, 71 percent, were not placed on “hold” after reaching the hotline. Of the callers that were placed on “hold,” 64 percent said they spoke to a representative within 3 minutes. Most of the callers that were placed on hold, 73 percent, did not consider the wait too lengthy or cause for hanging up. Few respondents, 2 percent, were disconnected after reaching the hotline. According to NASD Regulation statistics, the number of calls to the hotline has increased since the hotline began operations in October 1991. Calls received by the hotline and those handled by NASD Regulation staff have more than doubled. The statistics show that in 1992, the first full year of its operation, the hotline received almost 40,000 calls, of which NASD Regulation hotline staff handled about 35,000.
In 1995, the most recent full year of operation, the hotline received about 103,000 calls, of which NASD Regulation handled almost 100,000. Figure 1 shows the number of NASD Regulation hotline calls received and handled from January 1992 through December 1995. Information about the NASD Regulation hotline is available to investors through several indirect sources. According to NASD Regulation officials, the hotline is publicized in two NASD brochures on investor protection, in newspaper and trade press articles, and through public speaking engagements of NASD officials. According to these officials, calls to the hotline increase after it is publicized. For example, after a CNN program publicizing the July 1993 expansion of the public disclosure program, call volume increased to more than 4 times the daily average, reaching a peak volume of about 1,200 calls a day. The officials said that NASD Regulation plans to use the Internet to publicize its toll-free number on an NASD home page and allow investors to submit requests for information on brokers and firms on-line before the end of 1996. Our survey of hotline callers showed that most callers to the hotline, about 80 percent, first became aware of the hotline either from newspaper and magazine articles; brokers; SEC, NASD, or state securities regulators; or friends, relatives, or business associates. Similarly, state securities regulators that we surveyed said that they publicize the availability of disciplinary information through public speaking engagements, agency brochures, press releases, and public service announcements on radio, television, and in the print media. The number of calls to the hotline indicates that efforts to publicize it have been successful in reaching many investors. According to NASD Regulation statistics, about 307,000 callers, including repeat inquiries, called the hotline from October 1991 through December 1995.
However, these callers constituted less than 1 percent of the estimated 41 million U.S. investors who directly owned shares in a publicly traded company or a mutual fund as of 1992. Not all investors who know about the hotline will necessarily call it, but the small number of callers in relation to the number of investors indicates that numerous investors still may not be aware of the hotline’s existence. The hotline provides information that could help investors avoid dealing with brokers that have disciplinary histories unacceptable to the investors. Therefore, all investors, particularly those opening new brokerage accounts, could use the information. SEC recognized this in its 1994 report on the hiring, retention, and supervisory practices of large securities firms. It recommended that SROs adopt rules requiring member firms to disclose to investors opening new accounts the availability of disciplinary information through the NASD Regulation hotline. One approach to ensure that larger numbers of investors are informed of the hotline might be similar to that taken under SEC penny stock rules. These rules require that, before transactions are completed, brokers must provide investors with a risk disclosure document that includes the NASD Regulation hotline number. Although a separate disclosure document may not be necessary for routine securities transactions, more investors could learn about the hotline if the hotline number were included on account opening documents or account statements that are sent to investors. Another way to make disciplinary information more accessible would be to provide it directly to the public through some electronic communications media such as the Internet, as has been suggested by the head of NASD Regulation. Our survey of NASD Regulation hotline callers showed that they were mostly very satisfied with the broker disciplinary information they received from NASD Regulation.
However, they also responded that additional information, which NASD Regulation currently does not disclose, would be useful in assisting them to decide whether they wanted to do business with a particular broker. This additional information is already available to investors who contact most state securities regulators. NASD Regulation also does not inform hotline callers of the types of information that are not disclosed, unless the callers ask. As a result, callers may think they have all the relevant information on their brokers’ history when they do not. Our survey also showed that the NASD Regulation hotline has provided individual investors with information that they used to make investment-related decisions such as selecting a broker. Our sample of hotline callers to whom NASD Regulation sent information comprised mostly individual investors who called on their own behalf—about 64 percent of the total respondents. Other survey respondents included family members or friends calling on behalf of individual investors, about 6 percent of the total; businesses, about 19 percent; and other callers—primarily prospective employees calling about a broker-dealer’s background—about 11 percent. Figure 2 shows the types of callers who used the hotline in our sample. Our survey showed that the primary reason respondents called the hotline was to determine whether a broker had a history of improper or illegal behavior. Hotline callers said that the information they received was a major factor affecting their decisions on authorizing their broker to make a securities transaction, opening a new brokerage account, deciding not to do business with a particular broker, or changing their broker.
Most hotline callers that we surveyed said they were very satisfied with the services received, including the time it took to reach hotline staff (about 67 percent), the ability of the staff to locate the subject broker (72 percent), the courtesy and professionalism of the staff (about 73 percent), the length of time it took to receive NASD Regulation’s written response (about 55 percent), and the hours the hotline operated (about 62 percent). Only about 5 percent of the callers surveyed found our questions about the ability of the hotline staff to assist non-English speaking and hearing impaired callers applicable. Most of these were satisfied with the staff’s ability to assist both types of callers. A few callers, about 1 percent, hung up because they thought that the staff were not helpful or were discourteous. Just over half of hotline callers, about 54 percent, called only once during a recent year, while almost half called 2 times or more during the year to obtain disciplinary information. Most hotline callers responded that they rely primarily on the NASD Regulation hotline for disciplinary information on their broker. About 81 percent of callers said they did not obtain disciplinary information from a state securities regulator. The respondents to our survey said that additional information available in CRD, but not disclosed by NASD Regulation, could also be useful to help them make decisions about whether to do business with a particular broker. 
The types of nondisclosable information that at least 70 percent of respondents said they thought would be either very or somewhat useful included whether a broker was granted a license or registration with limitations, the subject of a settled civil court case, the subject of an SRO review to determine whether to continue or stop membership rights, the subject of a court decision involving a bankruptcy or lien, the subject of a pending arbitration case with a securities regulator, the subject of a settled arbitration case with a securities regulator, the subject of a settled customer complaint filed with a securities regulator, the subject of a pending customer complaint filed with a brokerage firm, the subject of a settled customer complaint filed with a brokerage firm, and the subject of a disciplinary action or termination by his or her employer. Fewer respondents thought that information on dismissed customer complaints and withdrawn arbitration cases would be very or somewhat useful—64 and 66 percent, respectively. As part of our review of the CRD and NIRF database files for 100 brokers that our survey respondents inquired about, we analyzed the extent and types of nondisclosable information recorded in CRD. We found nondisclosable information in 46 files. This information primarily involved pending arbitration cases, customer complaints, settled or withdrawn arbitration cases, or NASD Regulation fines of $1,000 or less. This is the same type of information that our survey respondents indicated would be useful. Unlike individual hotline callers, NASD member broker-dealers have access to all of this information for use in screening potential employees. Further, our survey of state securities regulators showed that, when requested, almost all reported they already disclose the information that NASD Regulation does not disclose. 
These regulators are electronically linked to CRD, and thus get the information they disclose from the same database that NASD Regulation restricts. Table 2 shows the number of states that reported they disclosed information that NASD Regulation currently does not disclose. Most of the state securities regulators said NASD Regulation should provide investors with the information that it currently does not disclose. For example, 49 thought that NASD Regulation should disclose whether a broker was the subject of a settled arbitration case, and 40 thought that NASD Regulation should disclose pending customer complaints. The state regulators said that they disclose the information because of their freedom of information laws and policies about investor protection and education. NASD Regulation officials said that NASD Regulation does not disclose all information, particularly that involving customer complaints, because such complaints have not been fully investigated and may be unfounded. In 1994, we recommended that SEC and NASD develop procedures to balance regulatory surveillance and public disclosure interests pertaining to disclosure of customer complaint information to regulators and investors. At that time, those organizations commented that release of unsubstantiated customer complaint information would raise due process and privacy concerns. NASD Regulation officials added later that release of the complaint information could damage a broker’s reputation and result in lawsuits. NASD can be subject to lawsuits from hotline activities although it has limited protection from liability if a “good faith” error is made in a disclosure. NASD Regulation officials pointed out that the potential for lawsuits has not affected NASD Regulation’s policy decisions about whether to disclose information. 
Officials of the North American Securities Administrators Association (NASAA), a lobbying group representing state securities regulators, told us that no state has ever been sued for disclosing disciplinary information. They said that their greater concern is being the subject of legal actions based on complaints by the public for not disclosing the disciplinary information. In an October 1995 public address, the Chairman, SEC, suggested that consideration be given to making unadjudicated customer complaints public for a limited time, for example, 2 years, after which complaints that were either not pursued by regulators or deemed without merit would be removed from the reporting system. After our fieldwork was completed, NASAA, the states, NASD Regulation, and securities industry representatives agreed to changes in the reporting of disciplinary information to CRD, which could lead to disclosure of additional disciplinary information by the NASD Regulation hotline. To lessen brokers’ concerns about disclosing information that may involve unfounded allegations of wrongdoing, the changes would place limits on brokers’ reporting of customer complaints and arbitration and civil case settlements. Brokers would be required to report to CRD information on (1) customer complaints less than 2 years old that allege damages of $5,000 or more and (2) arbitrations and civil suits settled for $10,000 or more. Before being implemented, the changes have to be approved by SEC. Actual public disclosure of this additional information by the NASD Regulation hotline, which was approved by the NASD Board of Governors in March 1996, would also require SEC approval. NASD Regulation policy limits the information disclosed to hotline callers and includes no provision to routinely inform callers about any nondisclosable information. Hotline representatives’ instructions for responding to callers discuss only disclosable information.
NASD Regulation’s written responses to callers are to include a list of the types of disclosable information but not the types of nondisclosable information. Our survey showed that hotline representatives did not inform about 73 percent of callers of the types of nondisclosable information. About 23 percent said the hotline representatives provided this information, and about 4 percent said they did not remember. NASD Regulation officials said that the 23 percent who were told about the types of nondisclosable information probably had asked specifically about it. Thus, some callers were informed about the types of nondisclosable information while others were not. This inconsistency may cause some callers to make investment-related decisions based on the incorrect belief that they have been given all relevant information. More complete disclosure of relevant information could help ensure that consistent information is provided to all hotline callers. The NASD Regulation hotline provides information to callers without quality assurance checks, such as independent review and testing of the information disclosed. In most cases that we reviewed, the information provided met NASD Regulation’s disclosure policy. However, in 13 of the 100 cases, we found that either disclosable information was not disclosed or nondisclosable information was disclosed. Having all relevant information can help investors make more informed decisions about their broker. Quality assurance checks such as independent review and testing of the information could help ensure that disclosures meet NASD Regulation policies. NASD Regulation disclosed information in accordance with its current disclosure policies in 87 of the 100 cases we reviewed. However, 13 cases contained a total of 47 discrepancies when compared with information in CRD. In 42 of the 47 discrepancies, information considered disclosable was not sent to the caller.
In two discrepancies, information considered nondisclosable under current NASD Regulation disclosure policy was sent to the caller. The other three discrepancies involved data entry errors—two that had no effect on information disclosed to the caller, and one that provided the caller with the same disclosable information twice under two different dates. We found 31 of the 47 discrepancies in one case involving a request for information about a large national securities firm. Twenty-six of the 31 discrepancies were 1988 and 1989 arbitration cases that were listed in CRD but were not entered into the NIRF database. Four discrepancies were disclosable disciplinary actions that were not entered into the NIRF database, and one was disclosable information that had been entered into the NIRF database twice. The remaining 16 discrepancies occurred in 12 cases involving information requests about individuals or smaller securities firms. Twelve of these 16 discrepancies occurred in 8 cases in which disclosable disciplinary actions were not disclosed to the callers. Two discrepancies, one in each of two cases, occurred when nondisclosable information was disclosed to callers. The final two cases involved data entry errors. Apart from a 1994 internal review of the Public Disclosure Program, NASD Regulation officials told us that they do not perform routine independent review and testing of the information disclosed to callers. We found that 17 discrepancies resulted from either judgment errors of NASD Regulation staff in determining whether information was disclosable or errors in entering data into CRD and the NIRF database. NASD Regulation staff corrected these errors during our review. For the other 30 discrepancies, including the 26 arbitration cases, NASD Regulation officials could not explain why the information had not been included in the NIRF database. However, NASD Regulation staff corrected these discrepancies by adding the information to the NIRF database.
The discrepancies we found that NASD Regulation corrected show that independent review and testing of the information derived from CRD could help reduce errors and help ensure that all disclosable information is provided to callers. If NASD Regulation proceeds as planned to change its disclosure policy so that most of the disciplinary-related information in CRD is considered disclosable, the chances for judgment errors by NASD Regulation staff in determining whether information is disclosable would diminish. Also, after the currently planned redesign of CRD is implemented, NASD Regulation officials expect that reports of disciplinary information will be prepared directly by querying CRD for disclosable information, rather than relying on staff judgments of whether CRD information is disclosable or nondisclosable. NASD Regulation’s ability to provide hotline callers with timely and complete information on brokers depends on how and when the information is reported to CRD. NASD Regulation officials said that, in the absence of a systematic means in the current CRD to monitor the timeliness of filings, they are concerned that brokers’ disclosures may not be as timely as they should be. Also, according to the officials, current reporting of disciplinary information may not be as complete as it could be because not all regulators are obligated to report their disciplinary actions to CRD. They said that most regulators report directly into CRD electronically, or at least publish their disciplinary actions. For those regulators who publish their actions, NASD Regulation staff first are to review the publications and then enter the disciplinary information into CRD. During 1996 and 1997, NASD Regulation plans to implement a redesigned CRD. According to NASD Regulation officials, the new CRD will contain many improvements that will make the system more useful to member firms, regulators, and investors.
The redesigned CRD is to feature fully electronic reporting by both broker-dealers and regulators that is intended to provide more accurate and timely disciplinary information, and database modifications to allow better analytical capability. For example, the officials anticipate that NASD Regulation or SEC should be able to better select broker-dealers for examination based upon analyses of sales representatives’ disciplinary records. The redesigned CRD is also to allow NASD Regulation to track the timeliness of disclosures by brokers. The NASD Regulation officials said that, as a result, the new CRD will upgrade the efficiency of the registration process, ensure more timely reporting of disciplinary information, and make the information easier for the public to understand because of its uniform reporting structure. NASD Regulation officials said CRD redesign is a large project that is being done in three phases over the next 2 to 3 years and is expected to cost about $57 million. According to NASD Regulation officials, broker-dealers will be on-line during 1996, and federal and state securities regulators and SROs beginning in 1997. Although the number of hotline callers has grown since the hotline was established in 1991, by 1995 it was still used by only a small percentage of individual U.S. investors. Because NASD Regulation’s methods for publicizing the hotline may not be successful in informing large numbers of investors about the hotline, many may be unaware of the hotline’s existence or the valuable information available to its callers. Making more investors knowledgeable about the hotline could allow them to have better information on hand to assist them in making important investment-related decisions and also reduce the likelihood that they will become victims of unscrupulous brokers. This could possibly be done at relatively low cost by adding hotline information to already required account-opening documents or to account statements.
One step an NASD Regulation official has suggested is to make broker disciplinary information directly available to investors over the Internet. The effectiveness of the NASD Regulation hotline greatly depends on NASD Regulation’s willingness to fully inform investors of their brokers’ disciplinary records. By not disclosing information from CRD that most state securities regulators said they already disclose, NASD Regulation may be putting some of its hotline callers at a disadvantage if they do not know that they can call state regulators for the nondisclosable information. Providing all disciplinary-related information, including unproven pending allegations, raises a risk of unfairly tarnishing brokers’ reputations. While we recognize this risk and agree that proper risk management controls are needed, we also believe that protecting potential investors and the integrity of securities markets are equally important goals. Further, the CRD reporting changes that NASD Regulation and state regulators have agreed to make are intended to help protect brokers’ reputations. Under NASD Regulation’s current disclosure program, NASD Regulation staff have to review disciplinary information and make judgments about whether information is disclosable. This and other problems have resulted in instances when callers were not provided with all of the disclosable information about their brokers or were provided with information that should not have been disclosed. Quality assurance checks such as independent review and testing of the information derived from CRD would help ensure that errors are corrected and all disclosable information is provided to callers. 
To help ensure that all relevant information is made available to as many investors as possible, we recommend that the Chairman, SEC, encourage and support NASD Regulation efforts to (1) explore other ways of publicizing the hotline to a wider audience of investors, such as including the hotline number on account-opening documents or account statements, and making disciplinary-related information directly available to investors through the Internet; (2) provide hotline callers with all the relevant disciplinary-related information available in CRD, such as whether a broker is the subject of a customer complaint, a settled arbitration, or a settled civil case (if NASD Regulation does not disclose this additional information, it should at least inform callers that the information is available from most state regulators); and (3) develop and implement cost-effective quality assurance checks, such as independent review and testing of information derived from CRD, to ensure that information provided to hotline callers is disclosable and complete. We provided a draft of this report to NASD and SEC for review and comment. We obtained written comments from NASD Regulation (see app. IV). We obtained oral comments from SEC’s Division of Market Regulation and Office of Compliance Inspections and Examinations in a meeting on July 23, 1996. NASD Regulation was pleased that our review showed a high degree of user satisfaction with the telephone hotline. It generally agreed with our findings and conclusions and said it had already begun, or plans to begin, actions that would result in implementation of our recommendations. In response to our recommendation to explore other ways of publicizing the hotline to a wider audience of investors, NASD Regulation noted actions that it is taking to further publicize the hotline. It stated that it plans to provide a means through the Internet for investors to access electronically the data in the CRD after full implementation of the redesigned CRD in 1998.
In addition, NASD Regulation said it has established an Office of Individual Investor Services that will actively promote and publicize the availability of disciplinary information through its Public Disclosure Program. NASD Regulation also stated that its membership committee plans to give full consideration to including the hotline number on account-opening documents or account statements. In response to our recommendation to provide hotline callers with all the relevant disciplinary-related information available in CRD, NASD Regulation said that the NASD Board of Governors has approved the expansion of the Public Disclosure Program and will file the appropriate amendments with SEC in August 1996. In response to our recommendation to develop and implement cost-effective quality assurance checks, NASD Regulation said that it has introduced a revised process to ensure the accuracy of disclosure reports. It said that all new disclosures are reviewed by a second staff person and that a statistical quality control process will be instituted to measure systematically the accuracy of the program. In addition, NASD Regulation said that the program will be subject to periodic independent audits by its Internal Review group. SEC generally agreed with our findings and conclusions and expressed support for the types of changes that we recommend. SEC suggested several technical changes that have been made where appropriate. As agreed with you, unless you publicly release its contents earlier, we plan no further distribution of this report until 5 days from its issue date. At that time we will provide copies to the Chairman, House Committee on Commerce; the Chairman, Subcommittee on Telecommunications and Finance; the Ranking Minority Member, Committee on Commerce; other interested committees and subcommittees; SEC; NASD; and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V.
Please contact me at (202) 512-8678 if you have any questions about this report. To answer questions about the accessibility and usefulness of the NASD hotline, we surveyed a sample of hotline callers who inquired about brokers and were mailed disciplinary history information from the disclosable portion of CRD records. To review the completeness of the information disclosed, we compared the CRD records of a subsample of 100 of the subjects of these inquiries to the information NASD disclosed to the hotline callers. In addition, we surveyed all state securities administrators to help document the differences in disclosure policies and to determine the states’ publicity efforts. The NASD hotline customer satisfaction survey and the survey of state securities regulators and their results are shown in appendixes II and III, respectively. To obtain representative and precise estimates of the levels of customer satisfaction, completeness of disclosure, and accuracy of hotline information, we first needed to draw random samples of callers and the subjects they asked about from a complete listing of all callers and subjects, without duplications, omissions, or ineligible entries. We first drew an initial unstratified random sample of 552 of all 7,176 response letters produced by NASD in answer to investor inquiries, as recorded in the NIRF database from December 1, 1994, through January 31, 1995. We chose this period, the most recent possible, because we wanted to measure caller opinions with the minimum possible memory loss. After examining the characteristics of the information requests made in this period, and consulting with NASD, we determined that these inquiries were typical of recent NASD hotline activity. The sample frame, and our initial sample, contained some responses to requests that we deemed ineligible for our study.
We removed from our initial sample any requests for information identified by the NIRF database record to have been made by firms—banks, law firms, broker-dealers—and other requesters acting as agents for private firms. For the caller survey, it was our aim to learn about the experiences of the individual public investor. Unfortunately, we could only remove those callers who clearly identified themselves to hotline personnel as private sector callers and were recorded in the NIRF database as firms. Approximately 11 percent of the elements in our initial sample were identified as private-sector requests. An undetermined number of callers identified themselves as public requesters, yet may have represented firms in some capacity. In addition, we attempted to remove all inquiries made by the subjects themselves—registered representatives calling to request a copy of their own disciplinary history—because they would not be typical of the individual public investor. For the caller survey, we also removed multiple inquiries made by the same caller about different subjects. Finally, we removed from the caller survey sample any requests that were made in writing, rather than in a phone call to the toll-free hotline. After removing these ineligible cases from our first sample of 552, we were left with an adjusted sample size of 448 NASD responses to caller inquiries. Then, we drew a supplemental sample of 58 from the initial 7,176 response letters, of which 40 remained after removing ineligible elements. This left us with an adjusted sample size of 488. Furthermore, while collecting data from this sample, we discovered that an additional 5 were also ineligible for some of the reasons mentioned above, leaving us with a final sample size of 483 eligible sampled elements. See table I.1 for a more complete description of the dispositions of the mail survey sample.
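The sample accounting described above reduces to simple arithmetic; the sketch below is our own illustration, using only the counts stated in the text, and is not part of GAO's methodology:

```python
# Sample accounting for the hotline caller survey, using counts from the text.
first_sample = 552            # initial unstratified random sample
first_sample_eligible = 448   # remaining after removing ineligible requests

supplemental_sample = 58      # supplemental draw from the same 7,176 letters
supplemental_eligible = 40    # remaining after removing ineligible elements

adjusted_sample = first_sample_eligible + supplemental_eligible
late_ineligibles = 5          # additional ineligibles found during data collection
final_sample = adjusted_sample - late_ineligibles

print(adjusted_sample, final_sample)  # 488 483
```

The arithmetic confirms the adjusted sample of 488 and the final eligible sample of 483 reported in the text.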
Table I.1: Disposition of the mail survey sample

Initial sample selected before adjustments:
- Number of elements in first sample: 552
- Number of elements in supplemental sample: 58
- Total initial sample before adjustments: 610

Initial sample elements found to be outside study population:
- Requests made by firm and nonpublic requesters
- Multiple requests made by requester already in sample
- Requests made by registered representatives
- Requests initiated by NASD personnel, foreign addresses
- Other ineligible elements found during survey period

Final disposition of eligible sample elements:
- Eligible elements (total initial sample minus total ineligibles): 483
- Undeliverable (no valid address)
- Attempted telephone contact for follow-up interview: 171
- Unable to contact by telephone after five attempts
- Completed mail questionnaires and telephone interviews: 390 (81 percent)

For the survey of hotline callers, we developed a mail questionnaire (shown in app. II) to measure callers’ satisfaction with their contact with hotline personnel and the information they received by mail from NASD. We also included questions to collect background information on the callers, their reasons for calling the hotline, and how they learned of the hotline. To ensure that the survey would collect the intended data, the questionnaire was pretested with actual investors from New York and Virginia, whom we identified from our listing of the hotline-caller population. In late April 1995, we mailed questionnaires to all 483 investors in our final sample of callers. In the third week of May 1995, we mailed replacement questionnaires to the sampled callers who had not yet responded. After an additional 6 weeks, we began to make follow-up telephone calls to almost all (171) of the hotline callers in our sample who had not yet responded. In these contacts with nonrespondents, we used a telephone questionnaire to collect answers to some of the more important survey questions from the mail questionnaire. We made up to five attempts to reach the nonrespondents by telephone.
See table I.1 for the final dispositions of the 171 nonresponse follow-up cases. In August 1995, we closed out the telephone follow-up effort, having received an additional 96 usable responses, for a total of 390 usable responses. This represents an overall response rate of 81 percent. Because we surveyed only one of a large number of possible samples of caller inquiries to develop the statistics in this report, each of the population estimates made from this sample has a sampling error, which is a measure of the precision with which the estimate approximates the population value. The sampling error is the maximum amount by which estimates derived from our sample could differ from estimates from any other sample of the same size and design, and is stated at a confidence level, in this case 95 percent. This means that if all possible samples were selected, the interval defined by their sampling errors would include the true population value 95 percent of the time. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information that are available to respondents, or in the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. To make the comparison of information available in the NIRF database to the full CRD, we drew a random subsample of 100 of the registered representatives and broker-dealers who were the inquiry subjects from our first sample of 552 hotline callers (see table I.1).
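For a proportion estimated from a simple random sample, the 95-percent sampling error described above is commonly approximated with the normal-theory formula sketched below. This is our own illustration (the function name and the example proportion are assumptions), not GAO's actual computation, which would also reflect the particular sample design:

```python
import math

def sampling_error(p, n, z=1.96):
    """95-percent margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95-percent confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: a hypothetical proportion of 0.80 estimated from the 390 usable responses.
moe = sampling_error(0.80, 390)
print(f"{moe:.3f}")  # 0.040, i.e., roughly +/- 4 percentage points
```

Larger samples shrink this margin roughly with the square root of n, which is why precision claims in the report are tied to the final sample size.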
After removing seven duplicate inquiry subjects (in which the same broker-dealer was the subject of more than one sampled inquiry), and drawing another seven replacement subjects, we proceeded to collect CRD data on a total of 100 eligible subjects. For each of the subjects, we completed a data collection instrument summarizing the subject’s recent disclosable and nondisclosable disciplinary history. Our goal was to determine whether hotline callers received the correct and complete information in accordance with NASD’s disclosure policies. For the comparison sample, our data collection instrument covered disciplinary actions found on CRD from January 1, 1990, through January 31, 1995. For information we found on CRD that was not disclosable, we documented the type of action, the allegation, and if applicable, the dollar amounts being contested. We did not validate the accuracy of any of the information found in the CRD. Because we reviewed only one possible sample of CRD subject records, our estimates for the body of NIRF database records as a whole are subject to the same sampling and nonsampling errors as described above for the Hotline Customer Satisfaction Survey. For the survey of state securities regulators, we obtained a list of state securities administrators in all 50 states, Puerto Rico, and the District of Columbia. This list was produced by the North American Securities Administrators Association and was dated February 6, 1995. We mailed out 52 questionnaires in early May 1995. When the survey was closed out in September 1995, we had received a total of 51 completed surveys. Because the survey of state securities regulators covered all elements of this population, this component of our research is not subject to sampling errors as described above. Nonsampling errors, however, can affect any survey. The following are GAO’s comments on NASD’s July 16, 1996, letter. 1.
NASD said that we should rephrase our recommendation to urge SEC to approve its proposed rule as soon as it is filed with SEC. Our recommendation meets our intent to ensure that investors get the information they need to make informed investment decisions. It would be premature to make the recommendation as specific as NASD suggests until its rule amendments are filed with SEC. 2. NASD said that we should emphasize the extent to which users reported high levels of satisfaction with the service they receive when they use the hotline. Text was modified to include the percentage range of those who responded very satisfied. 3. Text was added to note the 1988 establishment of the NASD Public Disclosure Program. 4. Caption and text were modified to state that the methods used to publicize the hotline may not reach all investors. 5. NASD noted variations on the handling of formal complaints and customer complaints and suggested that we clarify what is meant by dismissed customer complaints. To eliminate the confusion about the definition of dismissed customer complaints, we have changed the example to pending customer complaints. 6. Text was revised to include NASD’s recommended language regarding the absence of a systematic means in the current CRD to monitor the timeliness of filings. 7. NASD recommended that we distinguish firms from individuals throughout the report rather than use the term “broker” as explained in a footnote on page 1. We have carefully reviewed every instance in which we use the term “broker” to refer to both broker-dealers and their individual associated persons. In every case, the term broker refers to both. We distinguish between the two only when we refer to either one or the other. 8. NASD asked that we use NASD Regulation throughout the report to refer to the entity responsible for the hotline. We added a footnote explaining the restructuring of NASD and refer to NASD Regulation where appropriate. Bernard D.
Rashes, Assistant Director; John D. Carrera, Senior Evaluator; and Despina Hatzelis, Evaluator.
Pursuant to a congressional request, GAO reviewed the effectiveness of the National Association of Securities Dealers (NASD) hotline, focusing on: (1) investors' accessibility to the hotline; and (2) whether the information provided by the hotline meets NASD disclosure policies. GAO found that: (1) over 300,000 investors have used the NASD Regulation hotline to obtain background information and disciplinary histories on their brokers; (2) hotline users represent less than 1 percent of the investors who own shares in publicly traded companies or a mutual fund; (3) most investors make investment-related decisions without using the NASD hotline; (4) the hotline does not disclose information related to a broker's involvement in civil cases, arbitration, or customer complaints; (5) these allegations are not disclosed until they are proven; (6) most state securities regulators disclose brokers' disciplinary histories to those investors that request background information; (7) the amount and type of information an investor receives depends on who the investor calls; (8) most of the disclosed information meets NASD disclosure policies; (9) NASD does not routinely verify whether an investor receives all of the information it requested; (10) NASD did not comply with disclosure policies in 13 cases; (11) there were 42 instances in which NASD failed to disclose all of the disclosable information, and 2 instances in which it disclosed information that should not have been disclosed; and (12) NASD Regulation is redesigning its central registration depository (CRD) to provide more accurate and timely disciplinary information.
IRFs generally do not receive appropriations directly. Instead, they are accounts that may receive reimbursements and advances from other federal accounts. In addition, they may accept fees collected from nonfederal sources for the sale of government products or services. The use of IRFs to fund consolidated or shared services allows agencies to benefit from economies of scale or take advantage of specialized expertise that they may not have. The market-like atmosphere promoted by IRF-supported services is intended to create incentives for federal customers and managers to exercise cost control and economic restraint. IRF management affects the success of the programs they support. Within the Department of Commerce (Commerce), there are six IRFs that support either management and administrative services—such as building security and human capital management—or specialized services based on the unique nature of the agency’s mission. For example, Census maintains a nationwide survey infrastructure and has expertise and address lists that would be uneconomical for others to replicate. Thus, it conducts surveys on behalf of other organizations (e.g., Department of Housing and Urban Development’s (HUD) American Housing Survey and the Bureau of Labor Statistics’ (BLS) Consumer Expenditure Survey). The Commerce Departmental and the Census WCFs were established to support services and projects that are performed more advantageously when centralized, such as information technology services and acquisition management. The statutory authority requires both funds to charge rates that recover agencies’ actual costs of operations. Customers of both WCFs either pay in advance or reimburse the fund depending on the terms of the agreement. Three entities play important roles in the management of the Commerce Departmental WCF. First, the Commerce Office of Executive Budgeting (OEB) is responsible for overall management of the Commerce Departmental WCF.
Second, the algorithm review group, which includes representatives from the fund’s customer bureaus and OEB, convenes every other year to review rate-setting formulas. At this meeting, service providers present their billing methods and implement any changes to the rate-setting formulas that are agreed upon within the group. Finally, the Commerce Chief Financial Officer Council (Commerce CFO Council) comprises the CFOs from each of Commerce’s bureaus, giving each customer bureau a “seat at the table.” This Commerce CFO Council has an important role related to WCF increases and changes to the algorithms used to determine charges. It meets at least annually to review and update service rates, but may meet more frequently as needs arise. Management responsibility for the Census WCF is delegated across various divisions of the bureau. For example, the Budget Division leads the setting and reviewing of service rates each year as well as fund reconciliation. The Finance Division records and tracks customer charges and payments in the Commerce Business System (CBS), which is the financial system used throughout the Department of Commerce. The Acquisition Division reviews, approves, and tracks the status of interagency agreements with Census’ external customers. The program offices—also referred to as the sponsoring divisions—are responsible for the day-to-day management of the agreements and build relationships with the customer agencies. Both the Commerce Departmental and Census WCFs primarily support centralized management and administrative services (M&A) for their respective bureaus and programs. For example, almost all of the Commerce Departmental WCF collections support centralized M&A services for its 13 bureaus.
In contrast, about half of the Census WCF collections support M&A services for its internal divisions; most of the remaining collections support survey-related services Census performs for other federal agencies, and a small share is provided to nonfederal entities. This range of activities complicates management of the fund. Accordingly, Census maintains separate fund components to account for these different activities. Figure 1 illustrates the flow of funds from customers into the Commerce Departmental WCF for the provision of M&A services. The majority of activity is attributed to services provided by four offices—General Counsel, Human Resources Management, Security, and Administrative Services. Customers of the Commerce Departmental WCF are billed directly for services provided. A combination of mostly federal customers, including internal Census divisions/offices, pay into each of the various components of the Census WCF: Reimbursable, Cost Collection, and Cost Allocation. As shown in figure 2, the Reimbursable component supports services purchased by a single federal agency or nonfederal entity, such as the American Housing Survey for HUD, or services for New York City and Duke University. The Cost Collection component supports services where multiple federal agencies or customers share the costs and benefits of a single project, such as the Current Population Survey. Direct and indirect costs for customers are distributed among separate components of the Census WCF: direct costs are distributed to the Reimbursable and Cost Collection components, whereas indirect costs are distributed to the Cost Allocation component. We identified four key operating principles that offer a framework to effectively manage WCFs. As previously discussed, to identify key principles, we reviewed governmentwide guidance on business operating principles, internal controls, managerial cost accounting, and performance management.
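The routing of costs into the three Census WCF components described above can be sketched as a simple rule. This is a hypothetical illustration following the text's description only; the function name and example inputs are our assumptions, not an actual Census system:

```python
# Hypothetical sketch: route a cost into a Census WCF component as described
# in the text. Indirect costs go to the Cost Allocation component; direct
# costs go to Reimbursable (single customer) or Cost Collection (costs shared
# by multiple customers).
def wcf_component(cost_type: str, shared: bool) -> str:
    if cost_type == "indirect":
        return "Cost Allocation"
    return "Cost Collection" if shared else "Reimbursable"

print(wcf_component("direct", shared=False))   # Reimbursable (e.g., American Housing Survey for HUD)
print(wcf_component("direct", shared=True))    # Cost Collection (e.g., Current Population Survey)
print(wcf_component("indirect", shared=True))  # Cost Allocation
```

Keeping the routing rule explicit mirrors why Census maintains separate fund components: each component accounts for a distinct mix of customers and cost types.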
In addition, we met with staff from the two WCFs and OMB to obtain their views on the use of these principles to assess WCFs. Commerce Departmental, Census, and OMB staff generally found the principles to be reasonable. Moreover, we considered our past work. The significance of these four principles is described below. 1. Clearly Delineate Roles and Responsibilities: Appropriate delineation of roles and responsibilities promotes a clear understanding of who will be held accountable for specific tasks or duties such as authorizing and reviewing transactions, implementing controls over WCF management, and helping ensure that related responsibilities are coordinated. In addition, this reduces the risk of mismanaged funds and tasks or functions “falling through the cracks.” Moreover, it helps customers know who to contact in the event they have questions. 2. Ensure Self-Sufficiency by Recovering the Agency’s Actual Costs: Transparent and equitable pricing methodologies allow agencies to ensure that rates charged recover agencies’ actual costs and reflect customers’ service usage. If customers understand how rates are determined or changed including the assumptions used, customers can better anticipate potential changes to those assumptions, identify their effect on costs, and incorporate that information into budget plans. A management review process can help to ensure the methodology is applied consistently over time and provides a forum to inform customers of decisions and discuss as needed. 3. Measure Performance: Performance goals and measures are important management tools applicable to all levels of an agency, including the program, project, or activity level. Performance measures and goals could include targets that assess fund managers’ responsiveness to customer inquiries, the consistency in the application of the funds’ rate-setting methodology, the reliability of cost information, and the billing error rates. 
Performance measures that are aligned with strategic goals can be used to evaluate whether, and if so how, WCF activities are contributing to the achievement of agency goals. A management review process comparing expected to actual performance allows agencies to review progress towards goals and potentially identify ways to improve performance. 4. Build in Flexibility to Obtain Customer Input and Meet Customer Needs: Opportunities for customers to provide input about WCF services, or voice concerns about needs, in a timely manner enable agencies to regularly assess whether customer needs are being met or have changed. This also enables agencies to prioritize customer demands and use resources most effectively, enabling them to adjust WCF capacity up or down as business rises or falls. By incorporating these principles in written guidance, agencies promote consistent application of management processes and provide a baseline for agency officials to assess and improve management processes. Moreover, agencies can use the guidance as a training tool for new staff and as an information tool for customers, program managers, stakeholders, and reviewers. Figure 3 summarizes the four principles and their underlying components. The responsibility for managing and overseeing aspects of the Commerce Departmental and Census WCFs is segregated across a number of offices and entities, thus minimizing the risk of error in fund management. However, neither agency’s WCF guidance includes complete information on the roles and responsibilities of all key personnel. The Commerce Department’s Office of Executive Budgeting has created a working environment that promotes communication, according to customers and service providers. This has resulted in a clear understanding among Commerce Departmental WCF managers, service providers, and customers about the roles and responsibilities of key personnel who manage the Commerce Departmental WCF. 
Customers and service providers we interviewed said that OEB is where they go to get answers or raise concerns. In addition, customers and service providers said they communicate directly with each other or through OEB about services they receive and rates charged. Service providers expressed appreciation for OEB’s role in facilitating and coordinating regular communication between the service providers and customers. For example, two of the four service providers said they interact with OEB on a daily basis, and all four service providers said that communication occurs on at least a monthly or quarterly basis through meetings or status reports. However, while the Commerce Departmental WCF handbook includes the roles and responsibilities of many key personnel and review groups involved with fund management, it leaves out information on the cross-departmental role of the Commerce CFO Council, which comprises the CFOs from each of Commerce’s bureaus and has an important role regarding increases or changes to the WCF. The absence of this entity from the handbook results in an incomplete reflection of the process and a missed opportunity to promote understanding by new staff and customers. In contrast to the centralized management of the Commerce Departmental WCF, management responsibilities for the Census WCF are delegated across several divisions, including the Census Budget, Finance, and Acquisition Divisions. Although decentralization provides segregation of duties, Census does not have a formal process to coordinate and consolidate information managed by these disparate divisions to provide a corporate view of the WCF. In addition, information about the roles and responsibilities of Census management is incomplete, spread across three documents, and contains varying levels of detail and clarity.
For example, the Census WCF Manual lists key personnel responsible for management of the WCF but does not describe their duties and responsibilities or provide specific contact information. This limits the usefulness of the guidance for bureau staff, customers, and other stakeholders. For example, one of the Census WCF’s larger customers we interviewed was unsure who to speak with about questions relating to service needs (e.g., the level of service to expect and the wait times before receiving services) and suggested that the Census WCF develop guidelines about service needs and expectations. The Commerce Business System (CBS), which is the financial system used throughout most of Commerce, does not provide a mechanism to record the period of availability of appropriations advanced from customer agencies. The Commerce Department advised us that both WCFs accept advances or reimbursements. When customers pay in advance, those advances have not yet been “earned” in performance of an agreed-upon service and still retain the period of availability from the original appropriation. If the providing agency were to obligate against advanced funds after the appropriation account closes, the customer agency would be required to transfer currently available funds to the WCF. If the customer does not have such funds available, it could be exposed to a possible Anti-Deficiency Act violation. Thus, to appropriately manage the use of funds, a performing agency needs a way to track whether funds remain available for purposes of the interagency agreement when it bills against the advance. Similar to what we found in our prior work at NIST, Census tracks customer funds by the period of performance, which may not always coincide with the availability of the funds.
Although customer agencies bear ultimate responsibility for proper use of their funds, we have previously reported that the performing agency shares responsibility with its customer agencies to ensure the proper use of federal funds when entering into interagency agreements. Census officials can verify the availability of advanced funds through the Treasury Account Symbol (TAS); however, TAS is not electronically captured in CBS. Unless CBS is updated to include a mechanism for tracking the availability of funds, the performing agency cannot ensure that funds are legally available when it bills against the advances. This is an indication of a potential internal control weakness over resources and, as mentioned above, creates a risk that customer agencies may incur an Anti-Deficiency Act violation. According to service providers and customers, the rate-setting processes for the Commerce Departmental WCF are transparent, clearly coordinated, and designed to recover annual actual costs. For example, the meetings of the Commerce CFO Council—which is comprised of CFOs from each of the Commerce Department’s bureaus—provide a regular source of information on the status of funding recommendations to Commerce’s CFO. In addition, the algorithm review group and OEB review rates charged at least annually to determine how much each customer bureau will pay into the Commerce Departmental WCF. These rates are based on algorithms that include variables such as prior year actual costs associated with customers’ service usage. For example, when determining rates for building maintenance, one important variable is the square footage of the customer office space. Similarly, when determining rates for human resource management services, the number of full-time equivalents is an important variable. This rate-setting process, including the method for setting and distributing charges among users, is clearly explained in the Commerce Departmental WCF handbook. 
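The usage-based rate algorithms described above reduce, in their simplest form, to proportional allocation of a service's actual cost by a usage driver (square footage for building maintenance, FTE counts for human resources services). The sketch below is illustrative only: the bureau names, driver values, and cost figures are hypothetical, and Commerce's actual algorithms incorporate additional variables.

```python
# Minimal sketch of usage-based cost allocation: each customer's charge for a
# service is proportional to its share of a usage driver.
# All names and figures below are hypothetical.

def allocate_costs(total_cost, usage_by_customer):
    """Distribute total_cost across customers in proportion to their usage."""
    total_usage = sum(usage_by_customer.values())
    return {customer: total_cost * usage / total_usage
            for customer, usage in usage_by_customer.items()}

# Building maintenance allocated by office square footage (illustrative).
square_feet = {"Bureau A": 50_000, "Bureau B": 30_000, "Bureau C": 20_000}
charges = allocate_costs(1_000_000, square_feet)
# Bureau A bears half the cost because it occupies half the space.
```

By construction, the charges sum exactly to the service's actual cost, which is what allows a fund using such a methodology to break even.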
Moreover, Commerce customers said they understood how rates were determined and were satisfied with the amount of input they had in the process. Managers of the Commerce Departmental WCF said that their goal each year is to set rates that cover annual WCF costs and maintain at least the same level of services as the prior year. OEB officials said that the WCF has a limited number of significant cost increases, so a large carryover balance is not needed to sustain the fund. The Census WCF is also designed to recover actual costs and bases its M&A service rates on algorithms linked to expected service usage. In contrast to the Commerce Departmental WCF, managers of the Census WCF maintain an operating reserve to help keep rates stable throughout the decennial census cycle. The Census WCF charges rates that are higher than needed earlier in the decennial cycle to break even later in the decennial cycle. Information about how rates are charged and costs distributed is incomplete and dispersed across three documents. Census customers we spoke with had mixed responses about how M&A costs are determined. For example, five of the seven customers we spoke with said Census informed them of the charges in general terms but did not describe how the individual costs that make up the total M&A costs are determined. However, the customers provided no detail about their efforts to obtain such information from Census. Nonetheless, the lack of clarity in how M&A costs are determined makes it difficult for customers to challenge rates or suggest improvements. A recent Census task force report on cost-saving opportunities related to survey work performed for other federal agencies found that no one Census division had authority to oversee and manage the allocation of resources or the timing of delivering services to any one customer. This report recommended that Census provide greater details on survey costs and establish a “single point of authority” for communicating with customers. 
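The Census reserve mechanism described above, charging rates higher than needed early in the decennial cycle in order to break even later, can be illustrated with a level annual rate charged against fluctuating costs. All figures in this sketch are hypothetical.

```python
# Hypothetical sketch of reserve-based price stability over a census cycle:
# a level annual rate is charged while actual costs fluctuate, so the fund
# accumulates a reserve in low-cost years and draws it down in the peak year.

projected_costs = [70, 80, 90, 160, 100]  # annual M&A costs over a cycle ($M)
level_rate = sum(projected_costs) / len(projected_costs)  # 100 per year

reserve = 0.0
balances = []
for cost in projected_costs:
    reserve += level_rate - cost  # surplus builds the reserve, deficit draws it
    balances.append(reserve)

# The reserve grows before the peak year, is drawn down to cover it, and the
# fund breaks even over the full cycle (ending reserve of zero).
```

The point of the design is that customers see a stable rate each year even though the underlying costs swing sharply across the decennial cycle.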
Both Commerce and Census WCFs have a management review process that examines how rates are set. However, the level of transparency differs between the two organizations. For example, each month OEB creates and updates a “status of funds” document that tracks available funding in the Commerce Departmental WCF throughout the fiscal year. Commerce officials said OEB uses this document to regularly monitor the WCF, and the document is shared with the Commerce CFO each month. In addition, OEB reconciles actual obligations with estimates to identify and investigate variances of 10 percent or more. The service providers meet quarterly with OEB to review budget status and any changes in customer service needs. This process is also documented in the Commerce WCF handbook and helps ensure that rates recover the agency’s actual costs. In contrast, Census’ WCF reconciliation and review process lacks transparency. Census provides a fragmented and limited description of how it sets rates, and there is no formal process to communicate with customers. According to Census officials, the rates for the M&A services are reviewed annually and the costs of the survey services are reconciled when a project concludes. However, two of the three Census WCF internal customers said they had limited discussions with, and input to, Census WCF managers about how rates were determined. Moreover, it is unclear what information is provided, or when, to senior Census management (e.g., Census’ CFO). Documentation provided by Census officials did not show what assumptions were used to set rates, whether they were applied consistently, and if actual costs are fully recovered. Although the Census WCF is subject to periodic reviews conducted by the Budget Division to compare revenues generated with the costs captured, the documentation does not include further details on how this is done or with whom the information is shared.
Census officials said the WCF is discussed during quarterly budget review meetings with senior management. However, the document that Census officials shared with us, which is used to explain the components of the WCF balance, includes amounts only related to the Cost Allocation component. Without transparent processes for reviewing and updating the service rates, Census misses the opportunity to assure customers and other stakeholders that rates charged are set fairly and to receive suggestions from stakeholders on potential improvements. OEB has processes in place that help it manage the operations of the Commerce Departmental WCF. For example:
- A “status of funds” report is updated monthly and provided to Commerce’s CFO. This report helps WCF managers track the remaining balance of customers’ funds to pay for WCF services.
- Variances of 10 percent or more in the Commerce Departmental WCF’s estimated and actual obligations are investigated to obtain justification.
- OEB meets quarterly with the director of each office to review the current status of the organization’s budget.
- Customers are surveyed annually about the quality of OEB’s assistance and written guidance for the services OEB provides.
However, this survey asks broad questions that are not targeted to a specific activity or level of performance. For example, the only references to the Commerce Departmental WCF are general questions about customer interactions with OEB staff and whether the Commerce WCF handbook is useful. While OEB finds these processes helpful in day-to-day management of WCF activities, such as tracking available balances of customer funds, it does not define these processes as measures to assess WCF performance. We believe that Commerce could use these processes as a starting point to determine what specific measures would be helpful to continuously improve WCF management.
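The 10 percent variance review noted above amounts to a simple threshold check on estimated versus actual obligations. The sketch below is illustrative; the line items and dollar amounts are hypothetical.

```python
# Illustrative sketch of a variance review: flag any line item whose actual
# obligations differ from the estimate by 10 percent or more, for follow-up.
# Line items and dollar amounts are hypothetical.

def flag_variances(estimates, actuals, threshold=0.10):
    """Return {item: (estimate, actual)} where relative variance >= threshold."""
    flagged = {}
    for item, estimate in estimates.items():
        actual = actuals.get(item, 0.0)
        if estimate and abs(actual - estimate) / estimate >= threshold:
            flagged[item] = (estimate, actual)
    return flagged

estimates = {"Security": 200_000, "HR": 150_000, "Admin": 100_000}
actuals = {"Security": 230_000, "HR": 152_000, "Admin": 88_000}
flagged = flag_variances(estimates, actuals)
# Security (+15%) and Admin (-12%) would be investigated; HR (+1.3%) would not.
```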
As part of its strategic plan, Commerce outlined departmentwide strategic goals and performance measures in its “balanced scorecard approach.” The offices that provide services supported through the Commerce Departmental and Census WCFs are assessed as part of this approach, but currently this does not include measures to assess how the WCFs are operating or if they could each function better as an entity. Customers corroborated receiving surveys from other service providers, such as the Acquisition and Finance Divisions, but were unable to provide copies of these surveys. Further, Census WCF managers could not provide any examples of fund-specific performance measures. Although the Commerce Departmental and Census WCFs are intended to achieve economies of scale by supporting services and projects that are performed more advantageously when centralized, both WCFs support similar M&A services that could potentially be supported by one WCF. Officials at Commerce and Census were unable to clearly explain why each WCF provides the same or similar services, or why these services could not be consolidated. For example, both the Commerce Departmental and Census WCFs support a range of space management, travel, and training services for staff, as well as other personnel-related activities. These are potential areas that could be consolidated. In addition, by establishing WCF-specific performance measures, fund managers could benchmark or compare fund performance, which would be useful in identifying improvement opportunities and deciding whether or not to consolidate services. In general, customers we interviewed said they had regular and ongoing interactions with fund managers or service providers. Commerce Departmental customers said they communicate regularly about the type and amount of services received and rates charged. Census customer concerns about overhead costs initiated the recent Census task force report previously described.
As a result of the report findings, Census made several rate changes. Also in response to customer input, the Census Bureau recently decided to close six field offices. The Commerce CFO Council actively seeks WCF managers’ involvement in setting customer priorities and addressing customer needs. The council meets to discuss individual bureau requests and recommends final allocations to Commerce’s CFO, including identifying any potential need to shift funds across programs. When prioritizing customer demand, Commerce Departmental WCF managers also have to incorporate the statutory cap that limits the amount NOAA pays into the Commerce Departmental WCF. Because fund managers strive to ensure self-sufficiency of the WCF and equitably distribute costs across customers, this cap limits the amount other bureaus pay into the Commerce Departmental WCF and thus the level of services that can be supported for all customers. As a result, Commerce Departmental WCF managers in the past have had to propose reductions to services to compensate for the NOAA cap and still provide needed services. When the needs of customers exceed the capacity of the Commerce Departmental WCF, the department and the customer enter into a memorandum of understanding (MOU), outside of the standard suite of services offered through the WCF. However, this additional process works against some of the efficiencies that WCFs are intended to provide, as WCF managers must rely on a separate mechanism to provide the same type, but a higher level, of service to customers. Although the Commerce Departmental WCF carries over some balances, the NOAA cap’s effective limit on revenues hinders the ability to build a reserve. During fiscal years 2001 through 2010, the Commerce Departmental WCF carryover balance ranged from $3 million to $13 million. Census uses its operating reserve to maintain price stability for customers throughout the decennial cycle.
During fiscal years 2001 through 2010, the Census WCF carryover balance ranged from $21 million to $430 million as reported in the President’s budget. According to Census officials, the operating reserve is a portion of the Census WCF carryover balance. In fiscal year 2010, they estimated the amount of the operating reserve ranged from $45 million to $75 million. However, they did not provide documentation to support this range. In certain cases, Census also provides separate services to customers outside the WCF’s standard offerings. Census’ process to meet changes in customer demand is designed to address the fluctuating costs of providing services to internal customers during the decennial cycle while equitably distributing costs among all internal customers. For example, in the peak years of the decennial cycle, the decennial program requires such an increased level of M&A services to support temporary staff that not all of those costs can be supported by the WCF reserve without undermining the goal of equitable cost distribution among customers. Therefore, Census directly bills these additional costs to the decennial program. WCFs provide agencies with an opportunity to operate more efficiently by consolidating services and creating incentives for customers and managers to exercise cost control and economic restraint. Given the fiscal pressures facing the federal government, consolidating operations could potentially achieve cost savings and help agencies provide more efficient and effective services. Agencies can maximize the potential of these opportunities by following four key WCF operating principles. Incorporating these principles in written guidance could promote consistent application, provide a baseline for officials to assess and improve management processes, and serve as an information tool for customers, program managers, stakeholders, and reviewers. 
Clear guidance on the roles and responsibilities of key personnel for managing the WCF promotes understanding of who will be held accountable, helps ensure that related responsibilities are coordinated, and reduces the risk that funds will be mismanaged. While the roles and responsibilities of the Commerce Departmental WCF’s management are well understood by customers, the guidance does not include complete information about all key participants. Because Census WCF guidance is fragmented and incomplete, it lacks clarity and is of limited use for employees and customers. Additionally, Census does not have a process to facilitate coordination among key WCF personnel to ensure appropriate tracking of funds. To appropriately manage the use of funds advanced from customers for projects spanning multiple fiscal years, performing agencies need a way to track whether funds advanced remain available to bill against. Both Commerce and Census use the Commerce Business System (CBS) to manage funds, but the system does not track a key element to confirm that funds advanced in support of an interagency agreement are available to cover the costs of performance. Modifying CBS would help ensure that customer funds are legally available and avoid potential Anti-Deficiency Act violations for the customer agencies. A transparent rate-setting process helps assure that customers are being charged accurately and fairly for services supported through the WCF. Commerce clearly explains its rate-setting process and customers feel they have sufficient input on the process. Census’ rate-setting process is less transparent, which limits the ability of fund managers to confirm that the WCF is self-sufficient and makes it difficult for customers to make appeals. 
WCF managers can better foster a results-oriented environment focused on continuous improvement by establishing performance measures and goals for WCF operations, ensuring those performance measures and goals align with the agency’s strategic goals, and by establishing a management review process to track WCF performance. The purpose of the WCFs is to achieve economies of scale through shared services. However, the lack of performance measures makes it difficult to know whether these economies are being achieved. Moreover, WCF-specific performance information and a corresponding management review process could be used to hold fund managers accountable for achieving the efficiencies that WCFs were designed to produce. Furthermore, the two WCFs may provide services in overlapping areas, which warrants further examination. We make seven recommendations to the Secretary of Commerce. To improve the management of the Commerce Departmental Working Capital Fund, we recommend that the Secretary of Commerce take the following actions: 1. Update the Commerce Departmental WCF handbook to include a description of the Commerce CFO Council and its roles and responsibilities. 2. To meet its responsibilities in ensuring the proper use of federal funds and to help guard against the use of canceled appropriations, revise its financial systems to electronically record and monitor the period of availability of appropriations advanced to Commerce and its bureaus from client agencies. 3. Establish performance measures to assess performance of WCF operations, such as billing error rates, and determine what additional measures would be helpful to improve WCF management. 4. Coordinate with the Census Bureau to examine the M&A services provided through both WCFs to determine what services might be consolidated. 
To improve the management of the Census Bureau Working Capital Fund, we recommend that the Secretary of Commerce require the Under Secretary for Economic Affairs as well as the Census Director to take the following actions: 5. Develop guidance that clarifies and consolidates existing WCF policies to include: a. roles and responsibilities of key personnel responsible for WCF management, and b. a process to coordinate information managed by disparate divisions to provide an overarching view of the WCF and ensure the appropriate tracking of funds. 6. Include a more detailed explanation in WCF guidance on the rate-setting process for all components of the fund, such as an explanation of how rates are determined and costs distributed, and establish a formal process similar to the Departmental WCF’s process to communicate with customers. 7. Establish performance measures to assess performance of WCF operations and determine what would be helpful to improve WCF management. We provided a draft of this report to the Secretary of the Department of Commerce for official review and comment. In his letter that is reprinted in appendix II, the Secretary agreed with our findings and recommendations and has directed the managers of both the Commerce Departmental WCF and the Census Bureau WCF to begin implementing our recommendations. Commerce and Census provided technical comments that were incorporated into the report as appropriate. We also provided portions of the report to the customer agencies with which we met. None of these customer agencies offered any technical comments. We are sending copies of this report to the Secretary of Commerce, the Under Secretary for Economic Affairs, and the Director of the Census Bureau. We are also sending copies to the appropriate congressional committees. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Denise M.
Fantone at (202) 512-6806 or fantoned@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Principle: Clearly delineate roles and responsibilities. Appropriate delineation of roles and responsibilities promotes a clear understanding of who will be held accountable for specific tasks or duties, such as authorizing and reviewing transactions, implementing controls over WCF management, and helping ensure that related responsibilities are coordinated. In addition, this reduces the risk of mismanaged funds and tasks or functions “falling through the cracks.” Moreover, it helps customers know who to contact in the event they have questions. Written roles and responsibilities specify how key duties and responsibilities are divided across multiple individuals/offices and are subject to a process of checks and balances. This should include separating responsibilities for authorizing transactions, processing and recording them, and reviewing the transactions.

Indicators: Written description of all WCF roles and responsibilities in an accessible format such as a fund manual. Discussions with providers and clients confirm a clear understanding. A routine review process exists to ensure proper execution of transactions and events.

Principle: Ensure self-sufficiency by recovering the agency’s actual costs. Transparent and equitable pricing methodologies allow agencies to ensure that rates charged recover agencies’ actual costs and reflect customers’ service usage. If customers understand how rates are determined or changed including the assumptions used, customers can better anticipate potential changes to those assumptions, identify their effect on costs, and incorporate that information into budget plans.
A management review process can help to ensure the methodology is applied consistently over time and provides a forum to inform customers of decisions and discuss as needed.

Indicators: Published price sheets for services are readily available. Documentation of pricing formulas supports equitable distribution of costs. Pricing methodology and accompanying process ensures that, in aggregate, charges recover the actual costs of operations. Management review process allows fund managers to receive and incorporate feedback from customers. Discussions with customers confirm an understanding of the charges and that they are viewed as transparent and equitable.

Principle: Measure performance. Performance goals and measures are important management tools applicable to all operations of an agency, including the program, project, or activity level. Performance measures and goals could include targets that assess fund managers’ responsiveness to customer inquiries, the consistency in the application of the funds’ rate-setting methodology, the reliability of cost information, and the billing error rates. Performance measures that are aligned with strategic goals can be used to evaluate whether, and if so how, WCF activities are contributing to the achievement of agency goals. A management review process comparing expected to actual performance allows agencies to review progress towards goals and potentially identify ways to improve performance.

Indicators: Performance indicators and metrics for WCF management (not just for the services provided) are documented. Indicators or metrics to measure outputs and outcomes are aligned with strategic goals and WCF priorities. WCF managers regularly compare actual performance with planned or expected results and make improvements as appropriate. In addition, performance results are periodically benchmarked against standards or “best in class” in a specific activity.
Principle: Build in flexibility to obtain customer input and meet customer needs. Opportunities for customers to provide input about WCF services, or voice concerns about needs, in a timely manner enable agencies to regularly assess whether customer needs are being met or have changed. This also enables agencies to prioritize customer demands and use resources most effectively, enabling them to adjust WCF capacity up or down as business rises or falls.

Indicators: Established forum, routine meetings, and/or surveys solicit information on customer needs and satisfaction with WCF performance. Established communication channels regularly and actively seek information on changes in customer demand and assess the resources needed to accommodate those changes. Established management review process that allows for trade-off decisions to prioritize and shift limited resources needed to accommodate changes in demand across the organization.

In addition to the contact named above, Carol M. Henn, Assistant Director, and Leah Q. Nash, Analyst-in-Charge, managed this assignment. Anna Chung, Elisabeth Crichton, Wati Kadzai, Margit Myers, and Amrita Sen made major contributions to this report. Tom Beall, Robert Gebhart, Felicia Lopez, and Jack Warner also made key contributions to this report.
Agencies can improve their efficiency through the use of shared services, which are often financed through intragovernmental revolving funds (IRF). GAO was asked to (1) identify key operating principles the Commerce Departmental and Census Bureau Working Capital Funds (WCF), which are one type of IRF, should follow to ensure appropriate tracking and use of federal funds and (2) evaluate how departmental and Census policies and procedures for managing these WCFs reflect these principles. GAO identified four key operating principles based on a review of governmentwide guidance on business principles, internal controls, managerial cost accounting, and performance management. GAO also discussed the reasonableness of the principles with staff of the two WCFs and the Office of Management and Budget; these staff generally found the principles to be reasonable. GAO reviewed WCF authorizing legislation and statutory authorities, analyzed agency policies and data, and interviewed agency officials. Four key operating principles offer a framework for effective WCF management: (1) Clearly delineate roles and responsibilities; (2) Ensure self-sufficiency by recovering the agency's actual costs; (3) Measure performance; and (4) Build in flexibility to obtain customer input and meet customer needs. Commerce and Census guidance do not identify the roles and responsibilities of all key WCF personnel. While all involved had a clear informal understanding of who is responsible for managing the Departmental WCF, Commerce's guidance does not discuss its CFO Council, an entity with an important role related to WCF increases and changes. Census lacks a process to coordinate and consolidate information managed by disparate divisions and ensure appropriate tracking of funds. There are also opportunities for the agencies to achieve greater management efficiencies by consolidating certain WCF services. Commerce has a transparent process to ensure recovery of actual costs.
However, Census' process could be more transparent. The Commerce Departmental WCF's rate setting and review processes are clearly described, coordinated, and designed to recover actual annual costs. Entities such as the Commerce CFO Council and algorithm review group help to facilitate shared understanding among fund managers, customers, and service providers. Census has a fragmented and limited description of its processes and lacks a formal process to communicate with customers. Census customers GAO spoke with had a mixed understanding about how certain WCF costs are determined, limiting their ability to make appeals and suggest improvements. Both WCFs could benefit from performance measures that assess operational effectiveness. Commerce and Census have identified performance measures related to organizational strategic goals. However, neither has established WCF operational performance measures such as responsiveness to customer inquiries and billing error rates. Moreover, both WCFs support similar management and administrative services that could potentially be consolidated. Both WCFs obtain customer input and have flexibility to adjust to customer needs, but challenges exist. In general, customers GAO interviewed said they had regular and ongoing discussions with fund managers or service providers. At Commerce, its CFO Council and WCF managers periodically assess and shift resources to address changes in customer needs and prioritize requests for services. However, the statutory cap on one bureau's payments into the WCF limits the level of services that can be provided to all Commerce bureaus. To provide services beyond the capacity of the WCF, Commerce enters into a memorandum of understanding with specific customers. 
The Census WCF's ability to build and maintain an operating reserve helps to provide price stability for customers throughout the decennial census cycle when the costs of management and administrative services supported through the WCF fluctuate dramatically. Similar to Commerce, Census has the flexibility to provide additional services by billing customers directly. GAO is making seven recommendations to improve the management of the two WCFs, including updating and consolidating WCF guidance, establishing a process to measure WCF performance, and examining opportunities to consolidate certain WCF services. The Commerce Secretary agreed with all of our findings and recommendations and has directed managers of both the departmental WCF and the Census WCF to begin implementing GAO's recommendations.
There are four major steps in the contract-level RADV audit process as reported by CMS: MA contract selection. CMS selects 30 MA organization contracts for contract-level RADV audits, which agency officials stated provides a sufficient representation of contracts (about 5 percent) without imposing unreasonable costs on the agency. An MA organization may have more than one contract selected for a contract-level RADV audit. CMS selects contracts based on diagnosis coding intensity, which the agency defines for each contract as the average change in the risk score component specifically associated with the reported diagnoses for the beneficiaries covered by the contract. That is, increases in coding intensity measure the extent to which the estimated medical needs of the beneficiaries in a contract increase from year to year; thus, contracts whose beneficiaries appear to be getting “sicker” at a relatively rapid rate, based on the information submitted to CMS, will have relatively high coding intensity scores. Contracts with the highest increases in coding intensity are those with beneficiaries whose reported diagnoses increased in severity at the fastest rates. CMS officials stated that the agency adopted this selection methodology to (1) focus the contract-level RADV audits on MA organization contracts that might be more likely to have submitted diagnoses that are not supported by the medical records and (2) provide additional oversight of contracts with the most aggressive coding. To be eligible for a contract-level audit, MA contracts must have had at least three pair-years of data that can be used to distinguish a change in disease risk scores from one year to the next; that is, the contract must have been in place for at least 4 years of continuous payment activity plus the audit year. For each pair year, CMS’s coding intensity calculation excludes beneficiaries not enrolled in the same contract or not eligible for Medicare in consecutive years. 
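The coding intensity measure and audit-eligibility rule described above can be sketched in code. This is an illustrative sketch only; the function names and data layout are assumptions, not CMS's actual implementation, and real risk score data would be far richer.

```python
# Illustrative sketch (not CMS's actual code): coding intensity as the
# average year-over-year change in disease risk scores, computed only over
# beneficiaries enrolled in the contract in both years of each pair.

def coding_intensity(pair_years):
    """pair_years: list of dicts mapping beneficiary ID -> (score_y1, score_y2)
    for consecutive-year pairs. Beneficiaries not enrolled in the same
    contract, or not Medicare-eligible, in both years are assumed to have
    been excluded already, as the text describes."""
    changes = []
    for pair in pair_years:
        for score_y1, score_y2 in pair.values():
            changes.append(score_y2 - score_y1)
    return sum(changes) / len(changes) if changes else 0.0

def audit_eligible(pair_years):
    """A contract needs at least three pair-years of data to be eligible
    for a contract-level RADV audit."""
    return len(pair_years) >= 3
```

Under this sketch, a contract whose continuously enrolled beneficiaries show rapidly rising risk scores would receive a high coding intensity score, matching the report's description of contracts whose beneficiaries appear to be getting "sicker" at a relatively rapid rate.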
CMS ranks contracts by coding intensity and divides them into three categories: high, medium, and low. CMS then randomly selects contracts for audit: 20 from the high category, 5 from the medium category, and 5 from the low category. According to CMS officials, this strategy ensures contracts with the highest coding intensity— considered high risk for improper payments by CMS—have a higher probability for audit while keeping all contracts at risk for review. MA beneficiary sampling. After CMS selects 30 MA contracts to audit, the agency selects the beneficiaries whose medical records will be the focus of review. Up to 201 beneficiaries are chosen from each contract based on the individuals’ risk scores using a stratified random sample: 67 beneficiaries from each of the three risk score groups (highest one-third of risk scores, the middle one-third, and the lowest third). Medical record collection and review. After selecting beneficiaries for review, CMS requests supporting medical record documentation for all diagnoses submitted to adjust risk in the payment year. The MA organization may submit up to five medical records per audited diagnosis. CMS contractors review the submitted medical records to determine if the records support the diagnoses submitted by the MA organizations. If the initial reviewer determines that a diagnosis is not supported, a second reviewer reviews the case. Payment error calculation and extrapolation. When medical record review is completed, CMS extrapolates a payment error rate to the entire contract beginning with contract-level audits of 2011 payments. Each beneficiary’s payment error is multiplied by a sampling weight and the number of months the beneficiary was enrolled in the MA contract during the payment year. 
After these beneficiary-level payment errors are summed, the amount CMS will seek to recover will be reduced by (1) using the lower limit of a 99 percent confidence interval based on the sample and (2) reducing the recovery amount by an FFS adjuster amount that estimates payment errors that would have likely occurred in FFS claims data. Once the recovery amount is finalized, CMS releases contract-level RADV audit finding reports to each audited MA organization, which may dispute the results of medical record review or appeal the audit findings. Beginning with the contract-level RADV audits of 2011 payments, CMS will collect extrapolated overpayments from MA organizations once all appeals are final. Recovery auditors have been used in various industries, including health care, to identify and collect overpayments. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 directed CMS to test the use of RACs to identify overpayments and underpayments through a postpayment review of FFS medical claims and recoup overpayments. The Tax Relief and Health Care Act of 2006 required CMS to implement a permanent national recovery audit contractor program by January 1, 2010, and to compensate RACs using a contingency fee structure under which the RACs are paid from recovered overpayments. The Patient Protection and Affordable Care Act expanded the recovery audit program initiated in Medicare FFS to MA plans under Part C, among other things. In future contract-level RADV audits, CMS also will review diagnoses submitted through MA encounter data. While CMS previously collected diagnoses from MA organizations, in 2012 the agency also began collecting encounter data from MA organizations similar to that submitted on FFS claims. CMS requires MA organizations to submit, via the Encounter Data System, encounter data weekly, biweekly, or monthly depending on their number of enrollees.
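Stepping back to the payment error calculation and extrapolation described above, the arithmetic can be illustrated as follows. This is a simplified sketch under stated assumptions: the normal-approximation confidence bound is an assumption for illustration (the report does not specify CMS's estimator), and the FFS adjuster is treated as a given dollar amount.

```python
import math

# Illustrative sketch of the recovery calculation: each sampled
# beneficiary's payment error is weighted by its sampling weight and months
# of enrollment, the weighted errors are summed, the recovery is reduced to
# the lower limit of a 99 percent confidence interval, and an FFS adjuster
# amount is then subtracted.

Z_99 = 2.576  # two-sided 99% normal critical value (approximation)

def contract_recovery(errors, weights, months, ffs_adjuster):
    """errors[i]: sampled beneficiary i's payment error;
    weights[i]: sampling weight; months[i]: months enrolled in the
    payment year; ffs_adjuster: estimated FFS payment error amount."""
    contributions = [e * w * m for e, w, m in zip(errors, weights, months)]
    n = len(contributions)
    total = sum(contributions)
    mean = total / n
    var = sum((c - mean) ** 2 for c in contributions) / (n - 1)
    se_total = n * math.sqrt(var / n)  # standard error of the estimated total
    lower_bound = total - Z_99 * se_total  # (1) lower limit of 99% CI
    return max(lower_bound - ffs_adjuster, 0.0)  # (2) net of FFS adjuster
```

Because the recovery amount uses the lower confidence limit and subtracts the FFS adjuster, the amount CMS seeks is deliberately conservative relative to the raw extrapolated total.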
Encounter data include diagnosis and treatment information recorded by providers for all medical services and may either originate from claims that providers submit to MA organizations for payment or from MA organizations’ medical record review. CMS started including the diagnosis information from MA encounter data from 2014 dates of service when calculating 2015 enrollee risk scores. While coding intensity scores can be helpful in assessing the likelihood of improper payments for MA contracts, results from the CMS contract-level RADV audits of 2007 payments indicate that the coding intensity scores CMS calculated were not strongly correlated with the percentage of unsupported diagnoses within a contract. The fact that this correlation is not strong reduces the likelihood that contracts selected for audit would be those most likely to yield large amounts of improper payments and hampers CMS’s goal of using the audits to recover improper payments. In addition, internal control standards for federal agencies state that agencies should use and communicate quality information in achieving program goals. Figure 1 shows, for example, that CMS reported that the percentage of unsupported diagnoses (36.0 percent) among the high coding intensity contracts it audited was nearly identical to the percentage of unsupported diagnoses (35.7 percent) among the medium coding intensity contracts audited. In addition, 7 contracts in the high coding intensity group had unsupported diagnosis rates below 30 percent, including the contract with the highest coding intensity score. Several shortcomings in CMS’s methods for calculating coding intensity could have weakened the correlation between the degree of coding intensity and the percentage of improper payments. These shortcomings and their potential effects are as follows. 
CMS’s coding intensity calculation may be based on noncomparable coding intensity scores across contracts because (1) the years of data used for each contract may not be the same and (2) coding intensity scores are not standardized to control for year-to-year differences. First, although CMS officials stated that the agency requires at least three pair-years of data for each contract, the agency includes data from all available years for each contract, which may vary between contracts. Because the growth in risk scores was lower in the MA program in earlier years among beneficiaries who continuously enrolled in the program, CMS’s inconsistent standard of years measured for each contract would tend to produce higher coding intensity scores for contracts that entered the MA market during periods of higher risk score growth. Among beneficiaries who enrolled in MA in consecutive years, the growth in average risk scores was 0.106 from 2004 through 2006, 0.119 from 2006 through 2010, and 0.132 from 2010 through 2013. Second, CMS officials stated that the agency does not standardize its coding intensity data relative to a measure of central tendency. Because CMS’s coding intensity calculation does not account for the expected increase in risk scores during each period of growth, changes in risk scores may be more volatile from year to year than they would likely be if standardized or indexed to a measure of central tendency. CMS’s coding intensity calculation does not distinguish between the diagnoses that were likely coded by providers and the diagnoses that were likely revised by MA organizations. MA organizations may receive diagnoses from providers that are related to services rendered to MA beneficiaries. Because these diagnoses are submitted by providers, the medical records they create may be more likely to support these diagnoses compared with diagnoses that are subsequently coded by the MA organization through medical record chart reviews.
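The standardization the second shortcoming calls for could, for illustration, look like a z-score of each contract's risk score change relative to the all-contract mean for the same year pair. This is one plausible form of "indexing to a measure of central tendency"; the report does not prescribe a specific formula, so the approach and names below are assumptions.

```python
import statistics

# Illustrative: standardize each contract's risk score change for a given
# year pair against all contracts' changes for that same pair, so that
# periods of high market-wide risk score growth do not inflate the scores
# of contracts that happened to enter the market during those periods.

def standardized_changes(changes_by_contract):
    """changes_by_contract: dict of contract_id -> risk score change for
    one year pair. Returns z-scores relative to the all-contract mean."""
    values = list(changes_by_contract.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        # All contracts changed identically; no contract stands out.
        return {c: 0.0 for c in changes_by_contract}
    return {c: (v - mean) / sd for c, v in changes_by_contract.items()}
```

Under such a scheme, a contract whose risk scores grew by the market-wide average in a high-growth year would score near zero rather than appearing to have high coding intensity.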
For future years, CMS has an available method to distinguish between diagnoses likely submitted by providers to MA organizations and diagnoses that were likely later added by MA organizations. CMS’s Encounter Data System provides a way for MA organizations to designate supplemental diagnoses that the organization added or revised after conducting medical record review. CMS has not outlined plans for incorporating encounter data into its contract selection methodology, even though the encounter data could help target the submitted diagnoses that may be most likely related to improper payments in the future. CMS follows contracts that are renewed or consolidated under a different existing contract within the same MA organization; however, the agency’s coding intensity calculation does not carry the prior contract’s risk scores forward into the renewed contract. This may result in overestimated improper payment risk if MA organizations move beneficiaries with higher risk scores—such as those with special needs—into one consolidated contract. CMS’s contract selection methodology did not (1) always target contracts with the highest coding intensity scores, (2) use results from prior contract-level RADV audits, (3) account for contract consolidation, and (4) account for contracts with high enrollment. These shortcomings are impediments to CMS’s goal of recovering improper payments and are counter to federal internal control standards, which require that agencies use quality information to achieve their program goals. For the 2011 contract-level RADV audits, CMS used a contract selection methodology that did not focus on contracts with the highest coding intensity scores. While we found that coding intensity scores are not strongly correlated with diagnostic discrepancies, they are somewhat correlated. CMS failed to fully consider that correlation for the 2011 contract-level RADV audit.
For that audit, CMS officials stated that 20 of the 30 contracts were chosen because they were among the top third of all contracts in coding intensity, but we found that many of the 20 contracts were not at the highest risk for improper payments according to CMS’s estimate of coding intensity. Only 4 of the 20 contracts ranked among the highest 10 percent in coding intensity, while 8 of the 20 contracts ranked below the 75th percentile in the coding intensity distribution (see fig. 2). In addition, CMS chose 5 of the 30 contracts because they were among the bottom third of all contracts in coding intensity, even though CMS’s contract-level RADV audits of 2007 payments found that all contracts in the lowest third of the agency’s coding intensity calculation had a below-average percentage of unsupported diagnoses. CMS officials stated that the RADV contract selection methodology includes these contracts to show that all contracts are at risk of being audited. However, officials also stated that MA organizations are not informed of their contracts’ coding intensity relative to all other MA contracts; thus, MA organizations could not be certain their contracts would not be audited even if CMS announced that it would no longer audit low coding intensity contracts. According to agency officials, CMS’s 2011 contract-level RADV contract selection methodology also did not consider results from the agency’s prior RADV audits, potentially overlooking information indicating contracts with known improper payment risk. Thus, contracts with the highest rates of unsupported diagnoses in the 2007 contract-level RADV audits were not among those selected for 2011 contract-level RADV audits. While CMS selected 6 contracts for 2011 that also underwent 2007 contract-level RADV audits, only 1 of these contracts was among the 10 with the highest rates of unsupported diagnoses in 2007.
For the 2011 contract-level RADV audits, CMS officials stated that the agency selected 6 MA contracts because the HHS Office of Inspector General had conducted audits of 2007 payments on those contracts, but CMS did not know the rates of unsupported diagnoses for those contracts and did not determine which of them were at high risk of improper payments. By not considering results from prior contract-level RADV audits, CMS’s contract selection methodology also did not account for contract consolidation. An MA organization may have more than one contract in a service area; further, it may no longer have a contract that underwent a prior RADV audit but continue to operate another contract within the same service area. For example, the contract with the highest rate of unsupported diagnoses in the 2007 contract-level RADV audit is no longer in place, but the MA organization continues to operate a different contract that includes the service area from its prior contract. Thus, without considering all of an MA organization’s contracts in that service area, CMS cannot audit the beneficiaries affiliated with the highest percentage of unsupported diagnoses in 2007. Although the potential dollar amount of improper payments to MA organizations with high rates of unsupported diagnoses is likely greater when contract enrollment is large, CMS officials stated that the 2011 contract-level RADV contract selection methodology did not account for contracts with high enrollment. In 2011, the median enrollment among MA contracts was about 5,000, while enrollment at the 90th percentile was nearly 45,000. Some MA contracts with large enrollment had high rates of unsupported diagnoses under prior contract-level RADV audits. For example, 5 of the 10 MA contracts with the highest rates of unsupported diagnoses for the 2007 contract-level RADV audits had 2011 enrollment above the 90th percentile.
CMS officials reported that current contract-level RADV audits have been ongoing for several years, including the appeals associated with the 2007 contract-level RADV audits. (See fig. 3.) For audits of 2007 payments, CMS notified MA organizations in November 2008 that their contracts would be audited but did not complete medical record review until approximately 4-1/2 years later in March 2013. Similarly, 2011 contract-level RADV audits had not been completed as of August 2015. CMS notified MA organizations of contract audit selection in November 2013 but did not begin medical record review for these contracts until May 2015. CMS officials said the agency will start collecting payments from the 2011 contract-level RADV audits in fiscal year 2016. As the agency is in the medical record review phase, appeals have not yet started. This slow progress in completing audits is contrary to CMS’s goal to conduct contract-level RADV audits on an annual basis and slows its recovery of improper payments. In addition, CMS lacks a timetable that would help the agency to complete these contract-level audits on an annual cycle. In contrast, the national RADV audits that CMS annually conducts to estimate the national MA improper payment rate under IPIA do follow a timetable, which could provide the agency with a model for completing contract-level RADV audits annually. CMS has not followed established project management principles in this regard, which call for developing an overall plan to meet strategic goals and to complete projects in a timely manner. In addition to the lack of a timetable, other factors have lengthened the time frame of the contract-level audit process.
First, CMS’s sequential notification to MA organizations—first identifying which contracts had been selected for audit and then later identifying which beneficiaries under these contracts would be audited—hinders the agency’s goal of conducting annual contract-level audits because it creates a time gap. For example, for the 2011 contract-level audits, CMS officials stated that the agency notified MA organizations about the beneficiaries whose diagnoses would be audited 3 months after notifying these same MA organizations about which contracts had been selected for audit. Both the selection of contracts and beneficiaries currently require risk score and beneficiary enrollment data. Second, ongoing performance issues with the web-based system CMS uses to receive medical records submitted by MA organizations for contract-level RADV audits caused CMS to substantially lengthen the time frame for MA organizations to submit these medical records for the 2011 contract-level RADV audits. According to CMS officials, for the 2007 contract-level RADV audits, MA organizations submitted medical records for 98 percent of all audited diagnoses within a 16-week time frame. However, system performance issues with the Central Data Abstraction Tool (CDAT)—CMS’s web-based system for transferring and receiving contract-level RADV audit data—led CMS to more than triple the medical record submission time frame for the 2011 contract-level RADV audits to over 1 year. Officials from AHIP and the two MA organizations we interviewed indicated that CDAT often proved inoperable, with significant delays and errors in uploading files. CMS officials stated that the agency suspended the use of CDAT for 8 months and has implemented steps to monitor and test CDAT’s performance on an ongoing basis.
However, officials from MA organizations stated that CDAT continued to experience significant delays in uploading files after CMS reopened CDAT for use. Officials of one MA organization suspected that the system may have been overwhelmed because CMS increased the number of medical records allowed per audited diagnosis from one to five between the 2007 and 2011 contract-level audits. For future medical record submissions, CMS officials subsequently told us that they plan to use a 20-week submission period and did not indicate to us any plans for an additional medical record submission method if CDAT’s problems persisted. CMS’s Medicare FFS program has increasingly used the Electronic Submission of Medical Documentation System (ESMD) to transfer medical records reliably from providers to Medicare contractors since 2011. Both ESMD and CDAT allow for the electronic submission of medical records by securely uploading and submitting medical record documentation in a portable document format file. CMS officials stated that the agency did not use ESMD to transfer medical records primarily because it could not also be used for medical record review like CDAT. However, medical records could be reviewed without being transferred through CDAT. The transfer of medical records has been the main source of delay in completing CMS’s contract-level audits of 2011 payments, and CMS has not assessed the feasibility of updating ESMD for transferring medical records in contract-level RADV audits. While ESMD was not available when CMS began its 2007 contract-level RADV audits, the system has demonstrated a greater capacity for transferring medical records than CDAT. In fiscal year 2014, providers used ESMD to transfer nearly 500,000 medical records—far beyond the capacity necessary for contract-level RADV audits. In interviews, officials of two FFS RACs stated that ESMD was very reliable and did not have technical issues that affected audits. 
In addition, CMS has not applied time limits to contract-level RADV reviewers for completing medical record reviews. These reviews took 3 years for the 2007 contract-level RADV audits. In contrast, CMS generally requires its Medicare Administrative Contractors (MAC)—a type of FFS contractor—to make postpayment audit determinations within 60 days of receiving medical record documentation. Because CMS has not required that contract-level RADV auditors complete medical record reviews within a specific time period, the agency is hindering its ability to reach its goal of conducting annual contract-level RADV audits. Disputes and appeals stemming from the 2007 contract-level RADV audit findings have been ongoing for years and the lack of time frames at the first level of the appeal process hinders CMS from achieving its goal of using contract-level audits to recoup improper payments. Nearly all MA organizations whose contracts were included in the 2007 contract-level RADV audit cycle disputed at least one diagnosis finding following medical record review, and five MA organizations disputed all the findings of unsupported diagnoses. CMS officials stated that MA organizations in total disputed 624 (4.3 percent) of the 14,388 audited diagnoses, and that the determinations on these disputes, which were submitted starting March 2013 through May 2013, were not complete until July 2014. If an MA organization disagrees with the medical record dispute determination, the MA organization may appeal to a hearing officer. This appeal level is called review by a CMS hearing officer. Because the medical record dispute process for the 2007 contract-level RADV audit cycle took nearly 1-1/2 years to complete, CMS officials stated that the agency did not receive all 2007 second-level appeal requests for hearing officer review until August 2014. 
CMS officials stated that the hearing officer adjudicated or received a withdrawal request from the MA organization for 377 of the 624 appeals (60 percent) from August 2014 through September 2015. Appeals for the 2011 contract-level RADV audit cycle have yet to begin, as CMS officials stated that the agency is currently in the process of reviewing medical records submitted by MA organizations for the 2011 contract-level RADV audits. CMS officials stated that the medical record dispute process for the 2011 contract-level RADV audit cycle will differ from the process used during the 2007 cycle in certain respects. In particular, for the 2011 RADV audit cycle, the medical record dispute process will be incorporated into the appeal process instead of being part of the audit process, as it was during the 2007 cycle. The new first-level appeal process, in which an MA organization can submit a written request for an independent reevaluation of the RADV audit decision, will be called the reconsideration stage. This change will allow MA organizations to request reconsideration of medical record review determinations simultaneously with the appeal of payment error calculations, rather than sequentially, as was the case during the 2007 contract-level RADV audit cycle. While such a change may be helpful, the new process does not establish time limits for when reconsideration decisions must be issued. In contrast, CMS generally imposes a 60-day time limit on MA organization decisions regarding beneficiary payment first-level appeals in MA. CMS measures the timeliness of decisions regarding MA beneficiary first-level appeals to assist the agency in assigning quality performance ratings and bonus payments to MA organizations. Similarly in Medicare FFS, officials generally must issue decisions within 60 days of receiving first-level appeal requests. 
CMS officials stated that due to the agency’s limited experience with the contract-level RADV audit process, time limits were not imposed at the reconsideration appeal level and that this issue may be revisited once CMS completes a full contract-level RADV audit cycle. The lack of explicit time frames for appeal decisions at the reconsideration level hinders CMS’s collection of improper payments as the agency cannot recover extrapolated overpayments until the MA organization exhausts all levels of appeal and is inconsistent with established project management principles. CMS has not expanded the RAC program to MA, as it was required to do by the end of 2010 by the Patient Protection and Affordable Care Act. CMS issued a request for industry comment regarding implementation of the MA RAC on December 27, 2010, seeking stakeholder input regarding potential ways improper payments could be identified in MA using RACs. CMS reported that it had received all stakeholder comments from this request by late February 2011. CMS issued a request for proposals for the MA RAC in July 2014. As defined by the Statement of Work in that request, the MA RAC would audit improper payments in the audit areas of Medicare secondary payer, end-stage renal disease, and hospice. In October 2014, CMS officials told us that the agency did not receive any proposals to conduct the work in those three audit areas and that CMS’s goal was to reissue the MA RAC solicitation in 2015. In November 2015, CMS officials told us that the agency is no longer considering Medicare secondary payer, end-stage renal disease, and hospice services as audit areas for the MA RAC. Instead, the officials told us that CMS was exploring whether and how an MA RAC could assist CMS with contract-level RADV audits. In December 2015, CMS issued a request for information seeking industry comment regarding how an MA RAC could be incorporated into CMS’s existing contract-level RADV audit framework. 
In the request document, CMS stated that it is seeking an MA RAC to help the agency expand the number of MA contracts subject to audit each year. In the request, CMS stated that its ultimate goal is to have all MA contracts subject to either a contract-level RADV audit or what it termed a condition-specific RADV audit for each payment year. Officials we interviewed from three of the current Medicare FFS RACs all acknowledged that their organizations had the capacity and willingness to conduct contract-level RADV audits. Despite its recent request for information, CMS does not have specific plans or a timetable for including RACs in the contract-level RADV audit process. Established project management principles call for developing an overall plan and monitoring framework to meet strategic goals. A plan and timetable would help guide CMS’s efforts in incorporating a RAC in MA and help hold the agency accountable for implementing this requirement from the Patient Protection and Affordable Care Act. Once the requirement is implemented, CMS could leverage the MA RAC in order to increase the number of MA organization contracts audited. CMS’s recovery of improper payments has been restricted because it has not established an MA RAC. For example, CMS currently plans to include 30 MA contracts in contract-level RADV audits for each payment year, about 5 percent of all contracts. Limitations in CMS’s processes for selecting contracts for audit, in the timeliness of CMS’s audit and appeal processes, and in the agency’s plans for using MA RACs to assist in identifying improper payments hinder the accomplishment of its contract-level RADV audit goals: to conduct annual contract-level audits and recover improper payments. These limitations are also inconsistent with federal internal control standards and established project management principles. 
Our analyses of these processes and plans suggest that CMS will likely recover a small portion of the billions of dollars in MA improper payments that occur every year. Shortcomings in CMS’s MA contract selection methodology may result in audits that are not focused on the contracts most likely to be disproportionately responsible for improper payments. Furthermore, CMS’s RADV time frames are so long that they may hamper the agency’s efforts to conduct audits annually, collect extrapolated payments efficiently, and use audit results to inform future RADV contract selection. By CMS’s own estimates, conducting annual contract-level audits would potentially allow CMS to recover hundreds of millions of dollars more in improper payments each year. Agency officials have expressed concerns about the intensive agency resources required to conduct contract-level RADV audits. To address the resource requirements of conducting contract-level audits, CMS intends to leverage the MA RACs for this purpose; however, the agency has not outlined how it plans to incorporate RACs into the contract-level RADV audits and is in the early stages of soliciting industry comment regarding how to do so. As CMS continues to implement and refine the contract-level RADV audit process, we recommend that the Administrator of CMS take actions in the following five key areas to improve the efficiency and effectiveness of reducing and recovering improper payments. 
First, to improve the accuracy of CMS’s calculation of coding intensity, the Administrator should modify that calculation by taking actions such as the following: including only the three most recent pair-years of risk score data for all contracts; standardizing the changes in disease risk scores to account for the expected increase in risk scores for all MA contracts; developing a method of accounting for diagnostic errors not coded by providers, such as requiring that diagnoses added by MA organizations be flagged as supplemental diagnoses in the agency’s Encounter Data System to separately calculate coding intensity scores related only to diagnoses that were added through MA organizations’ supplemental record review (that is, were not coded by providers); and including MA beneficiaries enrolled in contracts that were renewed from a different contract under the same MA organization during the pair-year period. Second, the Administrator should modify CMS’s selection of contracts for contract-level RADV audits to focus on those contracts most likely to have high rates of improper payments by taking actions such as the following: excluding contracts with low coding intensity scores; selecting more contracts with the highest coding intensity scores; selecting contracts with high rates of unsupported diagnoses in prior contract-level RADV audits; if a contract with a high rate of unsupported diagnoses is no longer in operation, selecting a contract under the same MA organization that includes the service area of the prior contract; and selecting some contracts with high enrollment that also have either high rates of unsupported diagnoses in prior contract-level RADV audits or high coding intensity scores. 
Third, the Administrator should enhance the timeliness of CMS’s contract-level RADV process by taking actions such as the following: closely aligning the time frames in CMS’s contract-level RADV audits with those of the national RADV audits the agency uses to estimate the MA improper payment rate; reducing the time between notifying MA organizations of contract audit selection and notifying them about the beneficiaries and diagnoses that will be audited; improving the reliability and performance of the agency’s process for transferring medical records from MA organizations, including assessing the feasibility of updating ESMD for use in transferring medical records in contract-level RADV audits; and requiring that CMS contract-level RADV auditors complete their medical record reviews within a specific number of days comparable to other medical record review time frames in the Medicare program. Fourth, the Administrator should improve the timeliness of CMS’s contract-level RADV appeal process by requiring that reconsideration decisions be rendered within a specified number of days comparable to other medical record review and first-level appeal time frames in the Medicare program. Fifth, the Administrator should ensure that CMS develops specific plans and a timetable for incorporating a RAC in the MA program as mandated by the Patient Protection and Affordable Care Act. We provided a draft of this report to HHS for comment. HHS provided written comments, which are printed in appendix I. HHS concurred with our recommendations. In its comment letter, HHS also reaffirmed its commitment to identifying and correcting improper payments in the MA program. HHS also provided technical comments, which we incorporated as appropriate. Based on HHS’s technical comments, we revised our suggested actions for how HHS could meet GAO’s first recommendation. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretary of Health and Human Services. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. James Cosgrove, (202) 512-7114 or cosgrovej@gao.gov. In addition to the contact named above, individuals making key contributions to this report include Martin T. Gahart, Assistant Director; Luis Serna III; and Marisa Beatley. Elizabeth T. Morrison and Jennifer Whitworth also provided valuable assistance.
In 2014, Medicare paid about $160 billion to MA organizations to provide health care services for approximately 16 million beneficiaries. CMS, which administers Medicare, estimates that about 9.5 percent of its payments to MA organizations were improper, according to the most recent data—primarily stemming from unsupported diagnoses submitted by MA organizations. CMS currently uses RADV audits to recover improper payments in the MA program. GAO was asked to review the extent to which CMS is addressing improper payments in the MA program. This report examines the extent to which (1) CMS's contract selection methodology for RADV audits facilitates the recovery of improper payments, (2) CMS has completed RADV audits and appeals in a timely manner, and (3) CMS has made progress toward incorporating RACs into the MA program to identify and assist with improper payment recovery. In addition to reviewing research literature and agency documents, GAO analyzed data from ongoing RADV audits of 2007 and 2011 payments—CMS's two initial contract-level RADV audits. GAO also interviewed CMS officials. Medicare Advantage (MA) organizations contract with the Centers for Medicare & Medicaid Services (CMS) to offer beneficiaries a private plan alternative to the original program and are paid a predetermined monthly amount by Medicare for each enrolled beneficiary. These payments are risk adjusted to reflect each enrolled beneficiary's health status and projected spending for Medicare-covered services. CMS conducts risk adjustment data validation (RADV) audits of MA contracts which facilitate the recovery of improper payments from MA organizations that submitted beneficiary diagnoses for payment adjustment purposes that were unsupported by medical records. With a separate national audit, CMS estimated that it improperly paid $14.1 billion in 2013 to MA organizations, primarily because of these unsupported diagnoses. 
GAO found that CMS's methodology does not result in the selection of contracts for audit that have the greatest potential for recovery of improper payments. First, CMS's estimate of improper payment risk for each contract, which is based on the diagnoses reported for the beneficiaries in that contract, is not strongly correlated with unsupported diagnoses. Second, CMS does not use other available information to select the contracts at the highest risk of improper payments. As a result, only 4 of the 30 contracts CMS selected for its RADV audit of 2011 payments were among the 10 percent of contracts estimated by CMS to be at the highest risk for improper payments. These limitations are impediments to CMS's goal of recovering improper payments and do not align with federal internal control standards, which require that agencies use quality information to achieve their program goals. CMS's goal of eventually conducting annual RADV audits is in jeopardy because its two RADV audits to date have experienced substantial delays in identifying and recovering improper payments. RADV audits of 2007 and 2011 payments have taken multiple years and are still ongoing for several reasons. First, CMS's RADV audits rely on a system for transferring medical records from MA organizations that has often been inoperable. Second, CMS audit procedures have lacked specified time requirements for completing medical record reviews and for other steps in the RADV audit process. In addition, CMS has not established time frames for appeal decisions at the first level of the MA appeal process, as it has done in other contexts. CMS did not expand the recovery audit program to MA by the end of 2010, as required by the Patient Protection and Affordable Care Act. RACs have been used in other Medicare programs to recover improper payments for a contingency fee. 
In December 2015, CMS issued a request for information seeking industry comment on how an MA RAC could be incorporated into the RADV audit framework. CMS noted in its request that incorporating a RAC into the RADV framework would increase the number of MA contracts audited each year. CMS currently includes 30 MA contracts in each RADV audit, about 5 percent of all MA contracts. Despite the importance of increasing the number of contracts audited, CMS does not have specific plans or a timetable for incorporating RACs into the RADV audit framework, contrary to established project management principles, which stress the importance of developing an overall plan to meet strategic goals. GAO is making five recommendations to CMS to improve its processes for selecting contracts to include in the RADV audits, enhance the timeliness of the audits, and incorporate RACs into the RADV audits. HHS concurred with the recommendations.
Welfare reform legislation, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), eliminated the federal entitlement to cash assistance under the Aid to Families with Dependent Children (AFDC) program and replaced it with a program of block grants to states known as the Temporary Assistance for Needy Families (TANF) program. At the same time, Congress amended the Child Care and Development Block Grant Act of 1990, and required HHS to consolidate federal child care funds and administer them as a unified program. HHS named this program the Child Care and Development Fund. The intent of CCDF is to support state-administered child care programs for both families receiving public assistance and low-income working families not receiving public assistance. Since welfare reform, federal expenditures for CCDF have increased significantly from $2.1 billion in fiscal year 1996 to $5.3 billion in fiscal year 2000. In fiscal year 2002, about $4.8 billion was appropriated for CCDF. States also contributed to CCDF, and their funding for this program has nearly doubled from about $1.0 billion in fiscal year 1996 to $1.9 billion in fiscal year 2000. The average number of children who received subsidized child care each month also increased from about 1.2 million in fiscal year 1996 to 1.7 million in fiscal year 2000. States receive CCDF funds from potentially four funding streams. Each state’s annual federal allocation consists of separate discretionary, mandatory, and matching funds. A state does not have to obligate or spend any state funds to receive the discretionary and mandatory funds. However, to receive federal matching funds—and thus its full CCDF allocation—a state must maintain its program spending at a specified level, referred to as a state’s maintenance of effort, and spend additional state funds above that level. 
In addition to consolidating federal funds, PRWORA significantly changed federal child care policy by giving states maximum flexibility to design child care programs for low-income families. States have broad discretion to establish subsidy amounts, family co-payments, and eligibility limits. States set maximum reimbursement rates that consist of two parts—the state subsidy paid directly to a provider and the co-payment the family pays to a provider. These co-payments vary according to family income and size, and the amount of the state subsidy declines as the family co-payment rises. Co-payments can be waived for any eligible family whose income is at or below the federal poverty threshold, including those in the TANF program, and for children in protective services on a case-by-case basis. As of March 2001, 23 states waived co-payments for TANF families engaged in TANF or other work activities. According to federal law, states can set income eligibility limits up to 85 percent of the state median income (in 2000, this limit ranged from a low of $24,694 for West Virginia households to a high of $43,941 in Maryland), but most states set eligibility limits below that level. In the three states we visited, Oregon reported setting its income eligibility limit at 70 percent of the state median income, Maryland at 50 percent, and Illinois at 43 percent. States are not required to provide assistance to all families that fall within state-established eligibility guidelines, but they are required to give priority to children in very low-income families and to children with special needs. The program serves children up to age 13, but HHS allows states to provide child care services to children with special needs up to age 19. CCDF subsidies can be used to obtain child care from various types of providers such as child care centers and family homes. Child care centers, group homes, and family homes are most often regulated but some are legally exempt depending on the state. 
Table 1 provides descriptions of the types of child care providers generally used by subsidized families. States must provide subsidies through vouchers, but some states also made child care available from providers who have contracts with them. Two of the three states we visited made this option available to subsidized families. Illinois had contracts with some child care centers to serve children of subsidized families. As of June 2000, Illinois reported that contracted facilities served about 12 percent of the total number of children in the state’s subsidized child care program. Oregon contracted with child care providers primarily to serve children from targeted, at-risk families. Periodically, states adjust their reimbursement rates, co-payment levels, and income eligibility limits. These policy decisions can affect families’ access to child care providers. For example, if states set reimbursement rates too low, some providers might choose not to serve children of subsidized families. On the other hand, if states set reimbursement rates too high, some providers might replace children of nonsubsidized families with those of subsidized families. Co-payment levels are also important. For example, in Oregon, one study indicated that, in some cases, a family’s economic position worsened as a parent moved from a job paying $6 per hour to one paying $8 per hour because increases in the family’s earnings were more than offset by decreases in child care and other subsidies. HHS is charged with providing oversight, technical assistance, and guidance to states, which have responsibility for administering CCDF programs. HHS requires states to submit biennial state CCDF plans that include, among other things, certification that within the past 2 years they performed a market rate survey. 
A market rate survey is a tool to be used by states to obtain information about providers, including the fees they charge, the type of child care they provide, the age groups of the children they serve, and where they are located. Although states are required to conduct market rate surveys every 2 years and consider the results, they are not compelled to use them in setting child care reimbursement rates. States are also required to certify that they met the equal access provision, a part of the federal law that requires states to set rates sufficient to provide eligible families with access to child care services comparable to that of families that do not receive subsidies. While HHS reviews and approves CCDF state plans, states have substantial discretion in determining the basis on which they will certify to HHS that they meet the equal access provision. HHS has authority to sanction states if they do not substantially comply with the law, but HHS officials told us that these sanctions have never been used. HHS provided guidance indicating that co-payment levels at no more than 10 percent of family income can be considered affordable and that reimbursement rates set at least at the 75th percentile of providers’ fees can be presumed to provide equal access. In this case, the maximum rate paid by the state and the family would be equal to or greater than the fees charged by 75 percent of providers or for 75 percent of providers’ slots. However, states are free to set co-payments and reimbursement rates at other levels. States used the results of market rate surveys to help set child care reimbursement rates, but also reported considering other factors such as budgets in rate setting. Consistent with HHS guidance, 40 states reported that the survey results were an important consideration when setting reimbursement rates. However, 10 states did not use their most recent surveys in setting current reimbursement rates. 
States establish different rate schedules for geographical areas and different age groups of children. To establish their rates, states often set maximum reimbursement rates at a percentile of the distribution of providers’ fees. However, in setting their child care reimbursement rates, many states considered their budgets and other policy goals. Thirty-two states reported that their current budgets were of great importance when setting reimbursement rates. Other factors that states considered important in setting their rates included achieving policy goals such as expanding eligibility, improving child care quality, and increasing the supply of certain types of child care providers. Most states reported using their current market rate survey results to help set reimbursement rates; some states reported that they referred to less current survey information. Forty states reported that the results of their most recent market rate survey were very important in determining their current child care reimbursement rates. However, while 10 states reported that they had completed current market rate surveys as required by regulations, they used less current market rate survey results to set their rates. The market rate surveys they used were not completed within 2 years of their approved fiscal year 2001 CCDF plans. Of these, 3 states (Michigan, North Dakota, and West Virginia) considered 1999 market rate survey results, 5 states (Arizona, District of Columbia, Illinois, Iowa, and North Carolina) reported considering results from 1998 market rate surveys, 1 state (New Hampshire) considered results from 1994, and 1 state (Missouri) considered market rate survey results from 1991 and 1996. States reported that their market rate survey results primarily included data on providers’ fees from regulated child care center, family home, and group home providers. For example, 48 states surveyed regulated child care centers and 47 states surveyed regulated family home providers. 
In contrast, 24 states surveyed unregulated providers. Of these, 15 states reported that they obtained information about child care fees from relatives and/or other unregulated providers, such as religious-affiliated child care providers. (See fig. 2 for the types of providers that states indicated were included in their market rate surveys.) After examining the fees reported in their surveys, state officials decided whether and how to divide the state into regions based on variations in providers’ fees. State officials may use a variety of methods for dividing the state into regions. As shown in figure 3, 18 states reported setting rates for multicounty regions, and 16 states set rates based on political boundaries, such as counties or municipalities. Illinois and Maryland, two of the states we visited, established reimbursement rate schedules that combined areas into multicounty regions. These regions generally consisted of counties that were not necessarily contiguous to one another but were designed to capture providers who charged similar fees. Oregon, the third state we visited, grouped zip codes with comparable providers’ fees into three reimbursement rate areas. Conversely, 14 states reported that they did not pay different reimbursement rates to providers based on their location. In some cases, officials reported they did not divide the state into regions because there was little variation in fees across the state. Most states also reported setting distinct child care reimbursement rates based on the age group of the child needing care. The states we visited, for example, had differing rates for infants and school-aged children. In addition, separate rates were often used for child care providers who accepted special needs children, exceeded quality standards, or offered evening and/or weekend care. For example, 24 states reported that they had distinct child care reimbursement rates for providers whose care exceeded state quality standards. 
In setting their reimbursement rates, most states ranked providers’ fees by type and location of care from highest to lowest, and set maximum reimbursement rates at a percentile of these fees. HHS suggested that states set their maximum child care reimbursement rate at least at the 75th percentile based on the most recent market rate survey results. In responding to our survey, 21 states indicated that they did so. An additional 7 states indicated that they set rates at least at the 75th percentile but used a more dated survey. While states most often reported that market rate survey results were very important in setting child care reimbursement rates, they also reported that their state budget and policy goals were important factors considered when setting rates. For example, 32 states reported that the amount of their current budget was of great importance when setting child care reimbursement rates. Budgets are important because they establish a financial framework for developing programs and policy goals. State budget processes and their contributions to CCDF affect the amount of money that states choose to spend on child care. During the budget process, trade-offs occur when state decision makers must balance policy goals and program needs against available resources. One potential result of such trade-offs could be that as resources available for child care programs become constrained, more states might be reluctant to adjust their maximum reimbursement rates in line with recent market rate surveys. However, in our survey, child care officials in 27 states indicated that they expected their child care budgets to remain the same, and child care officials in 11 states expected their child care budgets to increase in the next fiscal year. Some state officials told us they used income limits and family co-payments to manage child care program expenditures and to target child care subsidies. 
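The percentile approach described above can be sketched in a few lines. The function below is a hypothetical illustration (the name `percentile_rate` and the fee figures are invented, not any state's actual procedure): it ranks providers' monthly fees and returns the fee at a target percentile using the nearest-rank method, so the resulting maximum rate equals or exceeds the fees charged by at least that share of providers.

```python
# Illustrative sketch of percentile-based rate setting. The fees below
# are hypothetical; HHS guidance suggests setting the maximum rate at
# least at the 75th percentile of surveyed providers' fees.

def percentile_rate(fees, percentile=75):
    """Return the fee at the given percentile of the fee distribution
    (nearest-rank method), i.e., the lowest maximum rate that covers
    at least that share of providers."""
    ranked = sorted(fees)
    # ceil(percentile * n / 100) gives the 1-based rank; subtract 1 for
    # a zero-based index, flooring at 0 for very small percentiles.
    idx = max(0, -(-percentile * len(ranked) // 100) - 1)
    return ranked[idx]

# Hypothetical monthly fees reported by 8 providers in one rate region.
fees = [300, 320, 350, 380, 400, 420, 450, 500]
rate = percentile_rate(fees, 75)                 # 75th-percentile rate
covered = sum(f <= rate for f in fees) / len(fees)  # share of providers covered
```

With these invented fees, the 75th-percentile rate is $420, and exactly 6 of the 8 providers (75 percent) charge fees within it, matching the "equal to or greater than the fees charged by 75 percent of providers" interpretation above.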
Under CCDF, states are permitted to set income eligibility limits to include families whose incomes are up to 85 percent of the state median income (SMI), but most states set their limits below the allowable federal level. They may do so to accommodate state budgetary constraints, to target poorer families, or both. In our survey, states reported setting income eligibility limits that ranged from 42 percent of the SMI (in Missouri) to 105 percent of the SMI (in Pennsylvania). States also varied co-payments to accommodate their budgets and to target certain families. In Oregon, for example, as our hypothetical family’s income increased from 75 percent to 150 percent of the federal poverty threshold, required co-payments increased from 6 percent to 18 percent of monthly income. States also considered other child care policy goals in setting their reimbursement rates. Thirty-eight states reported that they used reimbursement rates to encourage child care providers to achieve specific results such as expanding eligibility and improving child care quality. Specifically, 29 states reported that they used reimbursement rates to encourage providers to increase staff education or training, 26 states used rates to encourage providers to make general improvements in quality, 20 states used rates to encourage providers to increase access to their facilities for special needs children, and 18 states reported using reimbursement rates to encourage improvements in providers’ facilities that promote children’s health and safety. In some states, providers received higher reimbursement rates for achieving these results. The three states we visited used reimbursement rates in different ways in pursuit of specific policy goals within their child care programs. 
For example, Illinois encouraged child care centers to increase the number of child care slots available to low-income families with infants and toddlers by paying up to an additional 10 percent to center providers who served a large number of subsidized children 2 years old or younger. For fiscal year 2000, the state reported that an additional 390 slots for subsidized infants and toddlers were added as a result of this initiative. Illinois also implemented a statewide initiative that paid providers an additional subsidy amount to care for children with disabilities. Based on receiving the increased subsidies, providers were expected to purchase adaptive equipment and obtain specialized training to improve the care they gave these children. In Maryland, a tiered reimbursement rate program—paying different rates to child care providers based on program accreditation, staff credentialing, continued training, staff compensation, and other achievements—was established to improve the qualifications of the child care workforce, encourage parent involvement, and promote a high level of program quality. Few states reported having evaluated the effects of such uses of reimbursement rates. In the nine communities we visited, we calculated that the maximum reimbursement rates afforded hypothetical 2-person families widely different levels of access to child care providers who accepted the subsidy. The state reimbursement rates, which consist of the states’ subsidies and families’ co-payments, allowed hypothetical families, for example, to purchase care from 6 percent to 71 percent of family home providers who accepted the subsidy in these nine communities. Families generally could afford child care from a greater percentage of providers in urban communities than suburban and rural communities. In all three states, the states’ subsidies decreased as families’ incomes increased; this sometimes resulted in steep increases in family co-payments. 
These required co-payments ranged from 1 percent to 18 percent of a hypothetical family’s income, varying by the level of income. However, reimbursement rates may not strictly limit families’ choices among child care providers. State officials reported that families were sometimes able to make financial arrangements with formal, regulated providers whose fees exceeded state reimbursement rates. In addition, families could obtain care they needed or wanted from informal providers who were generally reimbursed at lower rates than states paid formal, regulated providers. State officials were unable to provide information on how often these circumstances occurred. The affordability of child care for hypothetical families of two (consisting of a parent and 2-year-old) varied as a result of different subsidies and co-payments in nine selected communities. Moreover, the choice that rates afforded families among available providers was generally greater in urban communities than in suburban and rural communities. The only exception was among family home providers in Maryland, where families were able to afford a greater portion of this type of care in suburban and rural communities. We visited three communities in Illinois—one urban, one suburban, and one rural. Table 2 shows the characteristics of Chicago, DuPage County, and DeKalb County. While Illinois set the same reimbursement rate for child care centers for these three communities, the extent to which the rates afforded choice among family home providers and child care centers varied widely, resulting sometimes in large differences between prevailing local fees and maximum reimbursement rates. For example, of those family home providers who accepted child care subsidies, 6 percent to 71 percent had fees that were within (i.e., equal to or less than) the maximum reimbursement rate. Of those child care centers that accepted subsidies, 30 percent to 100 percent had fees within the rate. 
Moreover, to provide our hypothetical low-income families with greater access to family home providers in DuPage County would require a significant increase in the state’s maximum reimbursement rate. Specifically, to allow families access to approximately 50 percent of the family home providers, the maximum reimbursement rate would need to be raised 39 percent from $466 to $650, a monthly increase of $184. See table 3 for comparisons of providers’ fees, reimbursement rates, and percent of providers accepting subsidies who charged fees within the reimbursement rate in three Illinois communities. We visited three communities in Maryland—one urban, one suburban, and one rural. Table 4 shows the characteristics of Baltimore, Montgomery County, and Wicomico County. Across the three Maryland communities, the reimbursement rates afforded our hypothetical families varying levels of access to family home providers and child care centers. As shown in table 5, of those family home providers who accepted child care subsidies, 45 percent to 64 percent had fees that were within the maximum reimbursement rate. The percent of participating child care centers that had fees within the rate varied—from 37 percent to 68 percent. In contrast to Illinois, providing low-income families with greater access to subsidized child care in Maryland would generally require smaller increases in the states’ maximum reimbursement rates. For example, to allow families access to approximately 50 percent of the child care centers in Wicomico County would require raising the maximum reimbursement rate 5 percent from $358 to $375, a monthly increase of $17. See table 5 for comparisons of providers’ fees, reimbursement rates, and percent of providers accepting subsidies who charged fees within the reimbursement rate in three Maryland communities. We visited three communities in Oregon—one urban, one suburban, and one rural. Table 6 shows the characteristics of Portland, Washington County, and Linn County. 
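The rate-gap arithmetic in the DuPage County and Wicomico County examples can be reproduced directly. The helper below is illustrative only (the name `needed_increase` is invented); the dollar figures are those cited above.

```python
# Sketch of the report's rate-gap arithmetic: how much a maximum
# reimbursement rate must rise so that roughly half of participating
# providers' fees fall within it.

def needed_increase(current_rate, target_rate):
    """Return the dollar and percent increase from the current maximum
    rate to the rate that would cover the target share of providers."""
    dollars = target_rate - current_rate
    percent = round(100 * dollars / current_rate)
    return dollars, percent

# DuPage County, Illinois: reaching ~50 percent of family home providers.
dupage = needed_increase(466, 650)     # -> (184, 39): $184 more, a 39% increase

# Wicomico County, Maryland: reaching ~50 percent of child care centers.
wicomico = needed_increase(358, 375)   # -> (17, 5): $17 more, a 5% increase
```

The calculation makes the contrast between the two states concrete: the same access goal costs $184 more per month per child in DuPage County but only $17 more in Wicomico County.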
In Oregon, hypothetical families’ access to providers varied slightly and was limited. For example, of those family home providers who accepted child care subsidies, 10 percent to 24 percent had fees that were within the maximum reimbursement rate. Of those child care centers participating, 0 percent to 17 percent had fees within the rate. See table 7 for comparisons of providers’ fees, reimbursement rates, and percent of providers accepting subsidies who charged fees within the reimbursement rate in three Oregon communities. In the nine communities we visited, most child care providers indicated to local resource and referral offices a willingness to accept subsidized children; center providers reported a willingness to accept subsidized children more often than family home providers. As shown in table 8, 85 percent to 100 percent of child care centers reported a willingness to accept subsidies compared with 47 percent to 97 percent of family home providers across the nine communities. State officials considered the percent of child care providers who were willing to participate in subsidized child care programs an important measure of access. Results from our national survey also showed that the providers’ participation rates varied. In our survey, states estimated that the proportion of licensed child care providers who participated in their subsidized programs ranged from 23 percent to 90 percent, with a median of 69 percent. However, even though provider participation was generally high, local child care resource and referral staff told us that some providers limited the number of subsidized children they accepted at any one time and others may have required parents to pay the difference between the reimbursement rates and providers’ fees. (This last point is discussed in greater detail later in the report.) 
Although maximum reimbursement rates were the same for all subsidized families within a community, a family’s share of this rate, or co-payment, increased as family income increased. For example, for a family of two in Linn County, Oregon, earning $1,017 a month (100 percent of the federal poverty threshold) the maximum reimbursement rate for family home care was $340—comprised of an $85 required family co-payment and a state subsidy of $255. As the family’s income increased to $1,526 a month (150 percent of the federal poverty threshold), its required co-payment rose to $271, and the state subsidy declined to $69. The relationships among co-payments, state subsidies, and income for a family of two in Linn County, Oregon, using family home care are illustrated in figure 4. Required co-payments resulted in families paying from 1 percent to 18 percent of their income for child care across the nine communities. Oregon, which had a statewide co-payment schedule, required our hypothetical families to make the highest co-payments of the three states we visited. Regardless of where they lived, subsidized families with monthly earnings of $1,526 paid 18 percent of their income for child care. Maryland, which varied co-payment amounts by region, required families in Montgomery County to pay higher co-payments than those in Baltimore and Wicomico County. In Illinois, which also has a statewide co-payment schedule, the co-payments in every community were less than 10 percent of family income at 150 percent of the federal poverty threshold. See table 9 for monthly income, family co-payment, and co-payments as a percent of income in the nine communities. While co-payments can be considered as a percentage of family income, they can also be considered as a percentage of the total reimbursement rate; this provides some sense of the portion of the total fee borne by the family and, to some extent, the benefit of participation in the subsidy program. 
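The Linn County figures above follow a simple identity: the state subsidy is the fixed maximum reimbursement rate minus the family co-payment, so the subsidy shrinks dollar for dollar as the co-payment grows with income. A minimal sketch using the report's Linn County numbers (the function name `split_payment` is invented) computes the subsidy and expresses the co-payment both as a share of family income and as a share of the total rate.

```python
# Linn County, Oregon, family home care: the maximum reimbursement rate
# is fixed at $340 per month; the family co-payment rises with income
# and the state subsidy is the remainder. Income and co-payment figures
# are those cited in the report.

RATE = 340  # maximum monthly reimbursement rate, family home care

def split_payment(income, co_payment, rate=RATE):
    """Return the state subsidy, the co-payment as a percent of family
    income, and the co-payment as a percent of the total rate."""
    subsidy = rate - co_payment
    pct_of_income = round(100 * co_payment / income)
    pct_of_rate = round(100 * co_payment / rate)
    return subsidy, pct_of_income, pct_of_rate

# Family of two at 100 percent of the federal poverty threshold:
# subsidy $255; the $85 co-payment is ~8% of income and 25% of the rate.
low = split_payment(1017, 85)

# Same family at 150 percent of the threshold:
# subsidy $69; the $271 co-payment is ~18% of income and ~80% of the rate.
high = split_payment(1526, 271)
```

The second call reproduces both percentages discussed in the surrounding text: the 18 percent of income that Oregon families at this earnings level paid for child care, and the roughly 80 percent of the reimbursement rate borne by the family.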
When considered in this way, a family’s co-payment represented from 2 percent to 80 percent of the reimbursement rate; Oregon families paid the largest share of the reimbursement rate. For example, in rural Linn County, families who earned 150 percent of the federal poverty threshold were responsible for a monthly co-payment of $271, which represented 80 percent of the reimbursement rate for a family home provider. This share was significantly larger than that paid by similar families in the rural communities of DeKalb County, Illinois, and Wicomico County, Maryland, who were responsible for paying 32 percent of the reimbursement rate for family home providers. In addition, in Oregon and Illinois, rural families paid a larger share of the reimbursement rate than families in urban and suburban communities (see table 10). Families at the lowest income levels in each community paid a relatively small share of the total reimbursement rate. Even though our analysis showed that some reimbursement rates did not afford hypothetical families much choice among specific types of child care, state and local officials noted that actual families’ child care options may not be strictly limited by the reimbursement rates. In all three states we visited, families could choose providers whose fees exceeded the state-established reimbursement rates—by paying the co-payment and the difference between the providers’ fees and the reimbursement rates. Families were responsible for these additional payments, and states were generally not part of these financial arrangements with child care providers. State officials could not provide data on how often this occurred. In other instances, state and local officials told us they believed that some regulated providers subsidized the state child care program by accepting maximum reimbursement rates as full payment—even though the rates were less than the fees charged nonsubsidized families. 
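Viewed as a share of the reimbursement rate rather than of income, the same co-payment figures yield the percentages cited above; a small sketch (function name is ours, for illustration only):

```python
def copay_share_of_rate(copayment, max_rate):
    """Family co-payment as a rounded percent of the maximum
    reimbursement rate; the remainder is the state's subsidy share."""
    return round(100 * copayment / max_rate)

# Linn County, Oregon: $271 monthly co-payment against the $340
# family home rate, for a family at 150% of the poverty threshold.
print(copay_share_of_rate(271, 340))  # 80
```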
These officials said that some providers were willing to do so because there was more certainty in receiving state subsidies than private payments from nonsubsidized families. They also told us that some child care providers may build a loyal customer base by accepting reimbursement rates as full payment until families can afford to pay the extra amount. Again, state officials could not provide data on how often this occurred or what adjustments providers made, if any, to accommodate any such foregone revenues. Consistent with federal law, all three state child care programs also allowed subsidized families to use informal child care providers (i.e., unregulated, legally operating providers) in addition to formal, regulated providers. Subsidized families in the three states we visited varied in how frequently they chose this option. States estimated that 25 percent of subsidized families in Maryland, 57 percent in Illinois, and 60 percent in Oregon relied on informal care providers. In our survey, state officials reported that families chose informal providers for many different reasons including convenience, flexibility in hours, and lower costs. State and local officials mentioned that some informal child care providers were willing to forego co-payments because they were aware of the families’ financial circumstances. They could not provide data on how often this occurred. While subsidized families could choose informal child care arrangements, the states we visited generally set lower reimbursement rates for these providers. For example, table 11 shows that informal providers in Baltimore received a maximum reimbursement rate of $215 which was about half of the $429 received by family home providers. See appendix II for information about the reimbursement rates and family co-payments for informal providers in the other eight communities we visited. 
Nonetheless, states varied considerably in the distinction drawn between rates paid to informal providers and those paid to formal, regulated family home providers. In Oregon, the rates were quite close; in Illinois and Maryland, they were much further apart. States made these different choices with regard to reimbursement rates despite the lack of information they reported having on informal providers’ fees or the relationship between the rates and the supply of such care. In the three states we visited, variations in the use of informal child care providers appeared to be influenced by state policies. Illinois and Oregon reported almost the same percentage of families selecting informal providers (57 percent and 60 percent, respectively). Yet, Illinois’ maximum reimbursement rates for informal providers were only about half those established for regulated family homes, while Oregon’s maximum reimbursement rate for informal providers was nearly the same as for regulated family homes. Moreover, like Illinois, Maryland established maximum reimbursement rates for informal providers that were about half those for regulated family home providers, but reported a much smaller portion of subsidized families (25 percent) selecting informal child care providers. However, in Illinois, informal providers may provide full-time child care in the child’s home or in their own home. In Maryland, only relatives may provide full-time child care in their own homes without seeking state licensure, and non-related, informal providers can provide such services only in the child’s home. These policy differences may affect informal providers’ willingness to participate in the states’ subsidized child care programs. Also, according to a Maryland state official, reimbursement rates for formal providers were increased, in part, as an incentive for informal providers to become licensed. 
In the 6 years since passage of PRWORA and the creation of the CCDF, states have exercised broad flexibility in designing child care subsidy programs to support parents’ workforce participation by enhancing their access to affordable child care. In doing so, states have made varied choices regarding which families will be eligible for child care subsidies, how much those families must pay for child care, and how much the state will supplement these payments to offer choice among additional providers. States’ decisions on these issues involve trade-offs and may have unintended as well as intended effects. For example, in the three states we visited, income eligibility standards varied from just over 40 percent to 70 percent of the state median income. However, the state with the highest eligibility standard, perhaps as a consequence, generally offered the lowest reimbursement rates. Similarly, based on our analysis of nine communities in three states, we observed that states were setting reimbursement rates in ways that had widely different implications for the number and type of child care providers from which a hypothetical family could choose, even across different communities within the state. In Illinois, the same maximum reimbursement rates were established for child care providers in Chicago and neighboring DuPage County, perhaps due to concerns for compensating providers equitably across political boundaries. However, the markedly different prices charged by providers in different localities made for very large differences in the selection that these rates afforded eligible families. Finally, the issue of selection or usage is more complex than reimbursement rates alone; states’ policies such as licensing provisions are also important because they affect parents’ choices and the supply of child care providers. The Department of Health and Human Services provided written comments on a draft of this report. These comments are reprinted in appendix III. 
HHS took no issue with our principal findings and indicated that the report raises important questions about information that would be helpful on the potential effects of reimbursement rates on families and other aspects of the child care market. In this connection, HHS cited studies it funds—through the CCDF set aside for research, demonstration, and evaluation—and its efforts to encourage states to study the interrelationship between state policies (including those related to child care subsidies) and child care markets. HHS also provided technical comments, which we incorporated as appropriate. As requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time we will send copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me on (202) 512-7215. Other staff who contributed to this report are listed in appendix IV. To describe how states set reimbursement rates, we conducted a mail survey of the state child care officials in 50 states and the District of Columbia, of which 49 responded for an overall response rate of 96 percent. The survey included questions on market rate surveys and other factors that states may have considered in setting rates. While we asked state child care administrators to assess the importance of various factors in setting reimbursement rates, we did not independently verify their assessments by, for example, comparing historical data on these factors with actual state decisions. 
In addition to gathering this information through our survey, we interviewed state child care program officials in Illinois, Maryland, and Oregon to learn how they set reimbursement rates. We also interviewed consultants who assisted state program officials with analyzing their market rate survey results. In selecting the states for our field work, we sought to include states that had (1) child care resource and referral (CCR&R) networks with comprehensive data on providers and the fees they charged; (2) model market rate surveys; (3) varying income eligibility limits, reimbursement rates, and co-payment fees; (4) different utilization patterns for informal child care providers; and (5) some geographic diversity. We visited three states and met with officials of state, local, and community-based organizations in three locations in each state—one urban, one suburban, and one rural. Our field work was performed in Chicago, DuPage County, and DeKalb County, Illinois; Baltimore, Montgomery County, and Wicomico County, Maryland; and Portland, Washington County, and Linn County, Oregon. To determine the extent to which reimbursement rates were likely to afford hypothetical families access to specific types of child care providers, we obtained data on providers’ fees for full-time care from CCR&R network databases in each of the three states we visited. The local CCR&R offices in each of the communities we visited collected actual information on providers’ fees. The local CCR&R offices submitted the information about these fees to their networks that compiled this information throughout the state. CCR&R networks supplied us with provider fee data for each of the nine communities we visited. CCR&R databases were relied on because the data on providers’ fees were readily available and current. 
While we did not conduct tests for accuracy or reliability of the CCR&R databases, state officials and CCR&R staff expressed confidence in the accuracy and comprehensiveness of the data. In calculating the percentage of providers who had fees that were equal to or less than the state-established reimbursement rates, we included those providers who indicated a willingness to accept Child Care and Development Fund (CCDF) funded subsidies. This information was self-reported by most child care providers. In instances where providers did not report whether they accepted the state’s subsidy or indicate a willingness to accept the subsidy, they were included in the total number of providers in a community but were not counted as accepting the subsidy. Since Illinois provider fees were reported as a weekly rate and reimbursement rates were set on a daily basis, both sets of numbers were converted to reflect monthly provider fees and monthly reimbursement rates. Using a multiplying factor of 4.33, representing the average number of weeks in a month, we converted providers’ fees from a weekly to monthly basis. Using a multiplying factor of 21.65, representing the average number of work days in a month, we converted daily reimbursement rates to monthly rates. Because Maryland provider fees were reported as a weekly rate and reimbursement rates were set on a monthly basis, we converted the provider fees so we could compare them with the state-established reimbursement rates. Using a multiplying factor of 4.33, representing the average number of weeks in a month, we converted providers’ fees from a weekly to monthly basis. Oregon provider fee data were also reported in different time increments than the state-established reimbursement rates; however, we did not convert these fee data to a single common unit. Providers reported their fees in hourly, daily, weekly, or monthly increments; the state established hourly and monthly rates. 
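The conversions described above reduce to two constant factors; the sketch below applies them and computes the share of participating providers whose converted fees fall within a monthly reimbursement rate. The specific fee and rate figures here are invented for illustration, not drawn from the state data:

```python
WEEKS_PER_MONTH = 4.33      # average number of weeks in a month
WORKDAYS_PER_MONTH = 21.65  # average number of work days in a month

def weekly_fee_to_monthly(weekly_fee):
    """Convert a provider's weekly fee to a monthly basis (as done
    for Illinois and Maryland fee data)."""
    return weekly_fee * WEEKS_PER_MONTH

def daily_rate_to_monthly(daily_rate):
    """Convert a daily reimbursement rate to a monthly basis (as done
    for Illinois rates)."""
    return daily_rate * WORKDAYS_PER_MONTH

def percent_within_rate(monthly_fees, monthly_rate):
    """Percent of providers whose monthly fee is at or below the
    state-established monthly reimbursement rate."""
    within = sum(1 for fee in monthly_fees if fee <= monthly_rate)
    return round(100 * within / len(monthly_fees))

# Illustrative only: four weekly provider fees and a $30 daily rate.
fees = [weekly_fee_to_monthly(w) for w in (110, 125, 140, 160)]
rate = daily_rate_to_monthly(30)
print(percent_within_rate(fees, rate))  # 75
```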
Oregon consultants advised us not to convert provider fee data because providers who charged in different time increments may operate differently. The consultants suggested that providers who usually charge in less than monthly increments might offer slight discounts to families who use their services for a month or longer. As a consequence, we directly compared providers’ fees reported in hours and months to the state’s hourly and monthly reimbursement rates. For providers’ fees reported in days or weeks, we divided monthly reimbursement rates by 21 (slightly less than the average number of work days in a month to account for a discount) to determine daily rates. In addition, we multiplied these calculated daily rates by 5 to determine weekly rates. We discussed this approach with the consultant who conducted market rate studies for the state. Because of the complexity of converting data on providers’ fees, we did not calculate a median monthly provider fee for the three communities we visited in Oregon. In determining hypothetical families’ access to the nine communities across three states, in one case, we limited the scope of our analysis. To prevent geographical differences in income from limiting the usefulness of our analysis and because of the much larger size of the city of Chicago, we included only that area of Chicago that had a lower average median income. We selected the lower-income area based on preliminary analysis that showed a high percentage of providers in the area indicated a willingness to accept subsidies. Although some higher-income areas are covered and some lower-income areas excluded, for ease of analysis we included all contiguous zip codes south of the Chicago central business district. 
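The Oregon-specific comparison rules above can be captured the same way; a sketch under the assumptions stated in the text (the $2.50 hourly and $420 monthly state rates are invented for illustration):

```python
def oregon_comparison_rates(hourly_rate, monthly_rate):
    """Derive daily and weekly comparison rates from Oregon's
    state-established hourly and monthly reimbursement rates.
    The monthly rate is divided by 21 (slightly below the average
    number of work days in a month, to account for a discount), and
    the resulting daily rate is multiplied by 5 for a weekly rate.
    Hourly and monthly fees are compared to the state rates directly."""
    daily = monthly_rate / 21
    return {"hourly": hourly_rate, "daily": daily,
            "weekly": daily * 5, "monthly": monthly_rate}

rates = oregon_comparison_rates(2.50, 420)
print(rates["daily"], rates["weekly"])  # 20.0 100.0
```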
Since family co-payments vary by such factors as family income and family size, and the fees that providers charge also vary depending on a child’s age and the type of child care, we used a hypothetical two-person family (consisting of a parent and 2-year-old child) in our analysis. This family size was selected after reviewing fiscal year 1999 Temporary Assistance for Needy Families (TANF) recipient data that showed that most single parent families have one child, and most TANF cases that include adults have only one parent. The age of the hypothetical child was selected after reviewing CCDF recipient data on the ages of children served. To determine the percent of family income that would be spent for co-payments in the three states, we varied family income from 75 percent of the federal poverty threshold to 150 percent of the federal poverty threshold. We used the same procedure in determining the percent of the reimbursement rates represented by a family’s required co-payment. At the federal level, we interviewed officials at the Department of Health and Human Services in Washington, D.C., and regional offices in Chicago, Illinois, and Philadelphia, Pennsylvania. We reviewed documents concerning CCDF legislation, HHS rules and regulations, HHS data and reports on access for low-income families, and obtained copies of states’ CCDF plans for fiscal years 2002-2003 that contained the states’ co-payment fee structures, and generally included information about market rate survey results and reimbursement rates. We also interviewed child care policy experts and reviewed current literature on subsidized child care. For the three states we visited, we obtained data on family monthly co-payments and reimbursement rates for informal providers. These states generally did not collect information on the fees charged by informal providers. Moreover, local CCR&R offices generally did not collect information on informal child care providers or include them in their databases. 
As shown in tables 12 to 16, each of the three states we visited paid rates that were lower for informal care than for other types of care. States made different choices regarding such rates despite the lack of information on informal providers’ fees, or the effect of established rates on the supply of such care. See tables 12 to 16 for reimbursement rates and family co-payments for informal providers in eight communities we visited. Information on Baltimore, Maryland, is shown in table 11. The following people also made important contributions to this report: Danielle T. Jones; R. Scott McNabb; Cynthia Decker; Patrick diBattista; Joel Grossman; Elsie Picyk; Bill Keller; and Daniel Schwimer. Child Care: States Have Undertaken a Variety of Quality Improvement Initiatives, but More Evaluations of Effectiveness Are Needed. GAO-02-897. Washington, D.C.: September 6, 2002. Early Childhood Programs: The Use of Impact Evaluations to Assess Program Effects. GAO-01-542. Washington, D.C.: April 16, 2001. Child Care: States Increased Spending on Low-Income Families. GAO-01-293. Washington, D.C.: February 2, 2001. Child Care: How Do Military and Civilian Center Costs Compare? GAO/HEHS-00-7. Washington, D.C.: October 14, 1999. Child Care: Use of Standards to Ensure High Quality Care. GAO/HEHS-98-223R. Washington, D.C.: July 31, 1998. Welfare Reform: States’ Efforts to Expand Child Care Programs. GAO/HEHS-98-27. Washington, D.C.: January 13, 1998. Welfare Reform: Implications of Increased Work Participation for Child Care. GAO/HEHS-97-75. Washington, D.C.: May 1997.
Federal welfare legislation passed in 1996 placed a greater emphasis on helping low-income families end dependence on government benefits by promoting job preparation and work. To reach this goal, the legislation gave states greater flexibility to design programs that use federal funds to subsidize child care for low-income families. Under the Child Care and Development Fund, this flexibility includes the freedom to largely determine which low-income families are eligible to receive child care subsidies. States also establish maximum reimbursement rates for child care. These maximum rates consist of two parts--a state subsidy and a family co-payment. States reported considering market rate surveys and budget and policy goals in setting maximum reimbursement rates. All states reported conducting market rate surveys in the past 2 years that obtained data on providers' fees, but 10 states reported that they did not base the reimbursement rates for child care providers on their most recent market rate surveys. In the nine communities visited, GAO calculated that hypothetical families' access to child care centers and family home providers varied widely as a result of the different subsidies and family co-payments established by each state.
If IRS’s budget request is approved, IRS will have more than 3,400 staff years that can be assigned to new or existing activities in fiscal year 2003. These include the 1,179 additional staff years requested in the budget and the 2,287 staff years that IRS determined could be redirected elsewhere in the organization due to projected savings from several improvement projects and workload decreases. These 3,400 staff years can make a real impact on IRS’s performance if they are targeted to selected areas. However, the availability of these staff years depends on the projected savings being realized and no significant unanticipated expenses. In addition, it is difficult to evaluate the effect that these additional and redirected staff years will have on IRS’s operations because the budget is not well-linked to performance goals in some important areas. With respect to that part of the budget request for information technology, IRS (1) did not adequately support the $1.63 billion requested for operation and maintenance of its information systems but (2) did adequately support its $450 million request for business systems modernization. IRS’s fiscal year 2003 budget request is based on several assumptions that could prove optimistic. These include (1) labor and nonlabor savings of 2,287 staff years and $157.5 million from various improvement projects and workload decreases that IRS plans to use elsewhere in the organization, and (2) additional savings of $38.5 million resulting from better business practices that have not yet been identified. Also, IRS may face some unanticipated expenses that, if not funded, could cause it to revise its financial plan for fiscal year 2003. In many respects, this kind of uncertainty is the natural result of a process that requires the development of budget estimates many months before the fiscal year in question. 
No matter the reason, the end result could be unrealized savings or unexpected expenses that, as in the past, lead to cutbacks in planned hiring—cutbacks that historically have hit IRS’s enforcement programs the hardest. Through use of its strategic planning, budgeting, and performance management process, IRS identified a myriad of expected efficiency improvements, technological enhancements, labor-saving initiatives, and workload decreases that it projects will enable it to redirect $157.5 million in its base budget to higher-priority areas. Examples include (1) saving over $67 million from re-engineering and quality improvement efforts, such as consolidating form printing and distribution operations and updating an antiquated workload selection system to reduce or eliminate the substantial number of tax returns that are ordered but never audited, and (2) reducing the resources used for the innocent spouse program by $13.8 million due to an expected decrease in caseload. We commend IRS for taking the initiative to reassess the allocation of resources in its base budget. However, the congressional justification submitted by IRS in support of its budget request does not explain how IRS developed the labor and nonlabor savings. IRS provided us with information on the overall method used to develop the savings and explained that, in a change from IRS’s previously used top-down process, operating units determined the resource increases and decreases their programs needed. However, IRS did not provide details on how specific savings were computed, such as information on any assumptions used in developing specific estimates. In response to the secretary of the Treasury’s challenge for each Treasury bureau to review all programmatic efforts and reduce or remove those producing little or no value, IRS officials estimated that such a review could save $38.5 million. 
IRS’s congressional justification notes that the secretary considers this review to be a work in progress and expects bureau heads and financial plan managers “to work creatively on mid-course adjustments” until the final quarter of fiscal year 2003. Accordingly, the congressional justification provides no details on how the $38.5 million will be achieved. Any shortfall in the estimated labor and nonlabor savings or in savings from efforts to reduce or eliminate programs will only be exacerbated if IRS has to absorb unanticipated budget increases. For example, IRS officials estimated that it would cost an additional $69 million if the civilian pay raise included in this budget was increased to achieve parity with the proposed pay raise for the military. In fiscal year 2002, IRS faced unbudgeted cost increases related to rent, pay raises, security, and postage rate increases. As a result, IRS had to delay hiring revenue agents and officers, tax compliance officers, and tax specialists. According to IRS, “the lack of full funding for non-labor inflation over the years has greatly reduced the IRS ability to cover pay raise costs and other legitimate cost increases by reducing non-labor costs, leaving the IRS with the sole alternative of reducing staff.” IRS noted that “these budget constraints forced the IRS to reduce 1,364 FTEs in the 2002 plan.” Although we do not have specific evidence of how this FTE reduction affected IRS’s operations, IRS data does indicate that the number of revenue agent FTEs in its current financial plan for fiscal year 2002 (11,836) is 691 fewer than the actual revenue agent FTEs in fiscal year 2000 (12,527)—despite funding of an initiative in fiscal years 2001 and 2002 that, among other things, was to increase the number of revenue agent FTEs. The Government Performance and Results Act of 1993 requires agencies to establish linkages between resources and results. 
With this requirement, Congress hoped to focus agencies on achieving better results for the American public. Congress also hoped to gain a better understanding of what is being achieved in relation to what is being spent. In some respects, IRS’s congressional justification has good links between the resources being requested and IRS’s performance goals. For example, IRS’s budget includes an increase of 213 FTEs and $14.1 million to improve its telephone level of service, and its performance measures show an expected increase in toll-free telephone level of service from 71.5 percent in fiscal year 2002 to 76.3 percent in fiscal year 2003. However, in other important areas, the congressional justification is not well-linked to performance goals. In some instances, there are no performance goals against which Congress can hold IRS accountable. In other instances, there seem to be inconsistencies between the amount of resources being requested and the expected change in performance or workload. A significant example of missing performance goals involves IRS’s efforts to address major areas of systematic noncompliance. In February 2002, the commissioner of Internal Revenue identified four such areas: (1) misuse of devices, such as trusts and passthroughs, to hide income; (2) use of complex and abusive corporate tax shelters to reduce taxes improperly; (3) failure to file and pay large accumulations of employment taxes; and (4) erroneous refund claims, which include claims made under the Earned Income Credit (EIC) program. The budget request includes increased resources for compliance but, except for the EIC program, it is unclear from IRS’s congressional justification how many resources IRS intends to devote to each of these problems. And, for none of these areas, including the EIC program, does the congressional justification include performance measures and goals that Congress can use to assess IRS’s progress in addressing these major compliance problems. 
IRS’s congressional justification is clear about the amount of resources IRS plans to devote to EIC compliance efforts because the budget request calls for the continuation of a separate appropriation for that program. If approved, it will be the sixth year of targeted funding for the EIC program. IRS’s compliance efforts under this program have prevented the payment of hundreds of millions of dollars of improper EIC claims. However, the most recent IRS information shows that the rate of EIC noncompliance is still very high. According to IRS’s report on its analysis of EIC compliance rates on tax year 1999 returns filed in 2000, (1) about one-half of the 18.8 million returns on which taxpayers claimed the EIC involved overclaims and (2) of the estimated $31.3 billion in EIC claims made by taxpayers who filed returns in 2000, between $8.5 billion and $9.9 billion should not have been paid. Audit coverage is another area where performance goals would help Congress assess IRS’s progress. IRS states in its congressional justification that it will increase the resources for stabilizing audit rates by 368 FTEs and $24 million. Although the congressional justification states that audit rates have fallen, the justification does not include any information about current audit rates or what rates IRS expects to achieve in 2003. 
Given the amount of resources that could be involved in dealing with the four major compliance problems cited by the commissioner and increasing overall audit coverage, the Subcommittee may want to ask IRS to provide (1) more specifics on the level of resources it plans to devote to each of these areas and its performance measures and goals for each area and (2) its views on maintaining a separate appropriation for the EIC versus combining in one appropriation those resources with the resources being requested for other compliance work, which could give IRS more flexibility in deciding how best to allocate its resources among all of its compliance needs. The budget request and performance goals included in the congressional justification are, at times, inconsistent. Some of those inconsistencies might suggest that additional resources beyond those identified by IRS are available for redirection. Specific examples of inconsistencies include the following: A requested increase of 476 staff years and $20.7 million for “increased Offer-in-Compromise cases” is inconsistent with IRS’s performance goal for that program, which shows that the number of cases processed is expected to decrease from 185,000 in 2002 to 104,600 in fiscal year 2003. This requested increase also conflicts with our recent evaluation of the program that shows that IRS projected that the number of staff years needed would decrease from 1,818 in fiscal year 2002 to 1,224 in fiscal year 2003. In response to our question about this, IRS officials said that the staff year increase is to replace revenue officers who currently handle the cases so there is not a net increase in staff years for the offer program. This does not help explain why IRS is asking for an increase in resources when the workload is expected to decline and IRS had projected a decreased need for staff in the program. 
According to IRS’s budget request, the field and electronic/correspondence exam units will receive about the same number of staff years as the year before, while in terms of dollars, the field exam unit will receive an increase of less than 3 percent and the electronic/correspondence unit will receive an increase of about 7 percent. However, IRS’s performance measures show the field exam unit is expected to examine 33 percent more individual returns and almost 35 percent more business returns while the electronic/correspondence unit is expected to increase the number of correspondence examinations by 32 percent. It is not clear from the congressional justification how IRS expects to do so much more work with just a small increase in resources. IRS told us that one reason for the apparent inconsistency is that correspondence audits run on a 2-year cycle, with a high number of case starts in one year and a large number of case closures in the next year. IRS’s budget request includes an additional 197 staff years and $8.3 million for processing a projected growth in the total number of primary returns filed from about 225.9 million returns in fiscal year 2002 to about 230.0 million returns in fiscal year 2003. However, according to IRS’s performance measures, that projected growth is the net of an increase of about 7.6 million returns filed electronically and a decrease of about 3.4 million returns filed on paper. That decline in the more costly-to-process paper returns would seem to argue against the need for additional processing resources. 
In response to our question about this, IRS acknowledged that the number of paper returns was expected to decline but said, nonetheless, that its computation of the number of additional full-time equivalents (FTEs) needed was “based on an estimate of direct hours needed to process expected paper returns.” Because the congressional justification provides inadequate information to explain the apparent inconsistencies discussed in the preceding section and because, in some respects, those inconsistencies suggest that additional resources might be available for redirection to other purposes, the Subcommittee may want to ask IRS for additional information in support of those parts of its budget request. IRS is requesting $2.13 billion and 7,449 staff years in information technology (IT) resources for fiscal year 2003. This includes (1) $450 million for the agency’s multiyear capital account that funds contractor costs for the Business Systems Modernization (BSM) Program, which is adequately justified, and (2) $1.68 billion and 7,449 staff years for information systems, of which $1.63 billion for operations and maintenance is not adequately justified. With respect to the $1.63 billion request for operations and maintenance, IRS was unable to provide sufficient support for us to identify possible budget reductions. Key provisions of the Clinger-Cohen Act, the Government Performance and Results Act, and Office of Management and Budget (OMB) guidance on budget preparation and submission (e.g., Circular No. A-11) require that, before requesting multiyear funding for capital asset acquisitions, agencies develop sufficient justification for these investments. This justification should reasonably demonstrate how proposed investments support agency mission operations and provide positive business value in terms of expected costs, benefits, and risks.
Since the BSM appropriation was established in fiscal year 1998, we have consistently reported that IRS has not developed adequate justification for its budget requests, and we have proposed that Congress consider reducing them. During this same time, we have repeatedly recommended that IRS put in place an enterprise architecture (modernization blueprint) to guide and constrain its business system investments. Use of such a blueprint is a practice of leading public and private sector organizations. Simply stated, this architecture provides a high-level roadmap for business and technological change from which agencies can logically and justifiably derive their budget requests and capital investment plans. In response, IRS has developed various versions of an enterprise architecture, which we have continued to review and for which we have recommended improvements. IRS recently approved a new version of this architecture (version 2.0), which, based on a briefing to us and others, appears to provide robust descriptions of IRS’s current and target business and technology environments. IRS has also drafted, and executive management is reviewing, the associated high-level transition plan that identifies and conceptually justifies needed investments to guide the agency’s transition over many years from its current to its target architectural state. IRS’s $450 million request is based on its enterprise architecture as well as related life cycle management and investment management process disciplines for its ongoing project investments. As such, this request is grounded in analyses that meet the statutory and regulatory requirements for requesting multiyear capital investment funding. Pursuant to statute, funds from the BSM account are not available for obligation until IRS submits to the congressional appropriations committees for approval an expenditure plan that meets certain conditions.
In November 2001, IRS submitted its fifth expenditure plan seeking approval to obligate the $391 million remaining in the BSM account at that time. In briefings to the relevant appropriations subcommittees and IRS, we reported our concerns about the escalating risk that IRS will be unable to deliver promised BSM system capabilities on time and within budget due to the number and complexity of ongoing and planned systems acquisition projects and the continued lack of certain key modernization management controls and capabilities. In approving the expenditure plan, the appropriations subcommittees directed IRS to reconsider the scope and pace specified in the November 2001 expenditure plan to ensure that the number and complexity of modernization projects underway are commensurate with IRS’s management capacity and to fully establish and implement all process controls needed to effectively manage the modernization effort prior to the submission of IRS’s next expenditure plan. In response to these and other concerns raised by the appropriations committees and us, IRS has committed to aligning the pace of the BSM program with the maturity of the organization’s management controls and management capacity and is currently conducting a reassessment of the projects it plans to deploy during fiscal year 2002. In addition, IRS has taken appropriate steps toward implementing missing management controls. Leading private and public sector organizations have taken a project- or system-centric approach to managing not only new investments but also operations and maintenance of existing systems. As such, these organizations identify operations and maintenance projects and systems for inclusion in budget requests; assess these projects or systems on the basis of expected costs, benefits, and risks to the organization; analyze these projects as a portfolio of competing funding options; and use this information to develop and support budget requests.
This focus on projects, their outcomes, and risks as the basic elements of analysis and decisionmaking is incorporated in the IT investment management approach recommended by OMB and us. By using these proven investment management approaches for budget formulation, agencies have a systematic method, based on risk and return on investment, to justify what are typically very substantial operations and maintenance budget requests. These approaches also provide a way to hold IT managers accountable for operations and maintenance spending and the ongoing efficiency and efficacy of existing systems. IRS did not develop its information systems request in accordance with these best practices of leading organizations. In particular, the largest elements of IRS’s budget request are not projects or systems. Rather, they are requests for staffing levels or other services. For example, IRS is requesting $240 million for staff and equipment supporting operations and maintenance of desktop computers agencywide, as well as $111 million for staff and equipment supporting its major computing centers’ operations. Further, it is requesting $266 million for telecommunications services contracts. Taken together, these three initiatives constitute about 38 percent of the total $1.63 billion being requested for operations and maintenance, but the budget request gives no indication regarding how these initiatives are allocated to systems. In addition, in developing these requests, IRS did not identify and assess the relative costs, benefits, and risks of specific projects or systems in these areas. Instead, according to IRS officials, they simply took what was spent last year in the categories and added the money to fund cost-of-living and salary increases. 
IRS officials responsible for developing the IT operations and maintenance budget attributed the differences between IRS practices and those followed by leading organizations to the lack of an adequate cost accounting system, cultural resistance to change, and a previous lack of management priority. To better justify future budget requests, these officials said that they have assessed the strengths and weaknesses of IRS’s budgeting and investment management processes against our IT investment management framework and found significant weaknesses in 15 critical areas. To address the weaknesses, IRS is currently developing capital planning guidance based on our IT investment management framework. This guidance is to be issued by late summer 2002, but a schedule for implementing it has yet to be determined. In addition, IRS has adopted and is in the process of implementing a cost model that is to enable it to account for the full costs of operations and maintenance projects and determine how effectively IRS projects are achieving program goals and mission needs. IRS plans to have the cost model in place and operational by June 30 of this year so that it can validate its fiscal year 2003 information systems appropriation request and begin using it to develop the fiscal year 2004 request. The key to making these plans a reality is overcoming the very reasons that have allowed this budgetary formulation and justification weakness to continue unabated—accounting system limitations, cultural resistance, and low management priority. Although IRS has initiated actions to address these weaknesses, we are concerned about whether they will be implemented in time to have a meaningful impact on formulation of the fiscal year 2004 budget request. For example, IRS has not yet developed a plan and schedule for implementing its IT capital planning guidance. In addition, IRS officials told us that they are already beginning the process to develop the fiscal year 2004 budget.
Consequently, until IRS overcomes its obstacles, its future information systems appropriation requests, like its fiscal year 2003 request, will not be adequately justified. To aid IRS in overcoming the barriers to changing how it develops and justifies its information systems appropriation request, we recommend to the commissioner of internal revenue that IRS prepare its fiscal year 2004 information systems budget request in accordance with leading organizations’ best practices. So far this filing season, IRS has processed returns smoothly with one major exception, seen continued growth in electronic filing, and achieved some improvements in telephone service. The one exception to smooth processing has been the large number of errors taxpayers are making related to the rate reduction credit. Although the errors have not affected the timeliness of processing, they have resulted in a significant error correction workload for IRS, the rejection of some electronically filed returns, and an increased demand for telephone assistance that, according to agency officials, is affecting taxpayers’ access to IRS’s telephone assistors. One issue that continues to affect IRS’s ability to assess its filing season performance is missing performance measures. While IRS has measures that provide useful information on some aspects of its service and is making efforts to improve its performance measures, some measures of telephone service are constructed in a way that misses important aspects of the activity being measured, and IRS has delayed implementation of some accuracy measures for services provided at walk-in offices. This filing season, IRS experienced very few of the kinds of processing problems, such as those caused by computer programming errors, that it has often experienced at the beginning of a filing season, and the number of returns filed electronically continues to grow.
The one major negative in this otherwise positive picture has been the significant number of returns IRS has received with errors related to the rate reduction credit. The Economic Growth and Tax Relief Reconciliation Act of 2001 (P.L. No. 107-16) directed the secretary of the Treasury to issue advance tax refunds to eligible taxpayers. Accordingly, millions of taxpayers received checks of up to $600 between July and December 2001. Taxpayers who did not receive an advance refund as part of that process or who received less than the maximum allowed by law may have been entitled to a rate reduction credit when filing their tax year 2001 returns in 2002. Accordingly, IRS added a line to the individual income tax forms for eligible taxpayers to enter a credit amount and provided a worksheet for taxpayers to use in determining if they were eligible. So far, during the 2002 filing season, the rate reduction credit has led to millions of tax returns with errors. The result has been significant error-correction workloads for IRS and a large increase in the number of error notices sent to taxpayers. In retrospect, at least some of these errors might have been avoided if IRS had taken certain steps to better help taxpayers deal with this new tax return line item. One of the steps IRS took to deal with the large number of errors related to the rate reduction credit was to reject certain electronic submissions involving rate reduction credit errors. Even so, electronic filing has continued to grow—although not at a rate that would allow IRS to meet its long-term goal. As table 1 shows, of the approximately 46 million returns that IRS had processed as of March 15, 2002, about 4.7 million, or 10 percent, had errors made by taxpayers or their return preparers—more than twice the error rate at the same time last year but roughly comparable to the error rate IRS expected.
Of the approximately 4.7 million returns with errors, about two-thirds, or 3.1 million, had errors related to the rate reduction credit. Taxpayers and return preparers are making various types of errors related to the rate reduction credit. Many taxpayers who did not receive an advance of their rate reduction credit in 2001, and thus should be claiming the credit on this year’s return, are not doing so. Other taxpayers are recording the amount of the credit they received in 2001 on the rate reduction credit line of this year’s return instead of recording zero. And other taxpayers, who are entitled to a credit and are claiming one, are incorrectly computing the amount to which they are entitled. Once IRS recognized that taxpayers and preparers were having problems with the rate reduction credit, it took immediate action in an attempt to minimize future errors and avoid refund delays. IRS posted information to its Web site, began a public awareness campaign that included news releases to media outlets, and provided clarifying information to preparers who file returns electronically. Despite IRS’s efforts, the rate at which taxpayers and return preparers are making errors related to the rate reduction credit has remained relatively constant. Because IRS anticipated an increase in errors this year and because IRS has been able to correct the rate reduction errors relatively quickly, we are not aware of any adverse impact on IRS’s ability to process returns and refunds in a timely manner as a result of the increased error-correction workload. IRS is treating these errors as “math errors”; that is, it corrects the mistake and either adjusts the taxpayer’s refund or notifies the taxpayer of additional tax owed. However, it remains to be seen what happens around April 15, when the largest volume of paper returns is filed.
Even if IRS is able to effectively correct the large volumes of erroneous returns throughout the filing season, there are costs involved, including the cost of generating and mailing several million error notices to affected taxpayers and the costs of the resources IRS had to devote to working the increased error-correction workload. Although IRS took several steps after the filing season began in response to the large number of rate reduction credit errors, we believe, in retrospect, that some of those errors might have been prevented if the instructions for Forms 1040, 1040A, and 1040EZ had been more clear. For example, IRS did not highlight the rate reduction credit or the new line on the tax form related to the rate reduction credit on the cover page of the instructions, where IRS alerts taxpayers to changes from the prior year. Instead, IRS highlighted the fact that tax rates were reduced. Only if taxpayers read the paragraph under the highlighted caption “Tax Rates Reduced” would they see mention of the credit. The instructions for Forms 1040, 1040A, and 1040EZ might have also been clearer if IRS had included some information that was included on its Web site. In that regard, the instructions indicate that if a taxpayer received— before any offset—an amount equal to either $600, $500, or $300 based on his or her filing status, the taxpayer is not entitled to a rate reduction credit. There is no further explanation of the term “before any offset”—a term that may be unclear to many taxpayers. However, IRS’s Web site spells out more clearly what is meant by this term, explaining that if taxpayers had their advance payment offset to pay back taxes, other government debts, or past due child support, they cannot claim the rate reduction credit for the amount that was offset. Although the Web site includes this more descriptive information, there is no guarantee that a given taxpayer either has access to or will use the Web site. 
In retrospect, including the same explanation of “before any offset” in the instructions would have made the instructions clearer. Another step IRS took that has reduced its error-correction workload due to the rate reduction credit was to begin rejecting electronic submissions that involved certain types of errors related to the credit. By doing so, IRS required the taxpayer or return preparer to correct the error before IRS would accept the electronic return. This is consistent with IRS’s traditional practice of rejecting electronic submissions that contain other errors, such as incorrect Social Security numbers. IRS began rejecting electronic submissions with errors involving the rate reduction credit around the beginning of February. As of March 24, 2002, IRS had rejected about 226,000 such submissions. We do not know whether these rejected submissions caused potential electronic filers to file instead on paper. However, as shown in table 2, the number of individual income tax returns filed electronically as of March 29, 2002, has grown by 14.0 percent—an increase over the rate of growth at the same time last year. While this kind of increase is not insignificant, IRS will need larger increases in the future if it is to achieve its goal of having 80 percent of all individual income tax returns filed electronically by 2007. 
To encourage more electronic filing in 2002, IRS, among other things, mailed letters to about 250,000 tax professionals, asking those who had been filing electronically to continue supporting the program and encouraging others to file electronically; mailed about 23 million postcards to certain taxpayers, such as those who had received TeleFile packages in the past 2 years but did not file their tax returns via TeleFile, alerting them to the benefits of electronic filing; and made changes to one program that enabled electronic filers to sign their returns using a personal identification number (PIN) and reinstituted another PIN-based signature program. IRS also redirected its marketing efforts to encourage persons who have been preparing tax returns on a computer but filing on paper to file electronically. Considering that about 40 million computer-prepared returns were filed on paper in 2001, conversion of those returns to electronic filings could go a long way toward helping IRS achieve its 80-percent goal. In our report on the 2001 filing season, we recommended that IRS directly survey tax professionals and taxpayers who file computer-prepared returns on paper to get more specific information on why they are not filing electronically. We have been told that IRS will be undertaking such a survey in the near future. So far this filing season, taxpayers in the queue for telephone assistance are spending less time waiting to talk with an assistor and are getting accurate answers to their tax law questions more often than last year. At the same time, however, the overall rate at which callers are reaching an assistor is lower because many callers are unable to get into the queue for assistance. Telephone assistance is a significant part of IRS’s work.
This fiscal year, IRS expects to answer about 108 million telephone calls, about 72 million to be answered via automated services and about 34 million to be answered by about 10,000 full- and part-time telephone assistors, called customer service representatives. Accordingly, the ease with which taxpayers reach IRS by telephone and the accuracy of the assistance they receive are important indicators of how well IRS is performing. IRS’s performance in providing this service has been a perennial problem, and its struggles to improve service have been a topic at hearings held by this Subcommittee for many years. As we reported in December 2001, IRS has made limited progress toward its long-term goal of providing taxpayers “world-class customer service”—service comparable to the best provided by other organizations. In recent years, IRS has made significant strides in developing performance measures to tell how well it is serving taxpayers by telephone. IRS has established a set of measures to focus efforts on enhancing taxpayers’ access to accurate assistance. As shown in table 3, some of these measures indicate significant improvements in taxpayer service when compared to the same period last year. For example, during the first 11 weeks of the 2002 filing season, taxpayers, on average, waited a minute-and-a-half less to speak to an assistor, there was an 18 percentage point improvement in taxpayers reaching assistors in 30 seconds or less, and the quality of tax law assistance, which involves following IRS procedures and providing accurate responses, improved about 11 percentage points. However, there was a 5-point decline in the percentage of callers that attempted to reach an assistor and actually got through and received service (referred to as the customer service representative (CSR) level of service). According to IRS officials, an increased demand for assistance related to the rate reduction credit has been a key factor affecting taxpayer access to assistors.
(See appendix I for more detail on the level of access this filing season compared to last and the likely impact of the rate reduction credit.) The increased call volume was not allowed to lengthen the queue. Instead, taxpayers were provided access to automated services, which often results in callers hanging up, or were advised by a recorded message that IRS could not provide assistance. According to IRS officials, several IRS efforts have contributed to improvements in telephone performance. For example, IRS implemented a strategy to improve tax law accuracy that included hiring and training assistors earlier than in past years and putting them on the telephones in December to help hone their skills before the filing season began. IRS also required assistors to be certified as having successfully completed the necessary training and as being able to accurately answer calls in their assigned topics, and it used its computer-based call routing system to help ensure that assistors answered calls only in those topics for which they had been certified. Some officials opined that improvements in accessibility may be linked to IRS’s efforts to establish new performance measures and goals for the call sites this year. For example, each site has a goal for the total number of calls its assistors are to answer in a fiscal year. IRS officials say the new measures have led to improved performance by giving the call sites a clearer understanding of what they are expected to achieve and how their performance helps IRS achieve its goals. IRS executives in the Wage and Investment and Small Business/Self-Employed divisions said that they believe that IRS has been successful in getting employees at all levels of the telephone service organizations to understand and accept the measures and contribute to achieving the goals.
IRS officials cited several other service improvement efforts as potentially boosting performance, including initiatives to bring more highly skilled employees on board, increased specialization at the assistor and call site levels, and reduced hours of service to increase the number of assistors available to answer phone calls during the hours when most taxpayers call IRS. We will monitor these and other factors that may have affected IRS’s telephone service as we continue to assess the 2002 filing season. Although IRS’s telephone performance measures provide useful information on some aspects of service to taxpayers, the measures miss other aspects. For example: None of the measures currently reflects how many callers hung up while listening to the menu they hear when calling IRS—although IRS has that data. As of March 16, 2002, according to IRS data, over 7.2 million callers had hung up while listening to the menu this filing season—almost three times the number that hung up last year. IRS officials said it is unclear why more taxpayers were hanging up. However, when IRS streamlined the menu in mid-February, it noted a decline in the hang-up rate, which may indicate that taxpayers were frustrated or confused by the menu. Although IRS assists many callers through automated services—almost 18.2 million calls were answered by automation on the three main assistance lines and the TeleTax line as of March 2, 2002—IRS’s measures deal only with the service provided by assistors. IRS discontinued measuring the level of service provided through automation because this year’s data are not comparable to 2001. Contrary to what its name implies, the CSR level of service measure does not reflect only those calls handled by assistors. Some calls handled through automation are counted as having been answered in computing this measure.
Because it includes calls answered through automation, the CSR level of service measure may be overestimating the rate at which assistors are responding to taxpayers. Because we recognize that it is important to limit the number of performance measures to the vital few, we are not recommending that IRS take any action at this time with respect to the matters discussed above. At your request, Mr. Chairman, we are reviewing IRS’s filing season performance measures, including its telephone measures, and plan to issue a report later this year on our results. Taxpayers who visit any one of IRS’s 400-plus Taxpayer Assistance Centers (TACs) can make payments, obtain tax forms and publications, get answers to tax law questions, and get help resolving tax account issues and preparing tax returns. In the past, IRS has used its employees to measure the accuracy of tax law assistance provided by its TACs. In fiscal year 2002, IRS began using contract reviewers in lieu of its employees. Although the accuracy rate reported through mid-March 2002 is encouragingly high, the use of different measurement methodologies precludes valid comparison to the low accuracy rates reported by IRS and the Treasury Inspector General for Tax Administration (TIGTA) in 2000 and 2001, respectively. IRS had planned to begin measuring the accuracy of account and return-preparation assistance in January 2002, but those plans have been delayed until June. Contract reviewers, posing as taxpayers, reported making 388 random visits to TACs between January 1 and March 15, 2002. During each visit, the reviewers asked two tax law questions from the slate of four questions that IRS developed for use this year. One question and a related scenario were developed from each of four tax law categories that most prompted taxpayers to call IRS’s toll-free assistance lines in fiscal year 2001. The contract reviewers reported receiving accurate responses for 652 of the 776 questions, or 84 percent.
Although this could indicate that accuracy is improving compared to the low accuracy rates reported by IRS in 2000 (24 percent) and TIGTA in 2001 (51 percent), the use of different accuracy measurement methods in the last three filing seasons does not afford a valid basis for comparison. Although the results in each of the 3 years were based on visits to TACs by persons posing as taxpayers, there were differences in such things as the questions the persons asked, the number of weeks covered by the reviews, and the number of sites visited and how they were selected. IRS had planned to begin measuring the accuracy of account- and return-preparation services provided by TACs in January 2002. However, according to field assistance officials, staffing of eight new positions for doing these reviews was initially delayed by an oversight in the announcement process and then by a hiring freeze. Officials now expect to fill the eight positions by June 2002, which, they believe, will still allow time to complete enough quality reviews to establish meaningful fiscal year 2002 baselines for both measures. According to the Director, Field Assistance, the new staff would first complete post-reviews of returns prepared during the filing season. Because most account assistance occurs after the filing season, they would then begin reviewing the accuracy of account assistance provided over the remainder of the year. Mr. Chairman, that concludes our statement. We would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. As noted earlier, despite some significant improvements in telephone service, the customer service representative (CSR) level of service as of March 16, 2002, was lower than at the same point in time last year.
The week-to-week comparisons in figure 1 show that CSR level of service during the first 6 weeks of this filing season was significantly better than or about the same as during the first 6 weeks of the 2001 filing season but was significantly worse during the next 3 weeks. In the following 2 weeks, CSR level of service returned to levels comparable to last year’s performance.
This testimony discusses the Internal Revenue Service's (IRS) fiscal year 2003 budget request and IRS's performance during the 2002 tax filing season. GAO found that IRS's plans for hiring and redirecting staff may be optimistic because budgets are prepared so far in advance of the fiscal year involved. IRS assumed (1) labor and nonlabor savings of 2,287 staff years and $157.5 million and (2) additional savings of $38.5 million from better business practices. IRS's justification does not always adequately link the resources being requested and the agency's performance goals. Although IRS provided adequate support to justify the $450 million request for its multiyear capital account for business systems modernization, it did not adequately support $1.63 billion of the $1.68 billion requested for its information systems. In the area of agency performance, GAO found that IRS has generally processed returns smoothly and seen continued growth in electronic filing. The one exception to smooth processing has been the large number of errors related to the rate reduction credit. IRS has had to correct millions of returns due to the credit, and taxpayers' calls about the credit have greatly increased the demand on IRS's toll-free assistance lines. IRS's performance measures provide useful information to assess its success in assisting taxpayers. However, some measures of telephone service miss important aspects of the activity being measured, and plans to begin measuring some important aspects of IRS's walk-in service have been delayed.
The North Atlantic Treaty was signed on April 4, 1949, by 12 European and North American countries to take measures against the emerging threat the Soviet Union posed to the democracies of Western Europe. Of indefinite duration, the treaty created a political framework for an international alliance obligating its members to prevent or repel aggression, should it occur against one or more treaty countries. Article 10 of the treaty provides for the possibility of accession by any other European state in a position to further the principles of the treaty upon the unanimous agreement of the current members; it contains no explicit criteria an aspiring member must meet to join NATO. The PfP program was a U.S. initiative launched at the January 1994 NATO summit in Brussels as a way for the alliance to engage the former members of the Warsaw Pact and other former communist states in Central and Eastern Europe. The objectives of the partnership, stated in NATO’s Partnership Framework Document, are to (1) facilitate transparency in national defense planning and budgeting processes; (2) ensure democratic control of defense forces; (3) maintain the capability and readiness to contribute to crisis response operations under the United Nations and other international organizations; (4) develop cooperative military relations with NATO for the purposes of joint planning, training, and exercises for peacekeeping; search and rescue; and humanitarian operations; and (5) develop forces that are better able to operate with NATO members. NATO also uses PfP to support countries interested in NATO membership. In July 1994, the United States launched the Warsaw Initiative to support the objectives of the Partnership. 
According to joint DOD and State Department guidance, the objectives of the Initiative are to (1) facilitate the participation of partner states in exercises and programs with NATO countries, (2) promote the ability of partner forces to operate with NATO, (3) support efforts to increase defense and military cooperation with Partnership partners, and (4) develop strong candidates for membership in NATO. The Initiative is jointly funded and administered by DOD and the State Department. A total of 29 nations have joined the Partnership, and 3 have since joined NATO. The partner states range from mature free market democracies in the European Union, such as Finland and Sweden, which have relatively advanced military technologies and neither receive nor need Warsaw Initiative assistance, to autocratic command economies with outdated military structures such as Uzbekistan, and others such as Georgia that are greatly dependent on Western security assistance for their reform efforts. (Fig. 1 shows the overlapping memberships of NATO, the EU, MAP, and PfP.) Each partner participates in activities to the extent it desires and assembles a unique annual work program by selecting from a variety of activities listed in NATO’s annual partnership work program, a compendium of activities offered by donor countries. For those states that have formally expressed their interest in joining the Alliance, NATO has developed a Membership Action Plan to help them become better candidates. (MAP countries are identified in figure 1.) The MAP builds upon Partnership activities, helps ready these states for the full range of NATO missions, and requires additional planning by the partner country and review by NATO. Countries provide assistance to partner states primarily through bilateral arrangements in order to meet the requirements identified in the work program. 
Since the beginning of the alliance in 1949, NATO has held out the prospect of membership to other nations as changing political and strategic circumstances warranted. NATO has expanded on four occasions since 1949, adding seven new European members. The first three expansions took place during times of confrontation with the Communist bloc, particularly the Soviet Union, and were undertaken to meet pressing strategic and security needs. A significantly different strategic environment marked the fourth and latest expansion, wherein NATO’s goal was to extend stability eastward into the political vacuum left after the collapse of the Soviet Union. (Fig. 2 shows the countries that have joined NATO since 1949, as well as MAP and PfP members.) In 1952, Turkey and Greece joined NATO for strategic reasons; the Korean War was at its height, and the United States wished to shore up NATO’s southern flank to forestall similar Communist military action in Europe. West Germany acceded in 1955, after it agreed to maintain large NATO forces on its territory and to place its national army within NATO’s integrated command structure. Spain joined the alliance in 1982 at NATO’s invitation. NATO wanted to gain better access to Spain’s air and naval bases, while the newly democratized Spain sought membership as a means to better its chances to join the European Economic Community. In 1991, NATO redefined its strategic concept to reflect the post-Cold War geopolitical landscape and to pursue greater cooperation with its former adversaries to the east. NATO committed itself in January 1994 to enlarging its membership to include the newly democratic states of the former Communist bloc. In 1999, the Czech Republic, Hungary, and Poland joined NATO in fulfillment of this commitment. 
Between 1994 and 2000, the Warsaw Initiative provided assistance worth about $590 million to 22 partner states to support equipment grants, training, exercises, information technology, and other activities to make these countries’ militaries better able to operate with NATO and contribute to NATO’s missions. Moreover, a large portion of this funding was allocated to five programs, and about 70 percent has been devoted to the 12 partner nations that had formally declared an interest in joining NATO. In this same time period, the United States provided to the partner states additional security assistance totaling over $165 million outside the framework of the Warsaw Initiative but complementary to its objectives. About 90 percent of the approximately $590 million in Warsaw Initiative funds ($530 million) has funded five programs. The largest program provides nonlethal military equipment and training. The other programs support military exercises, information technology programs, a defense education institute, and a defense resource management system. See table 1 for the costs of these five programs. Appendix I contains details on other Warsaw Initiative interoperability programs. Funding for military equipment and training was used to provide communications, search and rescue, mountaineering, and mapping equipment, along with field gear, air defense radar systems, and computers, as well as training in English language, noncommissioned officer development, vehicle maintenance and logistics, and other areas. According to State Department documents and a DOD-sponsored study, this equipment and training have directly contributed to partner country participation in NATO-led peacekeeping operations in the Balkans. 
For example, this funding provided communication equipment to Romania for engineering units in the NATO-led Stabilization Force (SFOR) in Bosnia; air traffic management systems to Hungary, which supported Operation Allied Force; fuel, supplies, and construction assistance to Ukraine to support the initial deployment of a battalion for peacekeeping duties in the Kosovo Force/International Security Force (KFOR) in Kosovo; and an automated logistics system to Poland to help deploy its military units in peacekeeping operations. Of all the interoperability programs supported by the Warsaw Initiative, military exercises were typically cited in Defense-sponsored studies and by U.S. and international officials as the most useful of partnership activities. Exercises range from search and rescue simulations to joint multinational amphibious landing exercises. Exercises have grown in complexity and sophistication as the skills and experiences of partner participants have grown. For example, the United States annually conducts Exercise Combined Endeavor. In the 1995 exercise, 10 countries participated in a demonstration of the use of common communications equipment. In the 2000 exercise, 35 countries participated in the identification, testing, and documentation of communications interoperability between NATO and PfP communication networks. The Partnership Information Management System (PIMS) created an information management and communications system among Partnership members that stores and disseminates all types of data relevant to the PfP community. The system has been used to support military exercises, civil-military emergency planning, military medical education, and environmental security activities, and it provides e-mail and other basic information management capabilities. 
The system currently links 18 partner capitals and NATO and is augmented by networks that include ministries of defense, national defense academies, other international organizations, and U.S. and NATO military commands. The Marshall Center is a jointly funded U.S.-German defense educational institution that focuses on the resolution of security issues involving Atlantic, European, and Eurasian countries. The Center offers postgraduate studies, conferences, research programs, foreign area studies, and language courses to civilian and military professionals from more than 40 countries. Warsaw Initiative funding supports the Marshall Center’s annual conferences for PfP members on topics ranging from defense planning and management to civil oversight of the military. DOD’s Defense Resource Management program creates models for individual partner countries to help restructure their militaries. Initially, DOD conducts a 6-month study in the subject country to help it develop a rational defense program linked to strategic assessments and budget constraints. Thereafter, the Department conducts short follow-up visits to provide technical assistance and help implement a defense resource management system. The objectives of the program include exposure of partner countries to defense management systems similar to those of NATO members. The program also aims to help partner states’ civilian officials assert control over their military structures by making defense management more transparent. About 70 percent of the Warsaw Initiative’s approximately $590 million in assistance has been provided to the 12 partner states that have joined or declared their intention to join NATO. Approximately 26 percent of all Warsaw Initiative assistance between 1994 and 2000, or $153 million, went to Poland, Hungary, and the Czech Republic—the three former Warsaw Pact states that joined NATO in 1999. 
Almost 44 percent of total Warsaw Initiative funding, or $258 million, has gone to the nine MAP states of Albania, Bulgaria, Estonia, Latvia, Lithuania, Macedonia, Romania, Slovakia, and Slovenia. The remaining funding, $178 million, has supported Partnership activities in Croatia and countries that were once part of the former Soviet Union—Belarus, Georgia, Kazakhstan, Kyrgyzstan, Moldova, Russia, Turkmenistan, Ukraine, Uzbekistan—as well as certain U.S. costs associated with the program. Figure 3 shows the distribution of Warsaw Initiative funding. In addition, between 1994 and 2000, the United States provided to the partner states military assistance totaling over $165 million outside the framework of the Warsaw Initiative but complementary to its objectives. This funding was distributed through three Department of State and DOD programs that predate the Warsaw Initiative: the International Military Education and Training Program, Cooperative Threat Reduction Defense and Military Contacts Program, and the U.S. European Command’s Joint Contact Team Program. Although these programs were not designed to implement Warsaw Initiative objectives, they provide additional training to partner militaries, facilitate military contacts, and promote closer relationships with NATO. Appendix II provides details on these programs. U.S. and international officials and DOD-sponsored studies provide consistent and reinforcing views that Partnership and Warsaw Initiative programs have had important results and benefits. U.S. and NATO military commanders and other international officials have concluded that Warsaw Initiative and PfP programs have enhanced the capabilities of partner countries to participate effectively in NATO-led peace operations in the Balkans and have improved their ability to operate with NATO, thus making them better candidates for membership in the alliance. 
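The funding distribution described above can be sanity-checked with a short sketch (dollar figures in millions, rounded as reported in the text; the category labels are illustrative shorthand, not GAO's own):

```python
# Approximate Warsaw Initiative allocations, 1994-2000, in millions of
# dollars, as reported in the text (labels are illustrative shorthand).
allocations = {
    "Poland, Hungary, Czech Republic (joined NATO in 1999)": 153,
    "nine MAP states": 258,
    "other partners and U.S. program costs": 178,
}

total = sum(allocations.values())  # close to the $590 million cited

# Each category's share of total funding, in percent.
shares = {name: 100 * amount / total for name, amount in allocations.items()}
for name, share in shares.items():
    print(f"{name}: {share:.0f}% of ${total} million")
```

Rounded, the shares come out to about 26 percent, 44 percent, and 30 percent, consistent with the percentages cited in the text; the first two categories together account for the roughly 70 percent devoted to states that joined or sought to join NATO.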
Warsaw Initiative funding has directly supported the creation of seven multinational peacekeeping units composed of NATO and partner state troops, some of which can be or have been deployed to NATO-led peace operations in the Balkans. According to representatives of the three newest NATO member states, PfP and Warsaw Initiative assistance was invaluable to their preparation for joining NATO. Our cost analysis, along with the DOD-sponsored studies, reinforced these conclusions by showing that most Warsaw Initiative funding is associated with effective programs. U.S. and international officials noted that the growing contribution of Partner states’ troops and other assistance to NATO-led peacekeeping operations in the Balkans is the most significant indicator of the effectiveness of U.S. and NATO PfP programs. Between 1995 and 1999, NATO established three peacekeeping missions (two long-term and one short-term) with partner state military participation. The long-term missions are the Implementation Force (IFOR) in Bosnia and Herzegovina and Croatia, now known as SFOR, and KFOR in Kosovo, Macedonia, and Albania. In 1999, NATO also established the short-term Albania Force during the NATO bombing campaign against Serbia and Montenegro to assist and coordinate humanitarian efforts. As shown in figure 4, partner states’ contributions of troops to these missions rose from about 5,800 in 1996 to more than 12,800 in 1999 (11 percent and 15 percent of the total force, respectively). Twenty partner states contributed troops to one or more of these missions; 9 partners contributed a battalion or more. Moreover, NATO heads of government stated in the 1997 Madrid Declaration that without the experiences and assistance PfP had provided, the participation of partner forces in SFOR and IFOR would not have been as effective and efficient. 
Several SFOR and KFOR commanders and other NATO officers also noted that PfP activities, particularly exercises with NATO troops, were effective in preparing partner units to operate with NATO forces in an integrated command structure. One NATO official stated that every soldier a partner contributes to SFOR and KFOR means that NATO will not have to send an additional NATO or U.S. soldier to perform that function. According to DOD officials and documents, partner states also provided logistical assistance for the 1999 NATO bombing campaign against Serbia and Montenegro. The Czech Republic, Hungary, and Poland offered or provided basing rights for NATO aircraft. Along with Romania and Bulgaria, the three newest NATO members permitted allied aircraft to transit their airspace. Romania also helped NATO commanders direct the bombing campaign by providing NATO air controllers access to its NATO-compatible radar coverage system, which was procured through the Warsaw Initiative. U.S. officials and documents also indicate that Warsaw Initiative programs have helped create or support seven international peacekeeping units of battalion size or larger involving a total of 5 NATO countries (including the 2 former partners Poland and Hungary) and 16 partner countries. In 1996, the Congress declared that some of these units should receive appropriate support from the United States because they could make important contributions to European peace and security and could assist participant countries in preparing to assume the responsibilities of possible NATO membership. Two of these units have been deployed to the Balkans. See table 2 for details on the composition of these units and the U.S. assistance they have received. According to the NATO delegations of the three newest NATO members, PfP assistance, of which the United States was their largest donor through the Warsaw Initiative, was invaluable to their preparation for joining NATO. 
In particular, PfP exercises, equipment grants, and exposure to western military doctrine and practice boosted the ability of their forces to operate with NATO. Members from all three delegations affirmed the value of Partnership for Peace and Warsaw Initiative support in making them better candidates for NATO membership. In particular, they cited the exposure to NATO procedures, operations, and command structures they received through PfP exercises and programs; the professional and personal contacts that they developed to build a defense establishment better able to operate with NATO; and exercise experiences and equipment grants that improved the ability of their military forces to operate with NATO. The Czech delegation noted that its experiences in PfP activities helped expose the conflicts between the prerequisites for being a successful NATO ally and the practical difficulties of achieving those prerequisites, given the country’s political and economic realities. For example, PfP activities helped the Czech Republic (1) reconcile the theoretical need for public support for accession at a time when political support within the government was relatively low and (2) plan a defense strategy and budget that met the demands of NATO interoperability goals and spending targets in a constrained budget environment. In 2000, DOD commissioned two studies to analyze the objectives, activities, and accomplishments of Warsaw Initiative programs and identify the lessons learned from program implementation and results. The studies, conducted by DFI International, reviewed programs that represented $409 million of the approximately $590 million in Warsaw Initiative funding. By combining the cost data that we collected from DOD and the State Department with the results of these studies, we determined that, in aggregate, about $367 million, or 90 percent, of the funding associated with the programs examined was deemed effective or successful in promoting the objectives of the Warsaw Initiative. 
The first study, which focused on the partner states of Central and Eastern Europe, showed that 91 percent of the resources associated with the programs examined were exceptionally or significantly effective. Figure 5 shows in greater detail the findings of this study. The second study, which focused on the Central Asian and Caucasus partner states along with Russia, Ukraine, and Moldova, showed that 67 percent of the resources associated with the programs examined were successful or partially successful. Figure 6 shows in greater detail the findings of this study. In addition, both studies concluded that the Warsaw Initiative programs need to be better focused on U.S. strategic and regional objectives and to better take into account the capacities of the recipient states to absorb or apply the programs. For example, the second study noted that certain programs emphasizing NATO interoperability are not well suited for the Central Asian states. To prepare our overview of previous NATO accessions, we reviewed historical texts, and for the most recent accession, interviewed numerous U.S. and international officials and scholars. We also obtained U.S. and NATO documents on the accession process. To describe the cost and contents of Warsaw Initiative programs, we obtained comprehensive cost and program data by recipient country and year from DOD and State. We interviewed DOD and State Department country desk officers, program managers, and fiscal officers. We obtained historic budget and program documents from DOD and State. For information we were unable to obtain from DOD, we drew on our previous reports and workpapers on Partnership for Peace. For fiscal years 1994 and 1995, we extrapolated from planning documents to approximate actual obligations by recipient country. In cases where costs were not readily attributable to a specific country, we applied decision rules for country allocation generated in agreement with Defense officials. 
To assess the outcomes of Warsaw Initiative programs in support of Partnership for Peace, we synthesized information we obtained from numerous U.S. and international officials and scholars and historical information developed for our previous reviews of NATO-led peacekeeping operations in the Balkans. U.S. officials included cognizant officials from the Departments of Defense and State, members of the U.S. mission to NATO, and the Defense Security Cooperation Agency. We also interviewed and obtained documents from U.S. military officers at the U.S. European Command in Stuttgart, Germany, and from the U.S. National Military Representative to the Supreme Headquarters, Allied Powers Europe, in Mons, Belgium. International officials included members of the Czech, Hungarian, Swedish, and Polish delegations to NATO; NATO’s International Staff in Brussels, Belgium; and the director of the Partnership Coordination Cell in Mons, Belgium. We also reviewed the results of two studies the Department of Defense commissioned in 2000 to analyze the objectives, activities, and accomplishments of Warsaw Initiative programs and identify the lessons learned from program implementation and results. One study, “Assessing the Practical Impact of the Warsaw Initiative,” examined 11 of the largest Defense and State-funded Warsaw Initiative programs in Albania, Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Macedonia, Poland, Romania, Slovakia, and Slovenia. The other study, “Department of Defense Engagement of the New Independent States: Developing the Warsaw Initiative and Minimizing Risks in the Russia Relationship,” examined all DOD-sponsored Warsaw Initiative programs and other related DOD assistance activities in the nine New Independent States of Belarus, Russia, Ukraine, Moldova, Georgia, Kazakhstan, Kyrgyzstan, Turkmenistan, and Uzbekistan. 
This study also looked at DOD-sponsored security activities in three other New Independent States: the partner states of Armenia and Azerbaijan, which did not receive Warsaw Initiative assistance between 1994 and 2000; and Tajikistan, which is not a PfP member. Both studies evaluated the effectiveness of programs in terms of objectives associated with the Warsaw Initiative and the Partnership for Peace. The principal analysts of these studies briefed us on their methodology. This methodology included the development of measures of effectiveness and other metrics to assess the programs. To implement this methodology, the analysts collected information from DOD and State Department officials, including desk officers, Defense Security Cooperation Agency officials, and U.S. embassy personnel from partner countries. In addition to briefing us on its methodology and results, DFI International provided us with its detailed results on each program for each country, along with the specific criteria used in evaluating each program. The Department of State and DOD generally concurred with the report’s major findings, and State complimented GAO’s analysis and methodology. In addition, both DOD and State offered technical and editorial suggestions, which we have incorporated where appropriate. The State Department’s written comments are presented in appendix III; DOD provided oral comments. We are sending copies of this report to other interested congressional committees. We will also send copies to the Secretary of State and the Secretary of Defense. We will also make copies available to others upon request. Please contact me at (202) 512-8979 if you or your staff have any questions about this report. Key contributors to this assignment were F. James Shafer, Muriel J. Forster, B. Patrick Hickey, and Lynn Cothern. During fiscal years 1994 through 2000, the Department of Defense (DOD) supported numerous U.S. interoperability programs in Partnership for Peace (PfP) nations. 
Among the largest dollar programs are the following activities. SIMNET ($9.0 million): SIMNET is an exercise simulation network focused on peace support operations and scenarios. It is part of a U.S.-launched effort to link defense education institutions to increase the level of sophistication of military exercises and cooperative defense education. Commander in Chief Conferences and Other Expenses ($13.4 million): These two program categories combined provide funding to cover costs of hosting PfP-related conferences or sending U.S. or partner personnel to attend PfP-related events either in the United States or abroad. Command and Control (C4) Studies ($6.1 million): C4 studies analyze and document command and control interoperability of the subject country’s forces with U.S. forces for bilateral or multilateral contingencies. The purpose of the studies is to understand the country’s capabilities for NATO interoperability and identify useful recommendations for improvement. Transportation for Excess Defense Articles ($4.5 million): DOD sells or transfers articles no longer needed by U.S. armed forces to partnership countries. Warsaw Initiative funding can be used to support the costs of transporting this equipment. U.S. Army Corps of Engineers Exchanges and Assessments ($3.6 million): The Army Corps of Engineers conducts information exchanges and assessments in Partner countries on environmental and infrastructure topics, such as hazardous waste and material storage and transportation, disaster relief, and contamination control and prevention at military bases. Civil Military Emergency Planning ($3.4 million): This initiative aims to enhance the capabilities of partner states to work with each other, with neighboring nations, and with the international community to prepare for natural and technological disasters within any partner nation. 
Workshops and exercises are conducted in country by traveling contact teams or through exchanges of military personnel between units of the U.S. National Guard and comparable units of partner armed forces. Regional Airspace Initiative ($3.3 million): This program seeks to help develop civil and military airspace regimes that are fully interoperable with West European civilian airspace organizations. Warsaw Initiative funds are used to study partner requirements for building and operating an effective air sovereignty system. State Department foreign military financing funds may be used to procure the hardware necessary to implement the system. Navigational Aids Program ($3.2 million): This initiative supports assessments that document the interoperability of navigational aids and landing systems of partner states with western military forces under various contingencies. The assessments provide recommendations for modernization, with a focus on interoperability. Logistics Exchanges ($2.5 million): These exchanges consist of in-country workshops that focus on improving partners’ understanding of NATO’s collective logistics doctrine and logistics support requirements of NATO operations and of hosting NATO forces. National Military Command Centers ($1.4 million): This initiative aims to provide modern, centralized command center support to military and civil crises and disaster management. Its goal is to establish common command and control information systems throughout a region. Partnership for Peace Consortium ($1.1 million): This program primarily supports the annual conference costs of the Consortium, which includes representatives from 188 military academies, universities, and defense study institutions. Radar Interoperability and Lifecycle Upgrade Study ($1.1 million): More than 600 radar systems in 14 countries remained from the Warsaw Pact military structure. 
This study evaluates the utility and NATO compatibility of those radar systems for integration into the evolving airspace systems in the partner states. Defense Resource Planning Exchanges ($1.0 million): This program consists of small group workshops that provide an introduction to and explanation of DOD’s resource management system to encourage partners to consider U.S. concepts that could be used to improve their resource management. National Guard ($1.0 million): In 1999, the Air National Guard supported the Partnership for Peace program largely through military-to-military contacts. This 1-year Warsaw Initiative funding supported National Guard participation in flood preparedness workshops, exchanges for engineering platoons, air exercise planning, field training, medical training, and other activities. The Departments of State and Defense provided additional military assistance to partner states totaling more than $165 million between 1994 and 2000. This funding was distributed through three programs with objectives that complement the objectives of the Partnership for Peace and the Warsaw Initiative. These programs are: The International Military Education and Training Program (IMET) ($72.4 million): This program provides military education and training on a grant basis to allied and friendly nations’ militaries to (1) increase their exposure to the proper role of the military in a democratic society, including human rights issues, and to U.S. professional military education; and (2) help to develop the capability to teach English. The State Department funds IMET through its Foreign Operations Appropriation, and DOD implements the program through the Defense Security Cooperation Agency. IMET complements or builds on Warsaw Initiative programs by offering more advanced training to partner state defense officials, including English language training, defense resource management, and instruction in doctrines common to the officials of NATO countries. 
The Cooperative Threat Reduction (CTR) Defense and Military Contacts Program ($40.4 million): The United States launched the Cooperative Threat Reduction initiative in 1991 to help the nations of the former Soviet Union eliminate, control, and prevent the proliferation of weapons of mass destruction. This program has assisted CTR efforts by supporting defense and military contacts between the United States and Belarus, Georgia, Kazakhstan, Kyrgyzstan, Moldova, Russia, Turkmenistan, Ukraine, and Uzbekistan (Belarus and Turkmenistan are currently ineligible for CTR funding). The objectives of these efforts complement the objectives of Partnership for Peace and the Warsaw Initiative by expanding contacts between defense establishments. The Joint Contact Team Program ($52.9 million): This program supports the deployment of small teams of military personnel to operate in a number of partner states and other countries within the U.S. European Command’s area of responsibility. The teams’ mission is to promote stability, democratization, and closer relationships with NATO. They exchange ideas and demonstrate operational methods to host nation military personnel and assist their militaries in the transition to democracies with free market economies. They do not conduct formal training or supply equipment. According to a U.S. European command document, 90 percent of the teams’ efforts support partner countries’ PfP programs.
After the collapse of the former Soviet Union and the Warsaw Pact in 1991, North Atlantic Treaty Organization (NATO) allies and the United States sought new ways to cooperate with the political and military leadership of their former adversaries. In January 1994, NATO established the Partnership for Peace to increase defense cooperation with former Warsaw Pact members and other former communist states in Central and Eastern Europe. Supported by the United States through the Warsaw Initiative, the Partnership plays a key role in developing the capabilities of those states and reforming their defense establishments. Given the key role the Partnership for Peace has played in the transformation of NATO's relationship with these states, the significant U.S. involvement and investment in this program through the Warsaw Initiative, and the impending debate on potential NATO members drawn from the Partnership, this report (1) provides a historical overview of previous NATO accessions, (2) describes the cost and content of the Warsaw Initiative, and (3) describes the results and benefits of Warsaw Initiative programs.
Title XIX of the Social Security Act establishes Medicaid as a joint federal-state program to finance health care for certain low-income, aged, or disabled individuals. Medicaid is an entitlement program, under which the federal government is obligated to pay its share of expenditures for covered services provided to eligible individuals under each state’s federally approved Medicaid plan. States operate their Medicaid programs by paying qualified health care providers for a range of covered services provided to eligible beneficiaries and then seeking reimbursement for the federal share of those payments. Although the federal government establishes broad federal requirements for the Medicaid program, states can elect to cover a range of optional populations and benefits. CMS, within HHS, is responsible for administering legislation and regulations affecting the Medicaid program, including disbursement of federal matching funds. CMS also provides guidelines, technical assistance, and periodic assessments of state Medicaid programs. Title XIX of the Social Security Act allows flexibility in the states’ Medicaid plans. Guidelines established by federal statutes, regulations, and policies allow each state some flexibility to (1) broaden its eligibility standards; (2) determine the type, amount, duration, and scope of services; (3) set the rate of payment for services; and (4) administer its own program, including enrollment of providers and beneficiaries, processing and monitoring of medical claims, payment of claims, and maintenance of fraud prevention programs.

Controlled Substances Act

The Controlled Substances Act of 1970 (CSA) established a classification structure for certain drugs and chemicals used in drug manufacturing. Controlled substances are classified into five schedules on the basis of their currently accepted medical use and potential for abuse and dependence. 
Schedule I drugs—including heroin, marijuana, and hallucinogens such as LSD—have a high potential for abuse, no currently accepted medical uses in treatment in the United States, and a lack of accepted safety for use under medical supervision. Schedule II drugs—including methylphenidate (Ritalin) and opiates such as morphine and oxycodone—have a high potential for abuse, and abuse may lead to severe psychological or physical dependence, but they have currently accepted medical uses. Drugs on Schedules III through V have medical uses and successively lower potentials for abuse and dependence. Schedule III drugs include anabolic steroids, codeine, hydrocodone in combination with aspirin or acetaminophen, and some barbiturates. Schedule IV contains such drugs as the anti-anxiety medications diazepam (Valium) and alprazolam (Xanax). Schedule V includes preparations such as cough syrups with codeine. All drugs but those in Schedule I are legally available to the public with a prescription. CSA mandates that DEA establish a closed system of control for manufacturing, distributing, and dispensing controlled substances. Any person who manufactures, dispenses, imports, exports, or conducts research with controlled substances must register with DEA (unless exempt), keep track of all stocks of controlled substances, and maintain records to account for all controlled substances received, distributed, or otherwise disposed of. Although all registrants, including pharmacies, are required to maintain records of controlled substance transactions, only manufacturers and distributors are required to report their Schedule I and II drug and Schedule III narcotic drug transactions, including sales to the retail level, to DEA. The data provided to DEA are available for use in investigations of illegal diversions. The act does not require pharmacies to report dispensing information at the patient level to DEA.
We found tens of thousands of Medicaid beneficiaries and providers involved in potentially fraudulent, wasteful, and abusive purchases of controlled substances through the Medicaid program in the selected states during fiscal years 2006 and 2007. The fraud, waste, and abuse activities that we examined in our analysis include the following: beneficiaries acquiring addictive medications from multiple medical practitioners, known as doctor shopping, to feed their habits, sell on the street, or both; medical practitioners and pharmacies barred from receiving federal funds nevertheless writing and filling Medicaid prescriptions; and Medicaid funds paying for prescriptions for deceased beneficiaries and for prescriptions that pharmacies attributed to deceased doctors. Approximately 65,000 Medicaid beneficiaries in the five selected states visited six or more doctors to acquire prescriptions for the same type of controlled substances during fiscal years 2006 and 2007. These individuals incurred approximately $63 million in Medicaid costs for these drugs, which were painkillers, sedatives, and stimulants. In some cases, beneficiaries may have justifiable reasons for receiving prescriptions from multiple medical practitioners, such as visiting specialists or several doctors in the same medical group. However, our analysis of Medicaid claims found that at least 400 of them visited 21 to 112 medical practitioners and up to 46 different pharmacies for the same controlled substances. In these situations, Medicaid beneficiaries were likely seeing several medical practitioners to support and disguise their addiction or to obtain drugs to sell fraudulently. Our analysis understates the number of instances and dollar amounts involved in the potential abuse related to multiple medical practitioners. First, the total we found does not include related costs associated with obtaining prescriptions, such as visits to the doctor’s office and emergency room.
Second, the selected states did not identify the prescriber for many Medicaid claims submitted to CMS. Without such identification, we could not always determine, and thus include, the number of unique doctors for each beneficiary who received a prescription. Third, our analysis did not focus on all controlled substances, but instead targeted 10 types of the most frequently abused controlled substances, as shown in table 1. Table 2 shows how many beneficiaries received controlled substances and the number of medical practitioners who prescribed them the same type of drug. We found that 65 medical practitioners and pharmacies in the selected states had been barred from federal contracts, excluded from federal health care programs including Medicaid, or both, at the time they wrote or filled Medicaid prescriptions for controlled substances during fiscal years 2006 and 2007. Nevertheless, Medicaid approved the claims at a cost of approximately $2.3 million. The offenses that led to their exclusion from federal health programs included Medicaid fraud and illegal diversion of controlled substances. Our analysis understates the total number of excluded providers because the selected states either did not identify the prescribing medical practitioner for many Medicaid claims (i.e., the field was blank) or did not provide the taxpayer identification number for the practitioner, which was necessary to determine whether a provider was banned. The banned providers we identified had been placed on one or both of the following exclusion lists, which Medicaid officials must check before paying for a prescription claim: the List of Excluded Individuals/Entities (LEIE), managed by HHS, and the Excluded Parties List System (EPLS), managed by GSA.
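A pre-payment screen against such exclusion lists can be sketched in a few lines. This is a simplified illustration, not the states' actual MMIS logic: the claim fields, identifier format, and function names are assumptions, and real matching also involves taxpayer identification numbers and name-based checks against the published LEIE and EPLS file layouts.

```python
# Illustrative pre-payment exclusion screen. Identifiers and field names
# are made-up examples, not real LEIE/EPLS data.

def build_exclusion_index(leie_ids, epls_ids):
    """Combine both exclusion lists into one lookup set of provider IDs."""
    return set(leie_ids) | set(epls_ids)

def screen_claim(claim, excluded_ids):
    """Return a list of reasons to deny the claim; empty if it passes."""
    reasons = []
    if claim["prescriber_id"] in excluded_ids:
        reasons.append("prescriber excluded/debarred")
    if claim["pharmacy_id"] in excluded_ids:
        reasons.append("pharmacy excluded/debarred")
    return reasons

excluded = build_exclusion_index(leie_ids={"TIN-111"}, epls_ids={"TIN-222"})
claim = {"prescriber_id": "TIN-111", "pharmacy_id": "TIN-999"}
print(screen_claim(claim, excluded))  # flags the excluded prescriber
```

In practice the combined lookup would be rebuilt whenever HHS or GSA publishes an updated list file, so that the edit always reflects current exclusions.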
The LEIE provides information on health care providers that are excluded from participation in Medicare, Medicaid, and other federal health care programs because of criminal convictions related to Medicare or state health programs or other major problems related to health care (e.g., patient abuse or neglect). The EPLS provides information on individuals or entities that are debarred, suspended, or otherwise excluded from participating in any other federal procurement or nonprocurement activity. Federal agencies can place individuals or entities on the GSA debarment list for a variety of reasons, including fraud, theft, bribery, and tax evasion. Our analysis of matching Medicaid claims in the selected states with SSA’s Death Master File (DMF) found that controlled substance prescription claims for over 1,800 beneficiaries were filled after they died. Even though the selected state programs assured us that beneficiaries were promptly removed from Medicaid following their deaths based on either SSA DMF matches or third-party information, these same state programs paid over $200,000 during fiscal years 2006 and 2007 for postdeath controlled substance prescription claims. In addition, our analysis found that Medicaid paid about $500,000 in claims based on controlled substance prescriptions “written” by over 1,200 doctors after they died. The extent to which these claims were paid because of fraud is not known. For example, in the course of our work, we found that certain nursing homes use long-term care pharmacies to fill prescriptions for drugs. One long-term care pharmacy dispensed controlled substances to over 50 beneficiaries after the dates of their deaths because the nursing homes did not notify the pharmacy of their deaths before delivery of the drugs.
The nursing homes that received the controlled substances, which included morphine, Demerol, and Fentanyl, were not allowed to return them because, according to DEA officials, CSA does not permit such action. Officials at two selected states said that unused controlled substances at nursing homes represent a waste of Medicaid funds and also pose a risk of diversion by nursing home staff. In fact, officials from one state said that certain nursing homes dispose of these controlled substances by flushing them “down the toilet,” which also poses environmental risks to our water supply. In addition to performing the aggregate-level analysis discussed above, we also performed in-depth investigations of 25 cases of fraudulent, improper, and abusive actions related to the prescribing and dispensing of controlled substances through the Medicaid program in the selected states. Table 3 shows a breakdown of the types of cases that we identified from our analysis and confirmed through our investigations. In the course of our investigation, as we pursued leads produced from our data mining, we also found two other types of fraudulent, improper, and abusive actions, as shown in table 4. As noted in table 4, we are highlighting six examples where a doctor’s DEA registration did not authorize the doctor to prescribe a particular schedule of controlled substance. Under CSA, controlled substances are classified into five schedules based on the extent to which the drugs have an accepted medical use and their potential for abuse and degree of psychological or physical dependence. Schedule II includes what are considered by DEA to be the most addictive and abused drugs that legally can be prescribed. Schedule V, meanwhile, covers those that are least likely to cause such problems. Each provider must obtain a valid registration from DEA that reflects the schedule(s) of controlled substances the provider is authorized to store, dispense, administer, or prescribe.
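The schedule-specific authorization just described can be illustrated with a small lookup. This is a hedged sketch: the registration records, DEA numbers, and drug-to-schedule mapping below are invented for the example and are not real DEA data.

```python
# Illustrative check that a prescriber's DEA registration covers the
# schedule of the drug being prescribed. All data here are hypothetical.

# Hypothetical drug-to-schedule mapping (schedule numbers per CSA).
DRUG_SCHEDULE = {"oxycodone": 2, "hydrocodone/apap": 3, "diazepam": 4}

# Schedules each (hypothetical) registrant is authorized to handle.
REGISTRATIONS = {"AB1234563": {2, 3, 4, 5}, "XY9876543": {3, 4, 5}}

def authorized(dea_number, drug):
    """True if the registrant may prescribe the drug's schedule."""
    schedules = REGISTRATIONS.get(dea_number)
    return schedules is not None and DRUG_SCHEDULE[drug] in schedules

print(authorized("XY9876543", "oxycodone"))  # False: not registered for Schedule II
print(authorized("AB1234563", "oxycodone"))  # True
```

A state Medicaid claims system with access to the DEA registrant database could run a check of this shape as a claims edit before payment.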
For example, if a physician wants the authority to prescribe Schedule II drugs, the physician must register and be granted authority by DEA to do so. As noted in table 4, we also found two cases where the physician prescribed controlled substances in excess of medical need. In one of these cases, our investigators found that the physician prescribed a controlled substance in a manner intended to circumvent Medicaid’s dosage limitations. In the other, the beneficiary sold excess controlled substances (in this case, painkillers). Table 5 summarizes 15 of the 25 cases we developed of fraudulent, improper, and abusive controlled substance activities in the Medicaid program. Appendix I provides details on the other 10 cases we examined. We have referred certain cases to DEA and the selected states for further criminal investigation. The following provides illustrative detailed information on four cases we investigated. Case 2: The beneficiary used the identity of an individual who was killed in 1980 to receive Medicaid benefits. According to a state Medicaid official, he originally applied for Medicaid assistance at a California county in January 2004. During the application process, the man provided a Social Security card to a county official. When the county verified the SSN with SSA, SSA responded that the SSN was not valid. The county enrolled the beneficiary into Medicaid provisionally for 90 days under the condition that the beneficiary resolve the SSN discrepancy with SSA within that time frame. Although the beneficiary never resolved the issue, he remained in the Medicaid program until April 2007. From 2004 through 2007, the Medicaid program paid for over $200,000 in medical services. This included at least $2,870 for controlled substances that he received from pharmacies. We attempted to locate the beneficiary but were unable to do so. Case 8: The physician prescribed controlled substances to the beneficiary after she died in February 2006.
The physician stated that the beneficiary had been dying of a terminal disease and became unable to come into the office to be examined. The physician stated that in instances where a patient is compliant and needs pain medication, physicians will sometimes prescribe it without requiring an examination. A pharmacy eventually informed the physician that the patient had died and that the patient’s spouse had continued to pick up her prescriptions for Methadone, Klonopin, and Xanax after her death. According to the pharmacy staff, the only reason they became aware of the situation was when an acquaintance of the spouse noticed him picking up prescriptions for a wife who had died months ago. The acquaintance informed the pharmacy staff of the situation. They subsequently contacted the prescribing physician. Since this incident, the pharmacy informed us, it has not filled another prescription for the deceased beneficiary. Case 9: A mother with a criminal history and a Ritalin addiction used her child as a means to doctor shop for Ritalin and other similar controlled stimulants used to treat ADHD. Although the child received overlapping prescriptions of methylphenidate and amphetamine medications during a 2-year period and was banned (along with his mother) from at least three medical practices, the Illinois Medicaid Program never placed the beneficiary on a restricted recipient program. Such a move would have restricted the child to a single primary care provider, a single pharmacy, or both, thus preventing him (and his mother) from doctor shopping. Over the course of 21 months, the Illinois Medicaid Program paid for 83 prescriptions of ADHD controlled stimulants for the beneficiary, which totaled approximately 90,000 mg and cost $6,600. Case 11: Claims indicated that a deceased physician “wrote” controlled substance prescriptions for several patients in the Houston area.
Upon further analysis, we discovered that the actual prescriptions were signed by a physician assistant who once worked under the supervision of the deceased physician. The pharmacy neglected to update its records and continued filling prescriptions under the name of the deceased prescriber. The physician assistant has never been a DEA registrant. The physician assistant told us that the supervising physicians always signed prescriptions for controlled substances. After informing her that we had copies of several Medicaid prescriptions that she had signed for Vicodin and lorazepam, the physician assistant ended the interview. Although states are primarily responsible for the fight against Medicaid fraud and abuse, CMS is responsible for overseeing state fraud and abuse control activities. CMS has provided limited guidance to the states on how to improve their control measures to prevent fraud and abuse of controlled substances in the Medicaid program. Thus, for the five state programs we reviewed, we found different levels of fraud prevention controls. For example, the Omnibus Budget Reconciliation Act of 1990 encourages states to establish a Drug Utilization Review (DUR) Program. The main emphasis of the program is to promote patient safety through an increased review and awareness of prescribed drugs. States receive increased federal funding if they design and install a point-of-sale electronic prescription claims management system to interact with their Medicaid Management Information Systems (MMIS), each state’s Medicaid computer system. Each state was given considerable flexibility in how to identify prescription problems, such as therapeutic duplication and overprescribing by providers, and how to use MMIS to prevent such problems. The level of screening, if any, states perform varies because CMS does not set minimum requirements for the types of reviews or edits that are to be conducted on controlled substances. 
Thus, one state requires prior approval when ADHD treatments like Ritalin and Adderall are prescribed outside age limitations, while another state had no such controlled substance requirement at the time of our review. Under the Deficit Reduction Act of 2005 (DRA), CMS is required to establish the Medicaid Integrity Program (MIP) to combat Medicaid fraud, waste, and abuse. DRA requires CMS to enter into contracts with Medicaid integrity contractors (MIC) to review provider actions, audit provider claims and identify overpayments, and conduct provider education. To date, CMS has awarded umbrella contracts to several contractors to perform the functions outlined above. According to CMS, these contractors cover 40 states, 5 territories, and the District of Columbia. CMS officials stated that CMS will award task orders to cover the rest of the country by the end of fiscal year 2009. CMS officials stated that MIC audits are currently under way in 19 states. CMS officials also stated that most of the MIP reviews will focus on Medicaid providers and that the state Medicaid programs will handle beneficiary fraud. Because the Medicaid program covers a full range of health care services and the prescription costs associated with controlled substances are relatively small, the extent to which MICs focus on controlled substances is likely to be relatively minimal. In addition, CMS is required to provide effective support and assistance to states in their efforts to combat Medicaid provider fraud and abuse. An effective fraud prevention framework includes preventive controls, detection and monitoring, and the use of lessons learned from investigations and prosecutions to design more effective preventive controls. Preventive controls: Fraud prevention is the most efficient and effective means to minimize fraud, waste, and abuse. Thus, controls that prevent fraudulent health care providers and individuals from entering the Medicaid program or submitting claims are the most important element in an effective fraud prevention program.
Effective fraud prevention controls require that, where appropriate, organizations enter into data-sharing arrangements with other organizations to validate enrollment and claims information. System edit checks (i.e., built-in electronic controls) are also crucial in identifying and rejecting fraudulent enrollment applications, fraudulent claims, or both before payments are disbursed. Some of the preventive controls and their limitations that we observed at the selected states include the following. Federal debarment and exclusion: Federal regulation requires states to ensure that no payments are made for any items or services furnished, ordered, or prescribed by an individual or entity that has been debarred from federal contracts, excluded from Medicare and Medicaid programs, or both. Officials from all five selected states said that they do not screen prescribing providers or pharmacies against the federal debarment list, also known as the EPLS. Further, officials from four states said that when a pharmacy claim is received, they do not check to see if the prescribing provider was excluded by HHS OIG from participating in the Medicaid program. DEA registration: DEA, on behalf of the Attorney General of the United States, is the agency primarily responsible for enforcing CSA. Federal regulations require physicians and pharmacies to be registered with DEA for the controlled substance schedule(s) that they are authorized to prescribe or dispense. According to DEA officials, DEA can take administrative action against a provider who violates CSA or its implementing regulations, such as revoking DEA registration. Legal action against the provider is also a possibility. Although DEA’s registrant database is available for purchase by the public through the Department of Commerce’s National Technical Information Service, none of the five state Medicaid offices obtained the database at the time of our study to determine if physicians are authorized to prescribe particular controlled substances.
Thus, the selected state Medicaid programs do not screen prescription claims for controlled substances to ensure that a health care provider is authorized to prescribe the particular drug(s). Further, DEA officials stated that pharmacies have corresponding responsibility to determine if a prescription is legitimate, which includes determining whether a health care provider is authorized to prescribe the particular schedule of controlled substance before filling a prescription. However, none of the pharmacy boards of the selected states said that this is a requirement they monitor. In fact, four pharmacy boards stated that the states only require that their pharmacists check to see if the DEA number on the prescription appears to be a valid DEA number, without verifying it with the DEA registration database. Duplicate enrollment: Medicaid officials in two states said that they did not have pre-enrollment checks in place to provide assurance that duplicate applications are not approved. One state does not even require the beneficiary to furnish an SSN when applying for the Medicaid program, thus making this fraud difficult to identify. In fact, during the period covered by our work, this state had 4,296 Medicaid beneficiaries who were enrolled without SSNs. These beneficiaries were approved for about 8,300 controlled substances claims, totaling $193,500. We did not investigate these beneficiaries for fraud or abuse. DUR: As mentioned earlier, states perform DURs and other controls during the prescription claims process to promote patient safety, reduce costs, and prevent fraud and abuse. The DURs include prospective screening and edits for potential inappropriate drug therapies, such as overutilization, drug-drug interaction, or therapeutic duplication. 
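A prospective DUR edit for therapeutic duplication of the kind described above can be sketched as follows. The field names, day-supply logic, and therapeutic class map are simplifying assumptions for illustration; production systems use standard drug classification files and more nuanced clinical rules.

```python
# Minimal prospective DUR edit: flag a new fill whose day-supply window
# overlaps an unexpired fill of the same therapeutic class. The class
# map and claim fields are illustrative assumptions.
from datetime import date, timedelta

DRUG_CLASS = {"oxycodone": "opioid", "hydromorphone": "opioid", "zolpidem": "hypnotic"}

def duplication_alert(new_fill, history):
    """Return drugs from prior fills of the same class still 'active' on the new fill date."""
    cls = DRUG_CLASS[new_fill["drug"]]
    overlaps = []
    for f in history:
        runs_out = f["fill_date"] + timedelta(days=f["days_supply"])
        if DRUG_CLASS[f["drug"]] == cls and runs_out > new_fill["fill_date"]:
            overlaps.append(f["drug"])
    return overlaps

history = [{"drug": "oxycodone", "fill_date": date(2007, 3, 1), "days_supply": 30}]
new_fill = {"drug": "hydromorphone", "fill_date": date(2007, 3, 10)}
print(duplication_alert(new_fill, history))  # ['oxycodone']: overlapping opioids
```

Whether a nonempty result denies the claim outright or merely warns the pharmacy is the policy choice that, as discussed above, varies among the selected states.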
In addition, selected states also require health care providers to submit prior authorization forms for prescriptions of certain drugs because those medications have public health concerns, are considered high risk for fraud and abuse, or both. Each state has developed its DUR differently, and some of the differences that we saw from the selected states include the following: Officials from certain states said that they use the results of prospective screening (e.g., findings of overutilization or overlapping controlled substance prescriptions) as grounds for automatic denial of the prescription. Officials from the other states generally use the prospective screening as more of an advisory tool for pharmacies, which pharmacies can override by entering a reason code. As such, the effectiveness of the tool for preventing fraud and abuse in these states is more limited. The types of drugs that require prior authorization vary greatly between the selected states. In states where it is used, health care providers may be required to obtain prior authorization if a specific brand name is prescribed (e.g., OxyContin) or if a dosage exceeds a predetermined amount for a therapeutic class of controlled substances (e.g., hypnotics, narcotics). Detection and monitoring: Even with effective preventive controls, there is a risk that fraud and abuse will occur in Medicaid regarding controlled substances. States must continue their efforts to monitor the execution of the prescription program, including periodically matching their beneficiary files to third-party databases to determine continued eligibility, monitor controlled substance prescriptions to identify abuse, and make necessary corrective actions. Such actions include the following. Checking death files: After enrolling beneficiaries, Medicaid offices in the selected states generally did not periodically compare their information against death records.
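Such a periodic death-file comparison, flagging paid claims whose service dates fall after a beneficiary's recorded date of death, can be sketched as follows. The record layouts are assumptions for illustration; a real match against SSA death data must also handle identity matching and known error rates in the death file.

```python
# Sketch of a periodic death-file match: flag paid claims whose service
# date falls after the beneficiary's date of death. Field names and IDs
# are simplifying assumptions, not an actual DMF or MMIS layout.
from datetime import date

def postdeath_claims(claims, death_dates):
    """death_dates maps beneficiary_id -> date of death. Returns suspect claims."""
    return [c for c in claims
            if c["beneficiary_id"] in death_dates
            and c["service_date"] > death_dates[c["beneficiary_id"]]]

deaths = {"B1": date(2006, 2, 15)}
claims = [
    {"beneficiary_id": "B1", "service_date": date(2006, 3, 1), "drug": "methadone"},
    {"beneficiary_id": "B2", "service_date": date(2006, 3, 1), "drug": "codeine"},
]
print(postdeath_claims(claims, deaths))  # flags only the B1 claim
```

The same shape of match, keyed on prescriber identifiers instead of beneficiary identifiers, would surface claims attributed to deceased doctors.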
Specifically, two of the five selected states said that they did not obtain death records from SSA or the state vital statistics office to determine if a Medicaid beneficiary was still alive. Officials from two states said that Medicaid offices primarily rely on obituaries, providers, family members, or others to report the status change of the beneficiary. Increasing the use of the restricted recipient program: In the course of DURs or audits, the state Medicaid offices may identify beneficiaries who have abused the Medicaid prescription drug program, defrauded the program, or both. In those cases, the selected states may place the beneficiaries into a restricted recipient program. Under this program, the state Medicaid office restricts the beneficiaries to one health care provider, one pharmacy, or both for receiving prescriptions. This program only applies to those beneficiaries in a fee-for-service arrangement since managed care organizations are responsible for determining the quality of care treatments for their enrollees. Thus, a significant portion of the Medicaid recipients for some of the selected states are not subject to this program. Fully utilizing the prescription drug monitoring program: Beginning in fiscal year 2002, Congress appropriated funding to the Department of Justice to support prescription drug monitoring programs (PDMP). These programs help prevent and detect the diversion and abuse of pharmaceutical controlled substances, particularly at the retail level where no other automated information collection system exists. States that have implemented PDMPs have the capability to collect and analyze data on filled and paid prescriptions more efficiently than those without such programs, where the collection of prescription information can require a time-consuming manual review of pharmacy files. If used properly, PDMPs are an effective way to identify and prevent diversion of the drugs by health care providers, pharmacies, and patients.
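One such use of PDMP-style data, flagging the multiple-prescriber pattern described earlier in this report, can be sketched as follows. The claim fields are illustrative assumptions; the threshold mirrors the six-or-more-practitioner criterion used in the doctor-shopping analysis above.

```python
# PDMP-style doctor-shopping screen: count distinct prescribers per
# (patient, drug type) pair and flag pairs at or above a threshold.
# Field names and identifiers are illustrative assumptions.
from collections import defaultdict

def multi_prescriber_flags(claims, threshold=6):
    """Return {(patient, drug_type): prescriber_count} for flagged pairs."""
    prescribers = defaultdict(set)
    for c in claims:
        prescribers[(c["patient_id"], c["drug_type"])].add(c["prescriber_id"])
    return {key: len(docs) for key, docs in prescribers.items()
            if len(docs) >= threshold}

claims = [{"patient_id": "P1", "drug_type": "hydrocodone", "prescriber_id": f"D{i}"}
          for i in range(7)]
print(multi_prescriber_flags(claims))  # {('P1', 'hydrocodone'): 7}
```

A flagged pair is only an indicator, not proof of fraud; as the report notes, specialist referrals and group practices can legitimately produce multiple prescribers.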
The PDMPs at the selected states have the following limitations: For PDMPs to be useful, health care providers and pharmacies must use the data. Officials from the five selected states said that physician participation in the PDMP is not widespread and not required. In fact, one state did not have a Web-based PDMP; a health care provider had to submit a manual request to the agency to have a controlled substance report generated. Program officials at the selected states said that their systems were primarily used to respond to requests for controlled substance information on specific patients from medical practitioners. None of the selected states compared all the prescribers of controlled substances to the DEA authorization list to identify medical practitioners who were illegally prescribing drugs that they were not authorized to prescribe. Although the PDMPs generally capture the name and address of the patient, the controlled substance prescribed, the date of the prescription, and the identity of the prescriber, they generally do not capture the method of payment that the patient used. Thus, the system will not differentiate between prescriptions paid in cash and those paid using health insurance. One state restricts law enforcement access to the PDMP to only the state bureau of investigation. As such, local police and sheriff’s departments cannot access the data, which impedes their ability to conduct prescription drug diversion investigations. According to state officials, the limitation was enacted because of privacy concerns. No nationwide PDMP exists, and only 33 states had operational PDMPs as of June 2009. According to an official in one of the selected states, people would sometimes cross state borders to obtain prescription drugs in a state without a program. Investigations and prosecutions: Another element of a fraud prevention program is the aggressive investigation and prosecution of individuals who defraud the federal government.
Prosecuting perpetrators serves as a preventive measure; it sends the message that the government will not tolerate individuals stealing money. Schemes identified through investigations and prosecution also can be used to improve the fraud prevention program. The MFCU serves as the single identifiable entity within a state government that investigates and prosecutes health care providers who defraud the Medicaid program. In the course of our investigation, however, we found several factors that may limit its effectiveness. Federal regulations generally limit MFCUs from pursuing beneficiary fraud. According to MFCU officials at one selected state, this limitation impedes investigations because agents cannot use the threat of prosecution as leverage to persuade beneficiaries to cooperate in criminal probes of Medicaid providers. In addition, the MFCU officials in this selected state said that this limitation restricts the agency’s ability to investigate organized crime related to controlled substances when the fraud is perpetrated by the beneficiaries. Federal regulations do not permit federal funding for MFCUs to engage in routine computer screening activities that are the usual monitoring function of the Medicaid agency. According to MFCU officials in one selected state, this issue has caused a strained working relationship with the state’s Medicaid OIG, on whom the MFCU relies for claims information. The MFCU official stated that based on fraud trends in other states, the state MFCU wanted the Medicaid OIG to provide claims information on providers who had similar trends in that state. The Medicaid OIG cited this prohibition on routine computer screening activities when refusing to provide these data. In addition, this MFCU official also stated that the state Medicaid office and its OIG did not promptly incorporate improvements that the MFCU suggested regarding preventing the abuse of controlled substances. 
DEA officials stated that although DEA monitors purchases of certain Schedule II and III controlled substances by pharmacies, it does not routinely receive information regarding written or dispensed controlled substance prescriptions. In states with PDMPs, a state agency collects and maintains data relating to dispensed controlled substance prescriptions. In the course of an investigation regarding the diversion or abuse of controlled substances, DEA may request information from a PDMP. In those states without PDMPs, DEA may obtain controlled substance prescription information from an individual pharmacy’s records during the course of an inspection or investigation. Fraud and abuse related to controlled substances paid for by Medicaid exist in the five selected states. Given that states are responsible for administering Medicaid and investigating and prosecuting any fraudulent activities, each state must set its own course to ensure the integrity of its Medicaid program, including its monitoring of the dispensing and use of controlled substances. CMS is also responsible for actively partnering with and providing guidance to the states to ensure that they succeed in minimizing fraud and abuse in the Medicaid program. 
To establish an effective fraud prevention system for the Medicaid program, we recommend that the Administrator of CMS evaluate our findings and consider issuing guidance to the state programs to provide assurance that claims processing systems prevent the processing of claims from providers and pharmacies debarred from federal contracts (i.e., on the EPLS), excluded from the Medicare and Medicaid programs (i.e., on the LEIE), or both; DUR and restricted recipient program requirements adequately identify and prevent doctor shopping and other abuses of controlled substances; effective claims processing systems are in place to periodically identify both duplicate enrollments and deaths of Medicaid beneficiaries and to prevent the approval of claims when appropriate; and effective claims processing systems are in place to periodically identify deaths of Medicaid providers and prevent the approval of claims when appropriate. We provided a draft of this report to DEA and CMS for comment. DEA provided us technical comments by e-mail. CMS comments are reprinted in appendix II. CMS stated that it generally agrees with the four recommendations. CMS stated that it will continue to evaluate its programs and will work to develop methods to address the identified issues found in this report. CMS provided us three comments regarding our recommendations. First, CMS stated that we should be more specific as to the databases that the states should access in screening for debarred providers. Second, CMS suggested that we recommend that DEA make its registrant database available to the states without a fee. Third, CMS stated that information on deceased providers and beneficiaries could be provided by a feed from SSA. CMS also provided us two technical comments to the report.
In response to CMS’s comment on the specificity of databases, we revised the recommendation to specify the two databases that should be used in screening claims: (1) the EPLS on federal debarments and (2) Medicare and Medicaid exclusions (i.e., the LEIE) maintained by HHS OIG. As stated in the report, both of these databases are required to be used by the states before they pay prescription claims. We did not recommend that states use the DEA registration database in the processing of Medicaid controlled substance claims, and thus we do not make any recommendations to DEA at this time. In response to CMS’s comment about screening for deceased providers and beneficiaries, we agree with CMS that SSA data can be used in determining the eligibility of Medicaid beneficiaries and providers. In developing its guidance to the states, we believe that CMS should consider SSA death records and other sources to identify deceased Medicaid providers and beneficiaries. We incorporated the technical comments made by DEA and CMS into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies of this report to interested congressional committees and the Acting Administrators of CMS and DEA. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Table 5, in the main portion of the report, provides data on 15 detailed case studies. Table 6 provides details of the remaining 10 cases we selected.
As with the 15 cases discussed in the body of this report, we also found fraudulent, improper, and abusive controlled substances activities in Medicaid for these 10 cases. In addition to the contact named above, the following individuals made major contributions to this report: Matthew Harris, Assistant Director; Matthew Valenta, Assistant Director; Erika Axelson; Paul Desaulniers; Eric Eskew; Dennis Fauber; Alberto Garza; Robert Graves; Barbara Lewis; Olivia Lopez; Steve Martin; Vicki McClure; Kevin Metcalfe; Gloria Proa; Chris Rodgers; Ramon Rodriguez; and Barry Shillito.
One significant cost to Medicaid is prescription drugs, which accounted for over $23 billion in fiscal year (FY) 2008, or about 7 percent of total Medicaid outlays. Many of these drugs are susceptible to abuse and include pain relievers and stimulants that are on the Drug Enforcement Administration's (DEA) Schedule of Controlled Substances. As part of the American Recovery and Reinvestment Act of 2009 (ARRA), the Medicaid program will receive about $87 billion in federal assistance based on a greater federal share of Medicaid spending. GAO was asked to determine (1) whether there are indications of fraud and abuse related to controlled substances paid for by Medicaid; (2) if so, examples of fraudulent, improper, and abusive activity; and (3) the effectiveness of internal controls that the federal government and selected states have in place to prevent fraud and abuse related to controlled substances. To meet these objectives, GAO analyzed Medicaid controlled substance claims for fraud and abuse indications for FY 2006 and 2007 from five selected states. GAO also interviewed federal and state officials and performed investigations. GAO found tens of thousands of Medicaid beneficiaries and providers involved in potential fraudulent purchases of controlled substances, abusive purchases of controlled substances, or both through the Medicaid program in California, Illinois, New York, North Carolina, and Texas. About 65,000 Medicaid beneficiaries in the five selected states acquired the same type of controlled substances from six or more different medical practitioners during fiscal years 2006 and 2007 with the majority of beneficiaries visiting from 6 to 10 medical practitioners. Such activities, known as doctor shopping, resulted in about $63 million in Medicaid payments and do not include medical costs (e.g., office visits) related to getting the prescriptions. 
In some cases, beneficiaries may have justifiable reasons for receiving prescriptions from multiple medical practitioners, such as visiting specialists or several doctors in the same medical group. However, GAO found that other beneficiaries obtained these drugs to support their addictions or to sell on the street. In addition, GAO found that Medicaid paid over $2 million in controlled substance prescriptions during fiscal years 2006 and 2007 that were written or filled by 65 medical practitioners and pharmacies barred, excluded, or both from federal health care programs, including Medicaid, for such offenses as illegally selling controlled substances. Finally, GAO found that according to Social Security Administration data, pharmacies filled controlled substance prescriptions of over 1,800 beneficiaries who were dead at that time. GAO performed in-depth investigations on 25 Medicaid cases and found fraudulent, improper, or abusive actions related to the prescribing and dispensing of controlled substances. These investigations uncovered other issues, such as doctors overprescribing medication and writing controlled substance prescriptions without having required DEA authorization. States are primarily responsible for the fight against Medicaid fraud; however, the selected states did not have a comprehensive fraud prevention framework to prevent fraud and abuse of controlled substances. CMS is responsible for overseeing state fraud and abuse control activities but has provided limited guidance to the states to prevent fraud and abuse of controlled substances.
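The doctor-shopping indicator described above, a beneficiary obtaining the same type of controlled substance from six or more medical practitioners, reduces to counting distinct prescribers per beneficiary and drug class. A minimal sketch of that grouping logic follows, using made-up claim records; the field names, identifiers, and dollar amounts are illustrative only.

```python
from collections import defaultdict

# Hypothetical claim records: (beneficiary_id, drug_class, prescriber_id, paid).
claims = [
    ("B001", "oxycodone", "P10", 45.00),
    ("B001", "oxycodone", "P11", 52.00),
    ("B001", "oxycodone", "P12", 40.00),
    ("B001", "oxycodone", "P13", 48.00),
    ("B001", "oxycodone", "P14", 50.00),
    ("B001", "oxycodone", "P15", 47.00),
    ("B002", "oxycodone", "P10", 45.00),  # two prescribers only: not flagged
    ("B002", "oxycodone", "P11", 45.00),
]

def flag_doctor_shopping(claims, min_prescribers=6):
    """Return {(beneficiary, drug_class): total_paid} for beneficiaries who
    obtained the same drug class from min_prescribers or more practitioners."""
    prescribers = defaultdict(set)
    paid = defaultdict(float)
    for bene, drug, prescriber, amount in claims:
        prescribers[(bene, drug)].add(prescriber)
        paid[(bene, drug)] += amount
    return {key: paid[key]
            for key, docs in prescribers.items()
            if len(docs) >= min_prescribers}

flagged = flag_doctor_shopping(claims)
```

As the report notes, a flag of this kind is only an indicator: beneficiaries seeing specialists or several doctors in one medical group can legitimately exceed the threshold, so flagged cases require follow-up review.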
Medical credit cards are private-label credit cards that may be used across a network of participating providers (such as dental offices or veterinary clinics) that have contractual relationships with the card company. Consumers typically learn about medical credit cards, or related products such as installment loans, from participating providers, who give information about the product and the available financing options. Consumers often can apply immediately at the provider’s office, sometimes with the assistance of the office staff, either online, by telephone, or using a printed application. The card company determines eligibility and, if the application is approved and an account is opened, is required to provide the consumer with the account-opening disclosures with the full terms and conditions, including fees, percentage rates, and rate terms. Once enrolled, consumers generally interact with the card company—rather than the participating provider—regarding use of the card, and they direct their payments to the card company. Medical credit cards can be used to pay for elective (planned, nonemergency) services, such as dental and orthodontic procedures, eye correction surgery, audiology care, cosmetic procedures, and hair removal or restoration, as well as for veterinary services. Some medical credit cards also may be used to pay for insurance copayments and deductibles or to finance medical care for people who do not have health insurance. The products generally are subject to the same state and federal statutory provisions as other lending products, which under federal law include but are not limited to the following: Truth in Lending Act and its implementing Regulation Z, which requires certain disclosures about a card’s terms and cost. 
The Credit Card Accountability Responsibility and Disclosure Act of 2009 amended the Truth in Lending Act to require certain disclosures about rates and fees on credit cards and prohibit certain practices (such as raising the rate on an existing balance). Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices. Consumer Financial Protection Act of 2010, which prohibits unfair, deceptive, or abusive acts or practices by providers of consumer financial products or services and authorizes CFPB to take enforcement actions to prevent such providers from engaging in those practices in connection with consumer transactions. Fair Credit Reporting Act, which generally prohibits creditors from obtaining and using medical information in connection with determining eligibility for credit. However, Regulation FF allows creditors to use medical information “to determine, at the consumer’s request, whether the consumer qualifies for a legally permissible special credit program or credit-related assistance program” that meets certain requirements. CFPB has primary supervisory authority for consumer financial protection laws at large banks and certain nondepository entities. Smaller banks (those with assets of $10 billion or less) are supervised by the federal banking regulators. CFPB has enforcement authority for violations of federal consumer financial laws, including the Consumer Financial Protection Act of 2010, and rulemaking authority for these provisions. FTC has authority to enforce the Federal Trade Commission Act against most providers of financial services that are not banks, thrifts, and federal credit unions—which may include some medical credit card companies—as well as health care providers in certain contexts. 
CFPB and FTC share enforcement authority for nonbanks in accordance with a memorandum of understanding. State attorneys general have enforcement authority under state and federal law, including the authority to enforce the Consumer Financial Protection Act of 2010 and CFPB rulemakings. One company, CareCredit LLC (CareCredit), issues the majority of medical credit cards, according to available information, although no comprehensive data source on the industry exists. A registration statement (Form S-1) filed with the Securities and Exchange Commission in March 2014 by an affiliate of CareCredit reported that CareCredit had 4.4 million active cardholders and 177,000 health care and veterinary providers in its network and revenues of approximately $1.5 billion for calendar year 2013. Market participants with whom we spoke cited three other banks—Citibank, N.A. (Citibank), Wells Fargo Financial National Bank (Wells Fargo), and Comenity Capital Bank (Comenity)—as also among the largest issuers of medical credit cards. Representatives of these banks generally said that data on the number of cardholders and network providers were proprietary, but told us they believed that their share of the market was relatively low. Companies participating in the medical credit card marketplace play different roles (see table 1). Card issuers generally are responsible for marketing, origination, and underwriting of accounts, and management and collection of payments. CareCredit, Citibank, and Wells Fargo issue medical credit cards under their own names, while Comenity works with third-party companies (sometimes called aggregators) or retail networks that offer and market the product under their own names. For example, Comenity has a partnership with Springstone Patient Financing, a nonbank financial institution, to offer a product co-branded under Springstone’s name. 
With the direct oversight of Comenity, Springstone makes the product available through a network of participating providers, while Comenity is responsible for the financing and servicing of the credit card accounts. In addition to these and other established companies, our review identified at least 25 websites that marketed financing for health care procedures but did not always clearly identify the corporate entity or financial institution with which they were affiliated. In some cases, these websites appeared to serve largely a marketing function, collecting information that would be used to direct consumers elsewhere. Apart from participants in the medical credit card market, some companies play other roles related to financing procedures not covered by health insurance. For example, some firms assist medical offices in arranging and managing their own payment plans, or they purchase accounts receivable from such offices and assume collections responsibilities. Card companies contract with participating providers to offer financing products to consumers. Card companies enroll providers into their card networks by marketing to them through trade shows, direct marketing sales calls, trade journal advertisements, direct mail, and e-mail. In some cases, card companies paid trade organizations to endorse specific products and promote them to their members. For example, CareCredit reported that as of December 2013, it had relationships with 107 professional and other associations—such as the American Dental Association and American Animal Hospital Association—to endorse and promote CareCredit products to their members. The compensation for 63 of the associations was linked to enrollment of association members in the company’s card program and the volume of product transactions by association members. Dental care appears to be the procedure most commonly financed with medical credit cards. 
Among the companies we reviewed in depth, one reported that dental practices composed approximately 64 percent of its medical credit card business, veterinary 14 percent, cosmetic and dermatology 10 percent, vision 6 percent, audiology 3 percent, and other services such as weight loss treatments and procedures 4 percent. Representatives of another company, which entered the medical credit card market in 2003, told us that as of November 2013, dental and orthodontia made up about 85 percent of its medical credit card business, with the remaining 15 percent financing hair restoration, vision care, and audiology and veterinary services. Representatives from a third company, which entered the market in 2008, said that its largest market was for dental and audiology services, although its card also financed veterinary and vision services. Finally, representatives of the fourth company, which entered the medical credit card market in 2006, said it typically financed audiology, dental, hair removal, hair restoration, and skin care. Participating providers compensate card companies for their clients’ use of the cards through a transaction or administrative fee. When a consumer’s financing is approved, the card company typically pays the provider the full amount financed—minus the fee—within 24 to 72 hours after service is provided. Card companies generally told us that the exact amount of the fee was proprietary information and generally declined to provide it to us. They said it can vary based on such factors as the provider’s overall volume of business and the specific financing options (such as payment plan and term length) that the provider makes available to patients or clients. In exchange for this fee, the card company rather than the provider is responsible for billing and collection and generally assumes the risk of nonpayment by the borrower. 
Card companies typically provide participating providers with the informational marketing materials, applications, and disclosures needed to enroll consumers. Card companies also often train providers and staff on the products and enrollment process through webinars, telephone tutorials, and in-person sessions. Some card companies told us that they also offer participating providers dedicated customer support. In December 2013, CFPB announced a consent order with GE Capital Retail Bank and its affiliate CareCredit. CFPB said it initiated an investigation of the company after receiving hundreds of complaints from consumers. In the consent order, CFPB alleged deceptive card enrollment processes, such as unclear communication about the terms of the deferred interest product; inadequate disclosures, whereby consumers did not always receive copies of the actual card agreement and did not understand the terms of the deferred interest product (discussed in the next section); and poorly trained staff at some health care provider offices, some of whom admitted they were confused by the product. As part of the settlement, GE Capital Retail Bank and CareCredit must refund up to $34.1 million to what CFPB described as potentially more than 1 million CareCredit consumers. The company also agreed to make several changes in its practices, including enhancements to consumer disclosures provided during the application process and on billing statements immediately prior to the expiration of the promotional period, enhanced training to providers on making the terms of the credit arrangement transparent to patients, and enhanced warnings to consumers about the expiration of the promotional period. (Representatives of CareCredit told us that it already had in place some of these practices.) 
In addition, for dental or audiology transactions over $1,000, consumers must apply directly to CareCredit, rather than through the health care provider, for credit approval if they use the card for such a transaction within 3 days of the application. In June 2013, GE Capital Retail Bank and CareCredit entered into a settlement agreement with the New York Attorney General, who had alleged deceptive enrollment practices and inadequate disclosure of product terms and conditions. This settlement required CareCredit to provide a 3-day “cooling-off” period, which prohibits certain charges of $1,000 or more on a CareCredit card within 3 days of an in-office application and provides New York consumers an opportunity to consider the card’s terms and the treatment plan. The settlement also limited what the health care provider can charge in advance and required clearer disclosure of the interest rates associated with deferred-interest products (discussed below). It also required CareCredit to call consumers within 72 hours of the submission of a CareCredit application with a same-day charge to confirm the account opening with the consumers and to inform them of certain account terms, a practice that CareCredit has said it adopted nationally. In addition, the settlement required CareCredit to pay for and establish an appeals fund that resulted in cardholder refunds or adjustments of approximately $175,000. The medical credit cards we reviewed in depth resembled conventional credit cards (offered a revolving line of credit with an established credit limit) and offered some form of promotional financing (special terms and conditions valid for a specified period). Most common was a deferred interest option, which 85 percent of CareCredit cardholders had chosen, according to the settlement agreement with CFPB. Deferred interest plans start accruing interest from the initial purchase date based on the stated annual percentage rate (APR). 
If the entire promotional balance is paid off during the specified promotional period (generally 6, 12, 18, or 24 months, depending on the plan), the accrued interest is waived. But if the balance is not paid in full within the specified promotional period, the accrued interest is assessed to the account. As seen in table 2, the APR for consumers who did not pay off the purchase amount before the promotional period expired varied depending on the product, but our analysis found that most cardholders had deferred interest products with an APR of 26.99 percent or more. One company’s APR was 26.99 percent, a second ranged from 26.99 to 28.99 percent, and a third ranged from 14.99 to 26.99 percent. A fourth company offered a variable APR of 9.99 percent that became effective October 22, 2013; before that it had been 27.99 percent. The companies declined to provide the proportion of consumers who did not pay off the full balance during the promotional period, stating that this information was proprietary. Although less common, major card companies we reviewed also offered a promotional monthly fixed-payment option, which charged a set interest rate (an APR from 0 to 17.99 percent) during a specified period (from 12 to 60 months). The number of cardholders who do not participate in promotional financing appears to be small; for example, CareCredit told us that the vast majority of the credit extended through its medical credit card was promotional financing. The standard APR without promotional financing ranged from 9.99 to 28.99 percent. Representatives of the medical credit card companies with whom we spoke said their customers typically were prime borrowers—that is, with credit scores that put them at low risk of default—and that the interest rates and other terms of the loan did not vary based on the cardholder’s credit profile. 
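The retroactive nature of deferred interest can be shown with simple arithmetic. The sketch below assumes simple monthly accrual (APR/12 applied to the remaining promotional balance); actual card agreements typically compute interest on average daily balances, so the figures are illustrative only.

```python
# Illustrative deferred-interest arithmetic. Assumes simple monthly accrual;
# real card agreements use average daily balances and differ in detail.

def deferred_interest_assessed(purchase, apr, promo_months, monthly_payment):
    """Interest assessed at the end of the promotional period, or 0.0 if the
    balance was paid in full in time (accrued interest waived)."""
    balance = purchase
    accrued = 0.0
    for _ in range(promo_months):
        accrued += balance * (apr / 12)   # accrues from the purchase date
        balance = max(0.0, balance - monthly_payment)
        if balance == 0.0:
            return 0.0                    # paid in full: waived
    return round(accrued, 2)

# Hypothetical $2,000 dental bill at a 26.99 percent APR, 12-month promotion.
waived = deferred_interest_assessed(2000, 0.2699, 12, 200)   # paid off in 10 months
charged = deferred_interest_assessed(2000, 0.2699, 12, 100)  # $800 still owed
```

Here the borrower who still owes $800 at month 12 is assessed roughly $390 of interest accrued from the purchase date, while the borrower who paid off the balance in 10 months owes none, which is why clear disclosure of the promotional deadline was central to the CareCredit settlements.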
As noted earlier, in addition to the companies listed above, we identified 25 websites that marketed products—either revolving lines of credit or installment loans—designed to finance services not covered by health insurance. These websites did not always provide comprehensive information about the terms and conditions of the products, which generally appeared to be marketed by smaller companies, and we did not verify the product information. However, some of these websites appeared to market financing terms, such as deferred interest, similar to those of the banks listed above. In addition, two companies marketed products that charged interest, but refunded that interest in the form of a rebate check if the loan was paid in full within 12 months. Some of these websites marketed financing for persons with a wide range of credit histories, including those with marginal or poor credit. We provided a draft of this report to CFPB and FTC. CFPB provided technical comments that we incorporated as appropriate. We also provided selected relevant portions of the draft for technical review to CareCredit, LLC; Citibank, N.A.; Comenity Capital Bank; Springstone Patient Financing; and Wells Fargo Financial National Bank, and incorporated their technical comments as appropriate. We are sending copies of this report to the Director of CFPB and the Chairwoman of FTC. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. Please contact me at (202) 512-8678 or brownbarnesc@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. This report examines the (1) participants and (2) products in the marketplace for medical credit cards. 
For the purposes of this report, we used “medical credit cards” to refer collectively to financial products— including revolving credit lines and installment loans—that are designed specifically to finance health care services not covered by health insurance. This report provides an overview of this industry, but does not necessarily describe all its products and participants. To address these objectives, we conducted a literature review of articles using the Proquest, PubMed, and ABI/Inform databases using search terms such as “medical credit card,” “health care credit card,” “healthcare financing,” and “medical financing” for the purpose of obtaining background and context surrounding the products. We also searched the Internet using these same search terms and also “medical procedure financing.” We reviewed publications from, or interviewed representatives of, organizations that study the credit card industry, such as Argus Information and Advisory Services, The Nilson Report, and CreditCards.com. We interviewed representatives of companies that serve as lenders or marketers of medical credit cards, including AA Medical Finance, Advance Care, American HealthCare Lending, Citibank N.A., Comenity Capital Bank, Fundmydr.com, GE Capital Retail Bank’s CareCredit LLC, HELPcard, JPMorgan Chase & Co., MedicalFinancing.com, MyMedicalFunding, Springstone Patient Financing, United Medical Credit, and Wells Fargo Financial National Bank, as well as CarePayment, which provides billing and patient financing services. These companies were selected either because they were identified by industry and government representatives as key players in this marketplace or because they represented a variety of different sizes, roles, and products. We also corresponded with the Financial Services Roundtable, a trade organization representing financial services companies. 
We also interviewed representatives of two federal agencies, the Bureau of Consumer Financial Protection (known as CFPB) and the Federal Trade Commission. We conducted two group interviews, coordinated by the National Association of Attorneys General, that together included staff from the offices of the attorneys general of nine states that chose to participate (Indiana, Louisiana, Maryland, Massachusetts, Nebraska, Nevada, New York, Ohio, and Tennessee), as well as a separate interview with the Minnesota Office of the Attorney General, which had been examining the medical credit card industry. We also interviewed or received written responses from organizations representing health care providers (American Dental Association and American Society of Plastic Surgeons) and representatives of consumer interests (Consumers Union, National Consumer Law Center, and Community Health Advisors). Our review of reports and data from organizations that study the credit card industry broadly, and interviews with industry representatives, indicated that no comprehensive source of information existed on the medical credit card industry and the market share of its participants. However, based on the testimonial and documentary information we received from the sources above, we identified four companies that appeared to be among the largest market participants—CareCredit LLC, Citibank, N.A., Comenity Capital Bank, and Wells Fargo Financial National Bank—which we selected for greater examination. In addition to our interviews with company representatives, we gathered and analyzed the terms and conditions of these companies’ medical credit card products, application forms, and informational and marketing materials when publicly available or provided by the company. One card company also provided us with a generic copy of its standard contract with participating providers. 
We also reviewed, where applicable and available, the companies’ public filings with the Securities and Exchange Commission. We found that only the filing for an affiliate of CareCredit reported information specific to medical credit cards. To assess the reliability of that public filing, we reviewed the data for completeness and consistency and found that they were reliable for the purposes of describing characteristics of the CareCredit product. Apart from the four companies we examined in depth, we collected and reviewed publicly available product information from the websites of 25 other companies that provide medical credit cards and related products. In some instances, we contacted the companies to confirm or clarify certain aspects of the products. We generally did not independently verify this information, but we did use it for context and to provide a broader picture of available products and key features. We reviewed applicable federal laws and regulations related to medical credit cards and to lending products more generally, including Regulation Z (which implements the Truth in Lending Act), the Consumer Financial Protection Act of 2010, the Federal Trade Commission Act, the Credit Card Accountability Responsibility and Disclosure Act of 2009, and the Fair Credit Reporting Act. We also reviewed the settlement agreements and related materials resulting from two enforcement actions against CareCredit, one by CFPB and one by the New York State Office of the Attorney General. We conducted this performance audit from July 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact name above, Jason Bromberg (Assistant Director), Rhonda Rose (Analyst-in-Charge), Bethany Benitez, Pamela Davidson, Josephine Perez, Barbara Roesmann, and Jena Sinkfield made key contributions to this report.
Medical credit cards and related products (such as installment loans) are offered by financial institutions through participating providers to pay for services not covered by health insurance, such as dental and cosmetic procedures, or for veterinary care. Medical credit cards received increased attention after enforcement actions in 2013 against GE Capital Retail Bank in relation to its CareCredit product. GAO was asked to review the marketplace for medical credit cards and related products. This report describes the participants and products in this marketplace. To address these objectives, GAO conducted a literature review and reviewed websites, product terms and conditions, and other publicly available information. GAO also interviewed staff of, and collected documents from, CFPB, 14 card companies representing a mix of size and type, and organizations that represented participating providers, financial institutions, and consumer interests and were familiar with the medical credit card marketplace. GAO also reviewed settlement agreements between CareCredit and CFPB and the New York Attorney General. Multiple entities offer medical credit cards, but according to market participants with whom GAO spoke, CareCredit LLC issues the majority of medical credit cards. In 2013, the company reported 4.4 million cardholders and 177,000 participating providers in its network, the majority of which were dental offices. Several other financial institutions also issue medical credit cards, usually offering their own branded product, but sometimes providing financing for retail networks or third-party companies that offer and market cards under their own names (see table). The marketplace for financing services not covered by health insurance also includes companies that assist providers in offering their own payment plans and websites that largely serve a marketing function by directing consumers to others' products. 
In 2013, GE Capital Retail Bank and its affiliate CareCredit entered into separate agreements with the New York Attorney General and the Bureau of Consumer Financial Protection (known as CFPB), which had alleged deceptive card enrollment processes, including failure to provide disclosures and inaccurate information given by participating providers to consumers. Both settlements required CareCredit to make several changes to its practices, such as enhancing consumer disclosures. Medical credit cards from large banks offer a revolving line of credit with an established credit limit—akin to a conventional credit card—with some form of promotional financing (special terms and conditions, which are valid for a specified period of time). The most commonly used financing option is deferred interest, with no interest charged for a promotional period but interest charged retroactively if the balance is not paid in full before the end of the promotional period, usually 6 to 24 months. Among large banks GAO reviewed, as of May 2014, the most commonly used products had an annual percentage rate (APR) of 26.99 percent or more. Alternatively, these banks also offered revolving credit with fixed monthly payments, with an APR of 0 to 17.99 percent. Installment loans or products targeted at consumers with poor credit histories were offered by certain other market participants. GAO makes no recommendations in this report.
Agencies can use a variety of contract types to acquire products and services. Cost-reimbursement contracts are suitable only when uncertainties in the scope of work or cost of services prevent the use of contract types in which prices are fixed, known as fixed-price contracts. A contractor may receive a fixed or base fee on a contract regardless of performance and also may earn an incentive; the two may be used separately or in combination. Such incentive-type contracts, of which award fee contracts are an example, reward contractors with fees based on performance. Award fee contract types are to be used when it is not feasible to devise predetermined objective incentive targets based on cost, technical performance, or schedule, with the focus instead being on subjective criteria, such as project management. In fiscal year 2008, cost-reimbursement contracts made up 94 percent of contracts using award fees. As shown in figure 1, since we issued our report on DOD’s use of award fees, DOD’s use of cost-plus-award-fee (CPAF) contracts has decreased while its use of other cost-type contracts has increased or stayed the same. Figure 2 shows that use of CPAF contracts as a proportion of overall cost-plus contracts varies greatly at the other agencies we reviewed. Contract type is based on a risk assessment by the contractor and the government. The objective is to negotiate a contract type and price (or estimated cost and fee) that will result in reasonable contractor risk and provide the contractor with the greatest incentive for efficient and economical performance. In advance of contract award, outcomes can be identifiable and measurable, identifiable but not measurable, or unable to be identified. The FAR states that an award fee should be used when the work to be performed is such that it is neither feasible nor effective to devise predetermined objective incentive targets applicable to cost, technical performance, or schedule. 
Alternatively, an incentive fee contract should be used when cost and performance targets are objective and can be predetermined, allowing a formula to adjust the negotiated fee based on variations relative to the targets. These incentive types also can be combined into a multiple-incentive fee contract, which combines objectively and subjectively measured criteria to reward contractor performance while maximizing the government’s ability to use performance metrics that are predetermined, measurable, and targeted at desired contract outcomes. Agencies, when using multiple-incentive contracts, generally split the available award money into categories that evaluate the contractor’s cost and performance using a combination of objective formulas and subjective judgments to evaluate performance tasks stated in the contract. Appendix II provides definitions of contract types as well as terms associated with award fees. Our previous work reviewing the use of award and incentive fees found that programs often paid fees without holding contractors accountable for achieving desired acquisition outcomes, such as meeting cost and schedule goals and delivering desired capabilities. Over half of DOD programs reviewed provided contractors multiple opportunities to earn an estimated $669 million in fees not awarded in previous periods. We also reported that DOD programs regularly paid contractors a significant portion of the available fee for what award fee plans describe as “acceptable, average, expected, good, or satisfactory” performance when federal acquisition regulations and military service guidance state that the purpose of these fees is to motivate excellent performance.
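The formula-based adjustment that distinguishes an incentive fee from an award fee can be illustrated with a short sketch. The 80/20 share ratio, the fee floor, and the dollar figures below are hypothetical assumptions for illustration, not values from the report or the FAR.

```python
def incentive_fee(target_cost, target_fee, actual_cost,
                  gov_share=0.8, min_fee=0.0, max_fee=None):
    """Illustrative CPIF-style adjustment: the negotiated fee rises or
    falls as actual cost varies from the target. The 80/20 share ratio
    and the fee floor/ceiling here are hypothetical."""
    overrun = actual_cost - target_cost
    # The contractor's share of an overrun is deducted from the fee;
    # its share of an underrun is added to the fee.
    fee = target_fee - (1.0 - gov_share) * overrun
    if max_fee is not None:
        fee = min(fee, max_fee)
    return max(fee, min_fee)

# A $2 million underrun on a $100 million target cost raises a
# $6 million target fee by the contractor's 20 percent share.
print(incentive_fee(100.0, 6.0, 98.0))  # 6.4 ($ millions)
```

An award fee, by contrast, has no such predetermined formula; the amount paid is set by a subjective evaluation against the criteria in the award fee plan.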
To improve the use of award fee contracts, we made several recommendations, including suggesting that DOD move toward more outcome-based award fee criteria, ensure that award fees are paid only for above satisfactory performance, and define when rollover is appropriate. We also recommended that DOD develop a mechanism for capturing award fee data for use in evaluating the effectiveness of award fee contracts. At NASA, we reported that guidance on the use of CPAF contracts provides criteria for improving the effectiveness of award fees. For example, the guidance emphasizes outcome factors that are good indicators of success in achieving desired results, cautions against using numerous evaluation factors, prohibits rollover of unearned fee, and encourages evaluating the costs and benefits of such contracts before using this contract type. However, we found that NASA did not always follow the preferred approach laid out in its guidance. For example, some evaluation criteria contained input or process factors, such as program planning and organizational management, rather than focusing on outcomes or results. Moreover, some contracts included numerous supporting subfactors that may dilute emphasis on any specific criteria. Although the FAR and NASA guidance require considering the costs and benefits of choosing a CPAF contract, NASA did not perform such analyses. In some cases, we found a significant disconnect between program results and fees paid. In 2007, OMB issued governmentwide guidance highlighting preferred practices and directing agencies to review and update their acquisition policies. That guidance included four fundamental practices: (1) linking award fees to acquisition outcomes, (2) limiting the use of rollover, (3) emphasizing excellent performance, and (4) prohibiting payments for unsatisfactory performance. DOD issued new policies on the proper use of award fees, while NASA reemphasized its existing guidance.
The policies at both agencies reflect these four elements in the OMB guidance. DOE, DHS, and HHS vary in the extent to which they have agencywide guidance, generally allowing operational divisions to supplement award fee guidance. However, existing guidance is not always consistent within agencies or consistent with practices outlined by OMB, issues that DOD and NASA had begun to address. Table 1 provides a timeline of actions that influenced the guidance and that have followed its issuance. In March 2006, DOD issued guidance on using award fees that was in direct response to our recommendations. This guidance stated that it is imperative that award fees are linked to desired outcomes such as discrete events or milestones. Such milestones include design reviews and system demonstrations for weapons systems. The guidance also stated that while award fee contracts should be structured to motivate excellent contractor performance, award fees must be commensurate with contractor performance over a range from satisfactory to excellent performance. The guidance recognized that performance that is less than satisfactory is not entitled to any award fee and that satisfactory performance should earn considerably less than excellent performance; otherwise the motivation to achieve excellence is negated. Further, the guidance established that the practice of rolling over unearned award fees from one period to another should be limited to exceptional circumstances. The guidance also established the Award and Incentive Fee Community of Practice to facilitate discussion of strategies across the acquisition workforce and serve as a repository for policy information, related training courses, and examples of good award fee arrangements. In October 2006, Congress required DOD to develop specific guidance linking award and incentive fees to acquisition outcomes.
The requirement specified that, among other elements, the guidance should define acquisition outcomes in terms of program cost, schedule, and performance and provide guidance on determining “excellent” or “superior” performance. Additionally, the guidance was to prohibit the payment of award fees for performance that is judged to be below satisfactory or does not meet the basic requirements of the contract. The guidance was also to establish standards for determining the percentage of the available award fee, if any, for various levels of performance ranging from satisfactory to excellent. Further, DOD was to provide specific guidance on the circumstances, if any, in which it may be appropriate to roll over award fees that are not earned in one award fee period to a subsequent award fee period or periods and include performance measures to evaluate the effectiveness of award and incentive fees as a tool for improving contractor performance and achieving desired program outcomes. Finally, DOD’s guidance was to provide mechanisms for sharing proven incentive strategies for the acquisition of different types of products and services. In April 2007, DOD responded by providing additional guidance that reemphasized that cost-plus-award-fee contracts are suitable for use when it is neither feasible nor effective to devise objective targets applicable to cost, technical performance, or schedule. Recognizing that most DOD contracts contain objective criteria, the guidance clarified that in instances where objective criteria exist and the Contracting Officer and Program Manager wish to also evaluate and incentivize subjective elements of performance, the most appropriate contract type would be a multiple-incentive type contract containing both incentive and award fee criteria. Additionally, the guidance defined the levels of performance used to evaluate contractors and the corresponding percentage of fee that could be earned.
Table 2 illustrates the scale as recommended by DOD. To address the use of award fees and specific weaknesses previously identified by its Inspector General in the early 1990s, NASA issued guidance in its FAR Supplement and Award Fee Contracting Guide. Previously identified weaknesses included the awarding of excessive fees with limited emphasis on acquisition outcomes (end results, product performance, and cost control), rollover of unearned fee, use of base fee, and the failure to use both positive and negative incentives. NASA updated its award fee guide in 1994, 1997, and 2001 to explain and elaborate on award fee policy. The 2001 revision also reflects the FAR’s additional emphasis on using performance-based contracts. NASA’s Award Fee Contracting Guide provides contracting officers with guidance on when to use an award fee contract, the risk involved with various contract types, and how to combine award fees with other contract types. Additionally, NASA’s guidance addresses award fee practices that are designed to produce positive results. For example, in describing evaluation factors to be used in award fee determinations, NASA established a preference to use outcome factors when feasible since they are better indicators of success relative to the desired result. Additionally, the guidance provides the scale displayed in table 3 with which to evaluate contractor performance and emphasizes that no award fee will be paid to contractors that perform unsatisfactorily. The scale’s rating descriptions are:

Excellent: Of exceptional merit; exemplary performance in a timely, efficient, and economical manner; very minor (if any) deficiencies with no adverse effect on overall performance.

Very good: Very effective performance, fully responsive to contract requirements; contract requirements accomplished in a timely, efficient, and economical manner for the most part; only minor deficiencies.
Good: Effective performance; fully responsive to contract requirements; reportable deficiencies, but with little identifiable effect on overall performance.

Satisfactory: Meets or slightly exceeds minimum acceptable standards; adequate results; reportable deficiencies with identifiable, but not substantial, effects on overall performance.

Unsatisfactory: Does not meet minimum acceptable standards in one or more areas; remedial action required in one or more areas; deficiencies in one or more areas which adversely affect overall performance.

DOE, HHS, and DHS varied in the extent to which they had existing guidance specific to award fees and the extent to which that guidance was consistent with OMB guidance. While OMB’s guidance was sent to chief acquisition officers and senior procurement executives in December 2007, many officials with whom we met across various levels at several agencies within these departments were unaware of the OMB guidance memo or its contents. DOE has supplemental guidance to the FAR that outlines how award fee should be considered in contracts for operations and management and separately for lab contracts. Recognizing the complexity of this guidance, DOE created implementing guidance specific to management and operations contracts in September 2008 that links performance fees to acquisition outcomes and limits the use of rollover. Specifically, the guidance states that fee must relate to clearly defined performance objectives and performance measures. Where feasible, the performance objectives and measures should be expressed as desired results or outcomes. It also states that following these principles will increase the probability that the contractor will only receive performance fee for government-negotiated acquisition outcomes. Additionally, the departmental guidance states that rollover should be used in limited circumstances where convincing evidence of the cost and benefit are considered by a senior procurement executive.
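The NASA-style adjectival scale described above can be sketched as a simple score-to-rating mapping. The numeric score bands below are illustrative assumptions (the report reproduces the rating descriptions, not numeric bands); the one firm rule from NASA’s guidance is that unsatisfactory performance earns no fee.

```python
# Hypothetical score bands for a NASA-style adjectival scale; the
# band boundaries below are assumptions for illustration only.
RATING_BANDS = [
    (91, "Excellent"),
    (81, "Very good"),
    (71, "Good"),
    (61, "Satisfactory"),
]

def rate(score):
    for floor, label in RATING_BANDS:
        if score >= floor:
            return label
    return "Unsatisfactory"

def fee_earned(score, fee_pool):
    # Per NASA guidance, unsatisfactory performance earns no award fee.
    if rate(score) == "Unsatisfactory":
        return 0.0
    return fee_pool * score / 100.0

print(rate(95), fee_earned(95, 1_000_000))  # Excellent 950000.0
print(rate(55), fee_earned(55, 1_000_000))  # Unsatisfactory 0.0
```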
The guidance acknowledges that allowing the contractor a second chance at earning the same fee could undermine the incentive in the original award fee period. In response to this concern, the guidance states that if rollover is used, the contractor can only earn a portion of the unearned fee based on how close the contractor came to delivering the originally negotiated performance (for example, a contractor failing to reach a milestone by a year must earn significantly less than a contractor that fails by a week) and how much DOE still desires the originally negotiated performance, some other performance, or both. While linking fee to acquisition outcomes and limiting the use of rollover are in line with OMB’s guidance, several other elements of DOE’s departmental guidance are not. For example, both DOE’s supplemental acquisition policy and the implementing guidance establish CPAF contracts as generally the appropriate type of contract for management and operations. The OMB guidance states that in using an award fee contract, contracting officers should conduct and document risk-benefit analyses that support use of the contract type. As part of this analysis, they are to conduct a risk assessment and ensure that incentive strategies are consistent with the level of risk assumed by the contractor and motivate the contractor by balancing awards with negative consequences. Also, according to both the OMB memo and the FAR, contracting officers should determine whether administrative costs associated with managing the incentive fee are outweighed by the expected benefits. Further, agencies should ensure sufficient staff are available to properly structure and monitor the contract. These factors require a case-by-case consideration before using an award fee contract, which contradicts DOE guidance that suggests the general application of a certain type of contract for work of a particular type.
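DOE’s principle that rolled-over fee must be discounted, based on how close the contractor came to the negotiated outcome and how much the government still wants it, might be sketched as follows. The linear 12-month decay and the 0-to-1 "desirability" weight are hypothetical assumptions, not DOE’s actual method.

```python
def rollover_pool(unearned_fee, months_late, desirability):
    """Illustrative discount on rolled-over fee: a contractor that
    missed a milestone by a year can recover far less than one that
    missed by a week. The linear 12-month decay and the 0-1
    'desirability' weight are hypothetical."""
    timeliness = max(0.0, 1.0 - months_late / 12.0)
    return unearned_fee * timeliness * desirability

# One week late (~0.25 months) vs. eleven months late, where the
# government still strongly (0.9) wants the negotiated performance.
print(round(rollover_pool(1_000_000, 0.25, 0.9)))  # 881250
print(round(rollover_pool(1_000_000, 11.0, 0.9)))  # 75000
```

The design point is simply that the rolled-over pool shrinks as the miss grows, so a second chance never restores the full original incentive.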
Additionally, the DOE departmental guidance does not clearly define the standards of performance for each rating category (e.g., satisfactory, above satisfactory, excellent) or the percentage of fee the contractor should be paid for each of these rating categories as stated in OMB’s guidance, as do DOD and NASA. Instead, some divisions of DOE (including the Office of Science and NNSA) have developed their own standards and methods of evaluation. These standards varied among contracts at the sites. For example, at a multimission site, some contracts prohibited payment of fee to contractors that did not perform satisfactorily while others allowed a reduced fee for that level of performance. DOE contracting officials at the division level told us that while they appreciate the flexibility allowed in coming up with their own evaluation criteria, they could benefit from additional departmental guidance on performing the evaluations and establishing standards. DHS provides guidance on award fees in its acquisition manual, but does not fully address the issues in the OMB guidance. The DHS guidance requires award fee plans to include criteria related (at a minimum) to cost, schedule, and performance. Further, it establishes that award fees are to be earned for successful outcomes and no award fee may be earned against cost, schedule, or performance criteria that are ranked below “successful” or “satisfactory” during an award-fee evaluation of contractor performance. However, the manual does not describe standards or definitions for determining various levels of performance. Additionally, it does not include any limitation on the use of rollover. DHS procurement officials noted that there is a need for better guidance on the use of award fees. They also noted, however, that the extent of that need will largely be determined by the pending interim FAR rule on award fees.
In response to revised guidance, some DOD components reduced costs and improved management of award fee contracts by limiting the use of rollovers and by tying fees more directly to acquisition outcomes. Potential changes at NASA, such as documented cost-benefit analyses and adding the management of award fee contracts as an area of review, are too recent for their full effects to be judged. At DOE, DHS, and HHS, individual contracting offices have developed their own approaches to implementing award fee contracts, which are not always consistent with the principles in the OMB guidance or between offices within these departments. Guidance from DOD, NASA, DOE, and OMB has stated that allowing contractors a second chance at unearned fees should be limited to exceptional circumstances and should require approval at a high level. Allowing contractors an opportunity to obtain previously unearned fee reduces the motivation of the incentive in the original award fee period. Three of the five agencies have established policies that either prohibit or limit the use of rollover. However, before changes in policies and guidance that established these limits, the use of rollover was prevalent in DOD contracts. In 2005, we reported that for DOD award-fee contracts active for fiscal years 1999 through 2003, an estimated $669 million was rolled over across all evaluation periods. In almost all of the 50 DOD contracts we reviewed, rollover is now the exception and not the rule. While in 2005 we identified that 52 percent of all DOD programs rolled over fee, only 4 percent of the programs in our sample continue this practice. We reviewed active contracts from our 2005 sample and found that the limitation on the use of rollover will save DOD more than an estimated $450 million on 8 programs from April 2006 through October 2010. In some cases, entire DOD contracting commands have strictly limited the use of rollover.
One Air Force contracting officer told us that even if he wanted to roll over a portion of the unearned fee, the fee determining official (FDO) would not allow it. This change in policy has required a change in culture on both the government’s and contractor’s part. In our review of an Air Force contract for a satellite program, we found that despite receiving 0 percent of the award fee for unsatisfactory performance, a contractor sent the program a written request to include the $10 million in unearned fee in the next period. The program denied this request and has not allowed any rollover. The program ceased rolling over unearned fees to subsequent award fee periods to conform to the new policy and will save an estimated $20 million. While our analysis of DOD contracts has demonstrated the savings that can be achieved by not rolling over unearned fee, we found contracts at DOD, DOE, HHS, and DHS that continue to allow contractors second chances at unearned fees. DOD award fee letters issued as recently as January 2009 indicate that rollover is still being used. For example, in the most recent evaluation of a DOD contract for mobile radios, the program continued to recommend that funds be rolled over to subsequent periods after over $2 million in rollover fees had already been earned by the contractor. Several contracts we reviewed at other agencies allowed for 100 percent of the unearned fee to be earned in later periods. For example, in a DHS Transportation Security Administration contract for personnel services we found that a contractor that scored above average and received 86 percent of the fee in a particular period was allowed a second chance at 100 percent of the remaining fee in the next period. Additionally, an HHS Centers for Medicare and Medicaid Services award fee plan that was used on several contracts we reviewed stated that the unearned fee is placed in a separate award fee pool to be used at the discretion of the FDO. 
The FDO can roll over up to 100 percent of the unearned fee as long as the money is spent during the same contract year. To ensure that award fees are being used to motivate contractor performance, guidance, where available, from each of the agencies we reviewed states that award fees should be linked to acquisition outcomes such as cost, schedule, and performance. OMB’s guidance states that incentive fee contracts, which include award fee contracts, should be used to achieve specific performance objectives established prior to contract award, such as delivering products and services on time, within cost goals, and with promised performance outcomes. OMB’s guidance also states that awards must be tied to demonstrated results, as opposed to effort, in meeting or exceeding specified performance standards. Contracting officers and program managers across all five agencies we reviewed stated that a successful award fee contract should maintain a portion of fee based on a subjective evaluation of how the contractor identified and responded to issues and challenges and how it mitigated risks, but could benefit from objective targets that equate to a specific amount of fee. In August 2008, NASA’s Deputy Director noted that requirements that do not support desired outcomes should not be included in contracts and that award fees should generally only be used in complex contracts. NASA now requires that award fee contracts are accompanied by a documented cost-benefit analysis, although the requirement is too new to judge its effect. Some contracts we reviewed ensure that award fee evaluations are accurately measuring contractor performance by incorporating objective criteria to serve as inputs for the evaluation. Other contracts combined the subjective criteria of an award fee contract with the objective targets of an incentive fee contract to ensure that specific metrics are evaluated on their actual outcomes. 
These subjective criteria are often described as program management, cost management, or communication and allow for a broader evaluation of contractor performance. Officials that supported the use of subjective criteria noted that they must be accompanied by definitions and measurements of their own. The combination of objective and subjective measurements describes a multiple-incentive contract that incorporates elements of both award and incentive fee contracts. While officials at several agencies told us that this is the preferred structure for incentivizing contractor performance and the FAR states that it is allowed, there is no guidance on how to balance or combine these contract types. OMB’s guidance states that award fees must be tied to demonstrated results, as opposed to effort, in meeting or exceeding specified performance standards. Agencies varied in the extent to which criteria used in contracts allowed them to evaluate results. For example, several DOD contracts we reviewed have included more clearly defined criteria, including the Joint Strike Fighter program that has, according to program officials, created formulas that measure software performance, warfighter capability, and cost control. The criteria, based on metrics, constitute about 30 percent of the total award fee pool. In comparing periods before and after the application of these criteria, the contractor has consistently scored lower in the performance areas than in previous periods where less defined criteria were applied. Because the program has been able to more accurately assess contractor performance, it has saved almost $29 million in less than 2 years since the policy change. Similarly, our review of a contract for a missile defense system found that greater adherence to cost and schedule criteria prevented the program from paying $39 million for events that did not take place within specified time frames.
In addition to the Joint Strike Fighter, other DOD programs that were active before the guidance was issued and not required to follow it have incorporated it voluntarily, with program and contracting officials recognizing the benefits of applying the new practices. In some cases they were able to do this through unilateral changes to the award fee plan. In others, changes required negotiations with the contractor. However, in other contracts we reviewed we found criteria being used to evaluate contractor performance that had little to do with acquisition outcomes. For example, an HHS contract for call center services awarded fees based on 19 performance categories which included results-based criteria, such as response times, but also included criteria based more on efforts, such as requiring the contractor to ensure that staffing levels were appropriate for forecasted volumes during hours of operation, rather than measuring results. The amount of fee established for satisfactory performance or meeting contract requirements generally rewards the contractor for providing the minimum effort acceptable to the government. In our review of contracts, we found that programs used a broad range in setting the amount of fee available for satisfactory performance, but many set it at a level that left little fee to motivate excellent contractor performance. For example, DOE’s Office of Science uses a model that sets the amount of fee able to be earned for meeting expectations at 91 percent, thus leaving 9 percent to motivate performance that exceeds expectations. In contrast, in an HHS contract for management, operations, professional, technical and support services for National Institute of Allergy and Infectious Diseases animal care facilities, the contractor earns 35 percent of the award fee for satisfactory performance, leaving 65 percent of the fee to motivate excellent performance.
In an effort to truly concentrate the award fee on excellent performance, one contract we reviewed for Medicare services provides no award fee for satisfactory performance. NASA’s guidance establishes satisfactory at a level that leaves 30 percent to motivate above satisfactory performance. DOD’s guidance states that satisfactory performance should earn no more than 50 percent of the available award fee. This allows the program to incentivize above satisfactory performance with the remaining 50 percent of the award fee. However, not all DOD programs have followed its guidance. For example, a Missile Defense Agency (MDA) contract signed in December 2007 awards the contractor up to 84 percent of the award fee pool for satisfactory performance, which the agency defines as meeting most of the requirements of the contract. This leaves only 16 percent of the award fee pool to motivate performance that fully meets contract requirements or is considered above satisfactory. While the scale on which the contractors are evaluated is important in determining how much fee is reserved for motivating excellent performance, the judgment of the evaluators and their interpretation of the scale also have an effect. Contracting officers we spoke with varied in their interpretation of how to use the evaluation scale. While DOD has provided guidance on defining adjectival ratings for contractor performance, some programs continue to define meeting contract requirements as excellent performance. For example, on an Air Force program contracting for support services for staff stationed overseas, a contracting official stated that the contractor “has to do a pretty bad job to receive a rating of ‘good,’” a rating that pays in excess of 85 percent of the award fee. The median award fee for this particular Air Force program is 100 percent across 8 award fee periods on 2 contracts.
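The arithmetic behind these examples is simple: whatever share of the pool satisfactory performance earns, only the remainder is left to motivate better-than-satisfactory work. A minimal sketch, using the satisfactory-performance shares cited above:

```python
# Share of the award fee pool left to motivate above-satisfactory
# performance, given the share paid for satisfactory performance.
def motivation_share(satisfactory_pct):
    return 100 - satisfactory_pct

examples = {
    "DOE Office of Science model": 91,
    "HHS animal care facilities contract": 35,
    "MDA contract (Dec. 2007)": 84,
    "DOD guidance ceiling": 50,
}
for name, sat_pct in examples.items():
    print(f"{name}: {motivation_share(sat_pct)}% left to motivate excellence")
```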
These evaluations provide little motivation for improved performance despite fee determination letters that consistently noted that the contractor had room to improve. The data we collected on over 645 award fee periods in 100 contracts provided a wide range of evaluation scores, including 6 periods in which the contractor earned no fee. However, our analysis of data collected from DOE and HHS, which included all contracts over $50 million that were identified as award fee contracts from fiscal year 2004 through 2008, showed that the median award fee paid at these agencies was over 90 percent of available award fees as shown in table 4. Contractors were routinely rated at a level that reflected excellent performance. DOD’s own analysis of its use of award fees in 2007 also showed that it pays a median of 93 percent of available award fees. While our review of NASA contracts was limited to three active contracts that were reviewed in our previous work, they too had a median of 90 percent of available fees paid. The median award fee paid at DHS, also shown in table 4, was 83 percent of available fees, indicating that its contractors are typically rated lower than those at the other agencies. DOD, NASA, and OMB have promulgated guidance that no award fee should be paid for performance that does not meet contract requirements or is judged to be unsatisfactory. However, while the median award fee scores indicate satisfaction with the results of the contracts, programs across the agencies we reviewed continue to use evaluation tools that could allow for contractors to earn award fees without performing at a level that is acceptable to the government under the terms of the contract. 
For example, an HHS contract for maintaining a Medicare claims processing system rates contractor performance on a point scale, from 0 to 100, where the contractor can receive up to 49 percent of the fee for unsatisfactory performance, 50 to 69 percent for marginal performance, and 70 to 79 percent for satisfactory performance (defined as meeting contract requirements). Therefore, the contractor could receive up to 79 percent of the award fee for satisfactory performance, or $1.8 million over the course of the contract. Another contract for operations and technical support at the National Cancer Institute uses a scale that awards up to 59 percent of the award fee for performance that is described as failing to meet customer requirements. The same scale provides up to 79 percent of the award fee, while still not requiring the contractor to fully meet customer requirements. In the contracts we reviewed, DOE’s median award fee paid was 91 percent, indicating satisfaction with the results of the contracts. However, divisions use different approaches in evaluating contractor performance. While the evaluation tool used by NNSA does not allow for payment of award fees for unsatisfactory performance, the evaluation method used by the Office of Science allows a contractor to earn up to 84 percent of the award fee for performance that is defined as not meeting expectations. Contracting officers we spoke with defined meeting expectations differently with some stating that a contractor who performed satisfactorily would meet expectations and others requiring exceptional performance to meet their expectations. In 2007, the Office of Science eliminated use of adjectival distinctions such as “satisfactory” and “excellent” in favor of letter grades and a numerical score system to communicate performance levels and determine award fee amounts. 
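The HHS claims-processing scale described above maps a 0-to-100 point score directly to a fee percentage, so even ratings the plan labels below satisfactory still pay fee. A minimal sketch; the award fee pool size is a hypothetical figure chosen so that the satisfactory maximum approximates the $1.8 million the report cites.

```python
def rating(score):
    """Adjectival bands from the HHS evaluation scale described above;
    the fee share earned simply equals the point score."""
    if score <= 49:
        return "unsatisfactory"
    if score <= 69:
        return "marginal"
    return "satisfactory" if score <= 79 else "above satisfactory"

fee_pool = 2_300_000  # hypothetical contract-life pool (~$2.3 million)
score = 79            # maximum score for satisfactory performance
print(rating(score), round(fee_pool * score / 100))  # satisfactory 1817000
```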
Current Office of Science guidance tasks each site office, with assistance from headquarters, with determining the requirements and milestones for each performance measure and target. While the office has favored the new system, it has not provided instructions on defining satisfactory performance or equating letter grades to adjectival language used in the OMB guidance. Further, current award fee plans for some programs using the Office of Science lab appraisal process allow for award fee to be earned at the C level, which guidance defines as performance in which “a number of expectations ... are not met and/or a number of other deficiencies are identified” with potentially negative impacts to the lab and mission. As much as 38 percent of fee can be earned for objectives that fall in this category, according to Office of Science guidance, establishing a system that rewards below standard performance. While having an evaluation tool in place to prevent award fees from being paid for unsatisfactory performance is important, it is equally important to adhere to the tool that is used. In a Customs and Border Protection contract for maintenance of aircraft, the contractor switched to a more costly method of hazardous waste disposal to reduce its own perceived risks without communicating with the government. The evaluation described the lack of communication as questionable use of taxpayer funds for parochial interests without the coordination and consultation of government representatives. The evaluation noted that the contractor’s approach was egregious and gave the contractor the minimum score of 70, stating that eliminating the fee entirely for poor communication would ignore its performance in other areas. However, in two subsequent periods when the contractor did not respond to identified areas for improvement, the program determined the contractor’s performance to be marginal, resulting in no award fee being paid for those periods. 
DOD is currently the only agency required to collect data, evaluate the effectiveness of award fees, and share proven strategies in using this contract type. While DOD collected information on award fee contracts in 2007 and 2008 in accordance with legislative requirements, these data are not being used to evaluate the effectiveness of award fee contracts. While the 2009 National Defense Authorization Act directs that the FAR be amended to require executive agencies to collect data on award fees, other agencies do not collect these data outside of individual programs. However, within certain programs, automated tools are being used to evaluate the use of award fees. Further, while OMB directed agencies to broadly disseminate its guidance and suggested that agencies find and share information on these contracts using existing web-based resources, contracting officials we spoke with stated that they rely on informal networks for sharing information on the use of award fees. While programs have paid more than $6 billion in award fees over the course of the 100 contracts in our review, none of the five agencies has developed methods for evaluating the effectiveness of an award fee as a tool for improving contractor performance. Instead, program officials noted that the effectiveness of a contract is evident in the contractor's ability to meet the overall goals of the program and respond to the priorities established for a particular award fee period. However, officials were not able to identify the extent to which successful outcomes were attributable to incentives provided by award fees versus external factors, such as maintaining a good reputation. When asked how they would respond to a requirement to evaluate the effectiveness of an award fee, officials stated that they would have difficulty developing performance measures that would be comparable across programs.
Additionally, officials at NASA noted that while cost and schedule are relatively easy to measure, the government may not fully realize the effectiveness of performance until the end of a program. For example, in a satellite program, a contractor's performance becomes meaningless without a successful launch. Of the five agencies we reviewed, DOD is the only agency that collects some type of data on award fee contracts. In 2006, legislation required DOD to develop guidance on the use of award fees that included ensuring that the department collects relevant data on award and incentive fees paid to contractors and that it has mechanisms in place to evaluate such data on a regular basis. In response to the new DOD guidance, data were collected on 576 contract actions placed under 350 contracts for which fee or incentive determinations were made during calendar year 2007. This included $2.3 billion in award and incentive fees available during the period. DOD officials told us that they have shared the analysis of these data with the Senior Procurement Executives of the military services and other Defense agencies. Additionally, the legislation required guidance to include performance measures to evaluate the effectiveness of award and incentive fees as a tool for improving contractor performance and achieving desired program outcomes. However, DOD was not able to establish metrics to evaluate the effectiveness of award fees in terms of performance. DOD pointed out that the data collected on objective efficiencies do not reflect any consideration of the circumstances that affected performance, a critical element in determining award fees. In its analysis, DOD compared fees earned to cost and schedule measurements; the metrics used to evaluate the effectiveness of the incentives included 137 actions that measured cost and schedule efficiencies. While this was 24 percent of the actions it reviewed, it represented 67 percent of the award fees paid.
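The proportions above follow directly from the figures in DOD's analysis; a quick arithmetic check, using only the numbers quoted in the text:

```python
# DOD reported 576 contract actions in its 2007 data collection, of which
# 137 measured cost and schedule efficiencies.
total_actions = 576
measured_actions = 137
share_of_actions = measured_actions / total_actions
print(f"{share_of_actions:.0%} of actions reviewed")
```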
DOD officials noted that the data indicated that lower fees were earned when cost or schedule efficiencies were less than 90 percent. While no agency has developed a tool to track and evaluate the use of award fees, some programs we reviewed have done so individually. Citing that automation can increase the effectiveness, efficiency, transparency, and integrity of the award fee process, one MDA program has developed an automated award fee tool that allows government employees to evaluate, comment on, and offer feedback on all performance criteria. The tool also captures performance inputs and descriptions of performance standards and allows administrators to analyze user ratings to normalize and remove rating bias. While the tool is still in the stages of final testing, MDA program officials stated that the tool has provided this particular MDA program with immediate and effective results in managing the award fee process. However, this automated system has not been implemented across the agency and not all MDA program officials believe that it is beneficial. Similarly, the National Cancer Institute uses a Web-based interface that collects performance information provided by the contractor’s customers to facilitate performance assessments. Officials stated that this tool saves them numerous hours of collecting and sifting through performance data and ensures that all evaluators are making judgments based on the same materials. The guidance issued by OMB in December 2007 included instructions for broad dissemination to agency personnel who have responsibilities for the effective planning, execution, and management of acquisitions. In addition, according to an OMB official, many agencies served on an interagency working group that was created at the suggestion of the guidance. Participation on the working group was at the agency headquarters level and involved officials from each of the agencies we reviewed. 
The interagency working group initiated a separate working group to review and amend the FAR. However, contracting officials at offices within DOE, DOD, DHS, and HHS that develop and execute award fee guidance and practices were not specifically represented in either group, were generally not aware of either of these groups, and were not asked to provide opinions, perspectives, or experiences to either group. Recent legislation required DOD to develop guidance to provide mechanisms for sharing proven incentive strategies for the acquisition of different types of products and services among contracting and program management officials. The Defense Acquisition University (DAU) has established an online community of practice on award fees and is currently developing additional guidance for DOD on the use of award fee contracts. Within DOD, we found that information sharing on best practices and lessons learned is inconsistent between contracting commands. For example, contracting officers at one Air Force command showed us specific guidance and document templates that they received along with detailed training on using award and incentive fee contracts. However, at another Air Force command, contracting officers told us that they do not generally share strategies on using award fees and if they were to do so, it would be through informal networks. Contracting officers at DOE, DHS, and HHS also stated that they were unaware of any formal networks or resources for obtaining and sharing best practices, lessons learned, or other strategies for using award fee contracts. Instead, they rely on informal networks or existing guidance from other agencies such as DOD. Contracting officials noted that the specific nature of their missions makes it difficult to adopt the practices of other agencies. In some cases, contracting officials are taking steps to provide oversight for a number of contracts to achieve consistency and identify unsuccessful practices. 
For example, at MDA, NNSA, and one Air Force command, the determination of award fees is performed by a senior executive who compares the results of several contracts to ensure that a uniform evaluation process and common criteria are used when possible. Similarly, according to DOE procurement officials, at the Office of Environmental Management award fee plans are circulated among contracting officers and program managers, who review them for criteria that have been successful or problematic in past contracts, and at the Office of Science, award fee plans are reviewed and approved annually by headquarters. NASA has a similar process in which programs discuss their performance outcomes at a monthly meeting, with the focus on one particular program. NASA officials stated that the use of award fees and the criteria being used to measure contractor performance are frequent topics in these meetings. Award fee contracts can motivate contractor performance when certain principles are applied. Linking fees to acquisition outcomes ensures that the fee being paid is directly related to the quality, timeliness, and cost of what the government is receiving. Limiting the opportunity for contractors to have a second chance at earning previously unearned fee maximizes the incentive during an award fee period. Additionally, the amount of fee earned should be commensurate with contractor performance, based on evaluation factors designed to motivate excellent performance. Further, no fee should be paid for performance that is judged to be unsatisfactory or does not meet contract requirements. DOD, through revised guidance, has realized benefits from applying these practices in some of its contracts, including some that, because they were active prior to the guidance's issuance, are not required to follow it. While these principles are stated in OMB's guidance, they have not been fully established in guidance at all five agencies we reviewed, notably DOE, DHS, and HHS.
Guidance, while an important first step, will not achieve the desired effect of motivating excellent contractor performance unless it is consistently implemented. Based on our work, this guidance is not being consistently implemented. Further, the lack of methods to evaluate effectiveness and of information sharing among and within agencies has created an atmosphere in which agencies are unaware of whether these contracts are being used effectively, in which poor practices go unnoticed, and in which positive practices remain isolated. To ensure broad implementation of OMB's guidance and positive practices in using award fees, we are making three recommendations to executive agencies. We recommend that the Secretaries of Energy, Health and Human Services, and Homeland Security update or develop implementing guidance on: developing criteria to link award fees to acquisition outcomes such as cost, schedule, and performance; using an award fee in combination with incentive fees to maximize the effectiveness of subjective and objective criteria; determining when rolling over unearned fees to subsequent periods is appropriate; establishing evaluation factors, including definitions of performance, associated fees, and evaluation scales, that motivate contractors toward excellent performance; and prohibiting payments of award fees for performance that is judged to be unsatisfactory or does not meet contract requirements.
To promote the application of existing guidance and expand upon improvements made in using award fees, we recommend that the Secretary of Defense: in preparation for regulatory changes to the FAR, emphasize the importance of consistently adhering to current guidance for all contracts in the interim; review active contracts issued before the effective date of the 2007 guidance for opportunities to apply the guidance when efficiencies can be obtained through unilateral decisions at a minimal cost to the government; and provide guidance on using award fees in combination with incentive fees to maximize the effectiveness of subjective and objective criteria. To assist agency officials in evaluating the effectiveness of award fees, we recommend that the Secretaries of Defense, Energy, Health and Human Services, and Homeland Security, and the Administrator of the National Aeronautics and Space Administration establish an interagency working group to (1) determine how best to evaluate the effectiveness of award fees as a tool for improving contractor performance and achieving desired program outcomes and (2) develop methods for sharing information on successful strategies. We provided a draft of this report to DOD, DOE, DHS, HHS, and NASA. In commenting, each agency concurred with our recommendations. DHS and HHS noted that they have been actively engaged in a FAR working group and indicated their intention of working with that group to address our recommendation for updated guidance. DOD's response to its specific recommendation stated that it will emphasize the importance of consistently adhering to current guidance and will advise that this guidance be applied before the effective date as opportunities allow.
Additionally, each agency noted that it is a member of an interagency incentive contracting working group and proposed that this group be leveraged to facilitate implementing our recommendation on identifying methods to evaluate the effectiveness of award fees and sharing successful strategies. We agree that working through these existing groups would be an adequate approach in implementing our recommendations. DOD, DOE, DHS, and NASA provided written comments that are included as appendices IV, V, VI, and VII respectively. HHS provided oral comments on our draft. In addition, agencies provided technical comments, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution until 30 days from the date of this report. We then will provide copies to the Secretaries of Defense, Energy, Health and Human Services, and Homeland Security and the Acting Administrator of the National Aeronautics and Space Administration. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or huttonj@gao.gov if you have any questions regarding this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix VIII. To identify the actions agencies have taken to revise or develop policies or guidance on the use of award fees, we assessed procurement policies at the departments of Defense (DOD), Energy (DOE), Health and Human Services (HHS), and Homeland Security (DHS) and the National Aeronautics and Space Administration (NASA). These five agencies provided over 95 percent of the total dollars obligated against contracts with an award fee in fiscal year 2008, according to the Federal Procurement Data System (FPDS). 
We reviewed our prior work on the use of award fees at DOD and NASA to identify policies and guidance in place and examined these agencies with regard to changes that were implemented based on our recommendations, legislative requirements, internal guidance, and governmentwide guidance from the Office of Management and Budget (OMB). For the other agencies, DOE, DHS, and HHS, we reviewed existing guidance on the use of award fees where available and compared it to OMB's guidance. We interviewed procurement officials at each agency to discuss planned and implemented policy changes as they related to the OMB guidance. To determine whether current practices for using award fee contracts are consistent with OMB guidance, we reviewed data from 645 evaluation periods in 100 contracts at the five agencies from fiscal year 2004 through fiscal year 2008, allowing for a comparison of practices before and after OMB's guidance. At DOD, we collected data on 40 active and follow-on award fee and multiple incentive type contracts used in our prior review. We also examined the 10 award fee contracts valued at over $10 million that were signed after the DOD guidance's effective date of August 1, 2007, and had held at least one award fee evaluation. Where applicable, we identified the programmatic and monetary effect of implementing policy changes. We estimated cost savings at DOD achieved through the limitation of rollover of unearned fees and other changes in award fee practices consistent with 2007 DOD guidance by comparing the dollar amounts of rollover as a proportion of total available award fee pools before and after our recommendation to issue guidance on when rollover is appropriate. We also estimated each program's savings from canceling its rollover policy by projecting, based upon historical data, a reasonable dollar amount that it would have paid in rollover had it continued using the original policy.
For award fee periods that have taken place or will take place in fiscal years 2009 and 2010, we estimated the amount of unearned fee based on historical averages. At NASA, we reviewed 3 active contracts from our prior review of 10 CPAF contracts. In our prior review, we extracted information from FPDS on the top ten dollar value NASA contracts active between fiscal years 2002 and 2004 that were coded as CPAF. At DOE, DHS, and HHS, we collected data on 47 contracts that represent the universe of CPAF, fixed-price-award-fee, and multiple incentive type contracts with an award fee component that had obligations greater than $50 million from fiscal year 2004 through fiscal year 2008. To ensure the validity of the database from which we drew our contracts, we confirmed the contract type of each of the 47 contracts we selected through DOE, DHS, and HHS contracting officers and contract documentation. Contracts in our sample conducted at least one award fee period between fiscal years 2004 and 2008 and issued a letter of notification (fee determination letter) to the contractor regarding at least one award fee payment. For each of the 100 award fee contracts in our sample of the five agencies, we collected four primary data points for each evaluation period: (1) the award fee available, (2) the award fee paid, (3) the amount of unearned fee rolled over into subsequent evaluation periods, and (4) the end date of the award fee period. In most cases, contracting and program officials submitted the data from firsthand documentation such as award fee plans, contract modifications, and fee determining official letters. From these data, we calculated the percentage of the available fee that was awarded for individual evaluation periods, entire contracts to date, and the overall sample. We collected data from agencies within the five departments and met with selected procurement, contracting, and program officials to obtain the perspective of users of award fees.
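The two calculations described in this appendix — the share of available fee actually awarded, and rollover as a proportion of the available fee pool — can be sketched as follows. All dollar figures here are invented for illustration; they are not drawn from any contract in the sample.

```python
# One tuple per evaluation period:
# (award fee available, award fee paid, unearned fee rolled over)
periods = [
    (1_000_000, 910_000, 50_000),
    (1_200_000, 1_080_000, 0),
    (800_000,   720_000, 80_000),
]

# Percentage of available fee awarded, per period and for the
# contract to date.
per_period = [paid / available * 100 for available, paid, _ in periods]
contract_to_date = (sum(p for _, p, _ in periods)
                    / sum(a for a, _, _ in periods) * 100)

# Rollover dollars as a proportion of the total available fee pool --
# the measure used to compare practices before and after the 2007
# DOD guidance.
rollover_share = (sum(r for _, _, r in periods)
                  / sum(a for a, _, _ in periods) * 100)
```

Aggregating paid and available fees before dividing (rather than averaging the per-period percentages) weights each period by its fee pool, which matches how a contract-to-date percentage would normally be computed.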
At these meetings we discussed experiences, policies, and guidance related to the use of award fees. Agencies from which we collected data include:

Department of Defense: Air Force Space and Missile Systems Center; Air Force Materiel Command; Air Force Aeronautical Systems Center; Air Force Security Assistance Center; Air Force Logistics Command; Air Force Space Command; Army Chemical Materials Agency; White Sands Missile Range; Fort Polk; Army Reserves; Army Space and Missile Defense Command; Naval Air Systems Command; Naval Sea Systems Command; Space and Naval Warfare Systems Center.

Department of Energy: National Nuclear Security Administration; Office of Civilian Radioactive Waste Management; Office of Legacy Management; Office of Environmental Management; Office of Health, Safety and Security; Office of Science; Strategic Petroleum Reserve.

Department of Health and Human Services: Agency for Healthcare Research and Quality; Centers for Disease Control and Prevention; Centers for Medicare and Medicaid Services; National Institutes of Health; Substance Abuse and Mental Health Services Administration.

Department of Homeland Security: Customs and Border Protection; Federal Emergency Management Agency; Transportation Security Administration; U.S. Coast Guard.

We conducted this performance audit from August 2008 through May 2009 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Award fee: An amount of money added to a contract, which a contractor may earn in whole or in part by meeting or exceeding the criteria stated in the award fee plan.
These criteria typically relate to subjective areas within quality, critical processes, technical ingenuity, cost-effective management, program management, subcontract management, and other areas that may have unquantifiable behaviors.

Award fee plan: A document that captures the award fee strategy. The plan details the procedures for implementing the award fee by structuring the methodology of evaluating the contractor's performance during each evaluation period.

Award fee pool: The total of the available award fee for each evaluation period and base fee (if applicable) for the life of the contract.

Award fee review board (AFRB): The AFRB evaluates the contractor's overall performance for the evaluation period in accordance with the award fee plan. The board is composed only of government personnel whose experience in acquisition allows them to analyze and evaluate the contractor's overall performance.

Base fee: An award-fee contract mechanism that is an amount of money over the estimated costs (typically in the range of 0 to 3 percent of the contract value), which is fixed at the inception of the contract and paid to the contractor regardless of performance in a cost-plus-award-fee contract. A base fee is similar to the fixed fee paid to a contractor under a cost-plus-fixed-fee contract, which also does not vary with performance.

Cost contract: A cost-reimbursement contract in which the contractor receives no fee. A cost contract may be appropriate for research and development work, particularly with nonprofit educational institutions or other nonprofit organizations, and for facilities contracts.

Cost-plus-award-fee contract: A cost-reimbursement contract that provides for a fee consisting of a base amount (which may be zero) fixed at inception of the contract and an award amount, based upon a judgmental evaluation by the government, sufficient to provide motivation for excellence in contract performance.
Cost-plus-incentive-fee contract: A cost-reimbursement contract that provides for an initially negotiated fee to be adjusted later by a formula that objectively measures the performance of the contractor.

Cost-reimbursable contract: A contract that provides for payment of the contractor's allowable cost to the extent prescribed in the contract, not to exceed a ceiling.

Evaluation criteria: The criteria that are used to grade each category of performance. The criteria should emphasize the most important aspects of the program to encourage the contractor to deliver outstanding performance. The criteria should be specific to the program and clearly stated in the contract.

Evaluation period: The period of time upon which an award fee is based. This can be a specific increment of time (one year) or based upon the completion of an event (preliminary design review). An award fee amount is tied to each period of time or each event, and the award fee board determines the appropriate fee for the period, subject to approval by the fee determining official.

Fee determining official (FDO): The FDO makes the final determination regarding the amount of award fee earned by the contractor during the evaluation period.

Fixed-price contract: A contract that provides for a price that is either fixed or subject to adjustment, obligating the contractor to complete work according to its terms and the government to pay the specified price regardless of the contractor's cost of performance.

Fixed-price-award-fee contract: A variation of the fixed-price contract in which the contractor is paid the fixed price and may be paid a subjectively determined award fee based on periodic evaluation of the contractor's performance.

Fixed-price incentive contract: A fixed-price contract that provides for adjusting profit and establishing the final contract price by application of a formula based on the relationship of total final negotiated cost to total target cost.
Incentive contract: A contract used to motivate a contractor to provide supplies or services at lower costs and, in certain instances, with improved delivery or technical performance, by relating the amount of fee to contractor performance.

Multiple incentive contract: A contract that contains both incentive and award fee criteria. This type of contract could be coded as a combination contract in the Federal Procurement Data System (FPDS).

Provisional award fee payment: A payment made within an evaluation period prior to a final evaluation for that period. This payment is subject to restrictions and must be paid back to the government if the award fee board decides that the money was not earned.

Reallocation: The process by which the government moves a portion of the available award fee from one evaluation period to another for reasons such as government-caused delays, special emphasis areas, and changes to the Performance Work Statement (PWS).

Rollover: The process of transferring unearned available award fee from one evaluation period to a subsequent evaluation period, thus allowing the contractor an additional opportunity to earn that unearned award fee.

Appendix III: OMB Guidance on the Use of Award and Incentive Fee Contracts

Agencies should weigh the planning required to implement an incentive type contract and the amount of additional resources required for monitoring and determining awards. Risk and cost analyses related to the use of award and incentive contracts should be prepared in writing and approved at a level above the contracting officer or as determined by the agency. Incentive fees must be predetermined in writing and processes for awarding the fees must be included or cross-referenced in the acquisition plan (see FAR 7.105(b)(4)(i)). This incentive fee plan should include standards for evaluating contractor performance and appropriate incentive fee amounts.
When considering the incentive fee arrangement, the plan should distinguish between earning potential for satisfactory versus excellent performance. Metrics should clearly describe what is required and at what point a contractor is considered successful. Additionally, agencies should develop guidance on when it is appropriate to roll over unearned fee to a subsequent evaluation period. Rolling over fees is not the preferred method for incentivizing the contractor to perform above a satisfactory level; it should be permitted only on a limited basis and require prior approval of the appropriate agency official. Using the attachment as a guide, Chief Acquisition Officers should review and update existing agency guidance on incentive fee contracting practices to ensure that fees are awarded in accordance with current regulations and that the guidance addresses the concerns of this memorandum. In addition, during an agency's internal audit process, incentive fee contracts should be reviewed as part of the program management review process. Information on how well incentive fees are achieving their intended purpose and other related lessons learned can be found and shared on the Acquisition Community Connection at https://acc.dau.mil/CommunityBrowser.aspx?id=105550&lang=en-US. To help develop best practices, guidance, and templates, OFPP requests that agencies identify an incentive and award fee point of contact. These individuals may be asked to contribute examples and lessons learned to an interagency working group or to assist in communication and awareness efforts. Please submit the person's name, title, telephone number, and e-mail address to Susan Truslow at OFPP by January 7, 2008. Please ensure broad dissemination of this memorandum among agency personnel who have responsibilities for the effective planning, execution, and management of your acquisitions.
Questions may be referred to Susan Truslow at (202) 395-6810 or struslow@omb.eop.gov or Pat Corrigan at (202) 395-6805 or pcorrigan@omb.eop.gov. Thank you for your attention to this important matter.

cc: Chief Information Officers

Consult agency policy and guidance that supplement FAR 16.4, Incentive Contracts.

Ensure market research documentation and the acquisition plan sufficiently state desired outcomes, performance requirements, milestones, risks, and cost benefits associated with the choice of contract type (FAR 7.105).

Conduct and document risk and cost/benefit analyses that support use of an incentive type contract:
o Conduct a risk assessment and ensure incentive strategies are consistent with the level of risk assumed by the contractor and motivate the contractor by balancing awards with negative consequences;
o Determine whether administrative costs associated with managing the incentive fee are outweighed by the expected benefits; and
o Ensure sufficient human resources are available to properly structure and monitor the contract.

Ensure evaluation factors are:
o Meaningful and measurable;
o Directly linked to cost, schedule, and performance results; and
o Designed to motivate excellence in contractor performance by making clear distinctions in possible award earnings between satisfactory and excellent performance.

Ensure the incentive fee plan:
o Defines clearly the standards of performance for each rating category (e.g., satisfactory, above satisfactory, excellent);
o Defines clearly the percentage of fee the contractor should be paid for each of these categories;
o Documents roles and responsibilities for those involved in monitoring contractor performance and determining award fees;
o Provides detailed guidance on steps in the evaluation process for agency officials;
o Establishes a base fee. Good business practice allows the contractor more than 0% for base fee.
This way, the award fee promotes above average performance; and
o Obtains appropriate approval in accordance with agency policy.

Ensure rollover fees are allowed only in limited circumstances in accordance with agency policy.

The following is GAO's comment on the Department of Homeland Security's letter dated May 28, 2009. While we agree DHS has taken several steps to improve the use of award fee contracts since the issuance of OFPP's guidance, DHS's changes to the Homeland Security Acquisition Manual do not fully address the issues in the OFPP guidance. As we point out in our report, the manual does not describe standards or definitions for determining various levels of performance, nor does it address issues related to rollover.

In addition to the individual named above, Thomas Denomme, Assistant Director; Ann Calvaresi Barr; Laurier Fish; Kevin Heinz; Julia Kennon; Farhanaz Kermalli; John Krump; Caryn E. Kuebler; Karen Sloan; and Monique Williams made key contributions to this report.
In prior work, GAO found that contractors were paid billions of dollars in award fees regardless of acquisition outcomes. In December 2007, the Office of Management and Budget (OMB) issued guidance aimed at improving the use of award fee contracts. GAO was asked to (1) identify agencies' actions to revise or develop award fee policies and guidance to reflect OMB guidance, (2) assess the consistency of current practices with the new guidance, and (3) determine the extent to which agencies are collecting, analyzing, and sharing information on award fees. GAO reviewed the Departments of Defense (DOD), Energy (DOE), Health and Human Services (HHS), and Homeland Security (DHS) and the National Aeronautics and Space Administration (NASA)--agencies that constituted over 95 percent of the dollars spent on award fee contracts in fiscal year 2008. From fiscal year 2004 through fiscal year 2008, agencies have spent over $300 billion on contracts that include monetary incentives, or award fees, for performance that is evaluated against subjective criteria. OMB's guidance on using award fees includes principles such as limiting the opportunities for earning unearned fees in subsequent periods, linking award fees to acquisition outcomes, designing evaluation criteria to motivate excellent performance, and not paying for performance that is unsatisfactory. These principles are largely reflected in DOD's and NASA's updated guidance on the use of award fees. For example, DOD now prohibits payment of award fees for unsatisfactory performance, and NASA requires a documented cost-benefit analysis to support the use of an award fee contract. However, DOE, DHS, and HHS vary in the extent to which their agency-wide guidance reflects the OMB guidance. These departments generally rely on operational divisions to develop award fee guidance; however, many acquisition professionals at these agencies were unaware of the contents of the OMB guidance.
Current practices for using award fee contracts at agencies GAO reviewed often are inconsistent with the new guidance. However, where the revised policies have been applied, the results have been hundreds of millions of dollars in cost savings and better use of government funds. For example, by limiting second chances at unearned fees in eight programs, GAO estimates that DOD will save over $450 million through fiscal year 2010. These practices, however, are not being implemented across DOD. NASA programs now document cost-benefit analyses to justify using award fee contracts. Without clear guidance, agencies within DOE, HHS, and DHS have developed various approaches to using award fees. For example, while DOE's median award fee paid indicates satisfaction with the results of its contracts, its Office of Science uses a scoring system that could allow for payment of up to 84 percent of an award for performance that does not meet expectations. Most of the agencies we reviewed continue to allow contractors second chances at unearned fees. For example, at DHS, a contractor was able to earn 100 percent of its unearned fee in a subsequent period. Agencies do not always use criteria that are based on measuring results. For example, one HHS contract for a call center included criteria that focused more on efforts, such as maintaining proper staffing levels during hours of operation, than on measuring results. Only DOD collects data on the use of award fees. However, the data are largely used to respond to legislative requirements for award fee information. Agencies generally do not have methods to evaluate the effectiveness of award fees. While individual programs and some offices have taken steps to evaluate award fee criteria, officials stated that identifying metrics to compare performance across programs would be difficult.
Further, while GAO found effective practices within some agencies, the lack of a government-wide or, with the exception of DOD, agency-wide forum for sharing information allows these practices to remain isolated examples of potential best practices.
Investments in IT can enrich people’s lives and improve organizational performance. During the last two decades the Internet has matured from being a means for academics and scientists to communicate with each other to a national resource where citizens can interact with their government in many ways, such as by receiving services, supplying and obtaining information, asking questions, and providing comments on proposed rules. However, while these investments have the potential to improve lives and organizations, some federally funded IT projects can—and have—become risky, costly, unproductive mistakes. We have previously testified that the federal government has spent billions of dollars on failed or troubled IT investments, such as the Office of Personnel Management’s Retirement Systems Modernization program, which was canceled in February 2011, after spending approximately $231 million on the agency’s third attempt to automate the processing of federal employee retirement claims; the tri-agency National Polar-orbiting Operational Environmental Satellite System, which was stopped in February 2010 by the Administration after the program spent 16 years and almost $5 billion; the Department of Veterans Affairs’ Scheduling Replacement Project, which was terminated in September 2009 after spending an estimated $127 million over 9 years; and the Department of Health and Human Services’ (HHS) Healthcare.gov website and its supporting systems, which were to facilitate the establishment of a health insurance marketplace by January 2014 but encountered significant cost increases, schedule slips, and delayed functionality. In a series of reports, we identified numerous planning, oversight, security, and system development challenges faced by this program and made recommendations to address them. In light of these failures and other challenges, last year we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations.
18F and USDS were formed in 2014 to help address the federal government’s troubled IT efforts. Both programs have similar missions of improving public-facing federal digital services. 18F was created in March 2014 by GSA with the mission of transforming the way the federal government builds and buys digital services. Agencies across the federal government have access to 18F services. Work is largely initiated by agencies seeking assistance from 18F; the program then decides whether and how it will provide assistance. According to GSA, 18F seeks to accomplish its mission by providing a team of expert designers, developers, technologists, researchers, and product specialists to help rapidly deploy tools and online services that are reusable, less costly, and easier for people and businesses to use. In addition, 18F has several guiding principles, including the use of open source development, user-centered design, and Agile software development. 18F is an office within GSA’s Technology Transformation Service, which was formed in May 2016. 18F is led by the Deputy Commissioner for the Technology Transformation Service, who reports to the service’s Commissioner. Prior to May 2016, 18F was located within the Office of Citizen Services and Innovative Technologies and reported to the Associate Administrator for Citizen Services and Innovative Technologies. In January 2016 GSA began piloting a new organizational structure for 18F that centers on five business units. Custom Partner Solutions. Provides agencies with custom application solutions. This unit also provides consulting services to assist agencies in deciding whether to build, what to build, how to build it, and who will build it. Products and Platforms. Provides agencies with access to tools that address common government-wide needs. Transformation Services.
Aims to improve how agencies acquire and manage IT by providing them with consulting services, including new management models, modern software development practices, and hiring processes. Acquisition Services. Provides acquisition services and solutions to support digital service delivery, including access to vendors specializing in Agile software development and request for proposal development consultation. Learn. Provides agencies with education, workshops, outreach, and communication tools on developing and managing digital services. To provide the products and services offered by each business unit, 18F relied on 173 staff as of March 2016. The staff are assigned to different projects that are managed by the business units. According to 18F, the program used special hiring authorities for the vast majority of its staff: Schedule A excepted service authorities were used to hire 162 staff. These authorities permit the appointment of qualified personnel without the use of a competitive examination process. GSA has appointed its staff to terms that are not to exceed 2 years. According to the Director of the 18F Talent division, after the initial appointment has ended, GSA has the option of appointing staff to an additional term not to exceed 2 years. GSA funds 18F through the Acquisition Services Fund—a revolving fund that operates on the revenue generated from its business units rather than an appropriation received from Congress. The Federal Acquisition Service, with the concurrence of the Administrator of General Services, has used the fund to invest in the development of 18F products and services that will be resold by GSA and used by other organizations. 18F is to recover costs through the Acquisition Services Fund reimbursement authority for work related to acquisitions and the Economy Act reimbursement authority for all other projects.
According to the memorandum of agreement between 18F and the Federal Acquisition Service, 18F is required to have a plan to achieve full cost recovery. In order to recover its costs, 18F is to establish interagency agreements with partner agencies and charge them for actual time and material costs, as well as a fixed overhead amount. Table 1 describes 18F’s revenue, expenses, and net operating results for fiscal years 2014 and 2015. Table 2 describes 18F’s projected revenue, expenses, and net operating results for fiscal years 2016 through 2019. As shown in table 2, according to its projections, 18F plans to generate revenue that meets or exceeds operating expenses and cost of goods sold beginning in fiscal year 2019. In May 2016 the GSA Inspector General reported on an information security weakness pertaining to 18F. Specifically, according to the report, 18F misconfigured a messaging and collaboration application, which resulted in the potential exposure of personally identifiable information (PII). 18F officials told us that, based on the preliminary results of their ongoing review, information such as individuals’ first names, last names, e-mail addresses, and phone numbers was made available on the messaging and collaboration platform’s databases and could have been accessible by authorized users of the application. Those officials also stated that, based on the preliminary results of their ongoing review, more sensitive PII, such as Social Security numbers and protected health information, was not exposed. They added that they are continuing a detailed review, in coordination with the GSA IT organization, to confirm that more sensitive PII was not made available. According to the Administration, in 2013 it initiated an effort that brought together a group of digital and technology experts from the private sector that helped fix Healthcare.gov.
In an effort to apply similar resources to additional projects, in August 2014 the Administration announced the launch of USDS, to be led by an Administrator and Deputy Federal CIO who reports to the Federal CIO. According to OMB, USDS’s mission is to transform the most important public-facing digital services. USDS selects which projects it will apply resources to and generally initiates the effort with agencies. To accomplish its mission, USDS aims to recruit private sector experts (e.g., IT engineers and designers) and leading civil servants, and then deploy small teams to partner them with government agencies. OMB states that, with the help of these experts, USDS applies best practices in product design and engineering to improve the usefulness, user experience, and reliability of the most important public-facing federal digital services. As of November 2015, USDS staff totaled about 98 individuals. Similar to 18F, USDS assigns individuals directly to projects aimed at achieving its mission. USDS has used special hiring authorities for the vast majority of its staff. Specifically: Schedule A excepted service. According to USDS, as of November 2015, 52 USDS staff members were hired using the Schedule A excepted service hiring authority. According to the USDS Administrator, appointments made using this authority are not to exceed 2 years. At the end of that period, staff can be appointed for an additional term of no more than 2 years. Intermittent consultants. According to USDS, as of November 2015, 39 USDS staff members were intermittent consultants—that is, individuals hired through a noncompetitive process to serve as consultants on an intermittent basis or without a regular tour of duty. The USDS Administrator explained that some of these staff are eventually converted to temporary appointments under the Schedule A authority.
According to its Administrator, USDS does not generally make permanent appointments for its staff because shorter terms allow the program to continuously bring in new staff and ensure that its ideas are continually evolving. USDS reported spending $318,778 during fiscal year 2014 and approximately $4.7 million during fiscal year 2015. For fiscal year 2016, USDS plans to spend approximately $14 million, and the President’s fiscal year 2017 budget estimated obligations of $18 million for USDS. In an effort to make improvements to critical IT services throughout the federal government, the President’s Budget for fiscal year 2016 proposed funding for the 24 Chief Financial Officers Act agencies, as well as the National Archives and Records Administration, to establish digital service teams. USDS policy calls for these agencies to, among other things, hire or designate an executive for managing their digital service teams. Additionally, USDS has established a hiring pipeline for digital service experts—that is, a unified process managed by USDS for accepting and reviewing applications, performing initial interviews, and providing agencies with candidates for their digital service teams. According to OMB, before using this service, agencies must agree to a charter with the USDS Administrator. Over the last three decades, several laws have been enacted to assist federal agencies in managing IT investments. For example, the Paperwork Reduction Act of 1995 requires that OMB develop and oversee policies, principles, standards, and guidelines for federal agency IT functions, including periodic evaluations of major information systems. In addition, the Clinger-Cohen Act of 1996, among other things, requires agency heads to appoint CIOs and specifies many of their responsibilities.
With regard to IT management, CIOs are responsible for implementing and enforcing applicable government-wide and agency IT management principles, standards, and guidelines; assuming responsibility and accountability for IT investments; and monitoring the performance of IT programs and advising the agency head whether to continue, modify, or terminate such programs. Most recently, in December 2014, IT reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act or FITARA) was enacted, which required most major executive branch agencies to ensure that the CIO had a significant role in the decision process for IT budgeting, as well as the management, governance, and oversight processes related to IT. The law also required that CIOs review and approve (1) all contracts for IT services associated with major IT investments prior to executing them and (2) the appointment of any other employee with the title of CIO, or who functions in the capacity of a CIO, for any component organization within the agency. OMB also released guidance in June 2015 that reinforces the importance of agency CIOs and describes how agencies are to implement the law. OMB plays a key role in helping federal agencies address these laws and manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. Within OMB, the Office of E-Government and Information Technology, headed by the Federal CIO, directs the policy and strategic planning of federal IT investments and is responsible for oversight of federal technology spending.

18F and USDS have provided a variety of development and consulting services to agencies to support their technology efforts. Specifically, between March 2014 and August 2015, 18F staff helped 18 agencies with 32 projects and generally provided six types of services to the agencies, the majority of which related to development work.
In addition, between August 2014 and August 2015, USDS provided assistance on 13 projects at 11 agencies and provided seven types of consulting services. Further, agencies were generally satisfied with the services they received from 18F and USDS. Specifically, of the 26 18F survey respondents, 23 were very satisfied or moderately satisfied and 3 were moderately dissatisfied. For USDS, all 9 survey respondents were very satisfied or moderately satisfied. Between March 2014 and August 2015, GSA’s 18F staff helped 18 agencies with 32 projects, and generally provided services relating to its five business units: Custom Partner Solutions, Products and Platforms, Transformation Services, Acquisition Services, and Learn. In addition, 18F also provided agency digital service team candidate qualification reviews in support of USDS. Custom Partner Solutions. 18F helped 11 agencies with a total of 19 projects relating to developing custom software solutions. Of the 19 projects, 12 were related to website design and development. For example, regarding GSA’s Pulse project—a website that displays data about the extent to which federal websites are adopting best practices, such as hypertext transfer protocol over secure sockets layer (SSL)/transport layer security (TLS) (HTTPS)—18F designed, developed, and delivered the first iteration of Pulse within 6 weeks of the project kick-off. According to the GSA office responsible for managing the project, the first iteration has led to positive outcomes for government-wide adoption of best practices; for example, between June 2015 and January 2016, the percentage of federal websites using HTTPS increased from 27 percent to 38 percent.
As another example, officials from the Department of Education’s college choice project stated that 18F helped develop the project’s website, which the public can use to search among colleges to find schools that meet their needs (e.g., degrees offered, location, size, graduation rate, average salary after attendance). 18F also helped two agencies, HHS and the Department of Defense, on two projects to develop application programming interfaces—sets of routines, protocols, and tools for building software applications that specify how software components should interact. Acquisition Services. 18F helped seven agencies on seven projects relating to acquisition services consulting. For example, 18F provided the Department of State’s Bureau of International Information Programs with cloud computing services offered under a GSA blanket purchase agreement (BPA)—specifically, cloud management services (e.g., developers, testing and quality assurance, cloud architect) and infrastructure-as-a-service. According to the Department of State, the department was able to deploy its instance of the infrastructure service only 1 month after it executed an interagency agreement with 18F. In addition, according to Social Security Administration officials, 18F helped the agency to incorporate Agile software development practices into its requests for proposals for its Disability Case Processing System. Learn. 18F provided services to four agencies on four projects regarding training, such as educating agency officials on Agile software development. For example, 18F conducted training workshops on Agile software development techniques with the Social Security Administration and Small Business Administration. In addition, according to the Department of Labor’s Wage and Hour Division officials, 18F conducted a 3-day workshop on IT modernization. Transformation Services.
18F assisted two agencies on two projects to help acquire the people, processes, and technology needed to successfully deliver digital services. For example, 18F assisted the Environmental Protection Agency on an agency-wide technology transformation. According to an official within the office of the CIO, 18F assisted the agency with e-Manifest—a system used to track toxic waste shipments. The official noted that 18F provided user-centered design, Agile coaching, prototype development services, and Agile and modular acquisition services. Further, the official stated that 18F helped turn around the project and significantly decreased the time of delivery for e-Manifest. Products and Platforms. 18F helped two agencies on two projects related to developing software solutions that can potentially be reused at other federal agencies. For example, according to GSA officials responsible for managing GSA’s Communicart project, 18F provided the agency with an e-mail-based tool for approving office supply purchases. Agency digital service team candidate qualification review. 18F worked with USDS to recruit and hire team members for agency digital service teams. According to 18F officials, it provided USDS with subject matter experts to review qualifications of candidates for agency digital service teams. Of the 32 18F projects, 6 are associated with major IT investments. Cumulatively, the federal government plans to spend $853 million on these investments in fiscal year 2016. Additionally, CIO risk evaluations obtained from the IT Dashboard showed that three of these investments were rated as low or moderately low risk and three were rated medium risk. Table 3 describes the associated investments, including their primary functional areas, planned fiscal year 2016 spending, and CIO rating as of May 2016.
18F is also developing products and services—including an Agile delivery service blanket purchase agreement (BPA), cloud.gov, and a shared authentication platform: Agile delivery service BPA. 18F established this project in order to support its need for Agile delivery services, including Agile software development. In August and September 2015, GSA awarded BPAs to 17 vendors. The BPAs are for 5 years and allow GSA to place orders against them for up to 13 specific labor categories relating to Agile software development (e.g., product manager, backend web developer, Agile coach) at fixed unit prices. The BPAs do not obligate any funds; rather, they enable participating vendors to compete for follow-on task orders from GSA. In cases where 18F determines that it should use the Agile BPA to provide services to partner agencies, GSA anticipates that 18F will work with that agency to develop a request for quotations and the other documents needed for a competition with Agile BPA vendors. In June 2016 GSA issued its first task order under the Agile BPA for building a web-based dashboard that would describe the status of vendors in the certification process for the Federal Risk and Authorization Management Program (FedRAMP)—a government-wide program, managed by GSA, to provide joint authorizations and continuous security monitoring services for cloud computing services for all federal agencies. The initial BPAs were established under the first of three anticipated award pools—all of which are part of the “alpha” component of the Agile BPA project. 18F officials stated that they planned to establish BPAs for the other two pools in June 2016. They also anticipate a future beta version of the project that could potentially allow federal agencies beyond 18F to issue task orders directly to vendors. Officials stated that they expect to have a plan for the next steps of the beta version of this project by December 2016.
18F officials have also expressed interest in creating additional marketplaces, such as those relating to data management, developer productivity tools, cybersecurity, and health IT. As of March 2016, 18F did not have time frames for when it planned to develop these additional marketplaces. Cloud.gov. 18F also developed the cloud.gov service, which is an open source platform-as-a-service that agencies can use to manage and deploy applications. 18F initially built cloud.gov in order to enable the group to use applications it developed for partner agencies. In creating the service, 18F decided to offer it to other agencies because, according to 18F officials, cloud.gov offers a developer-friendly, secure platform, with tools that agencies can use to accelerate the process of assessing information security controls and authorizing systems to operate. According to 18F, the goal of cloud.gov is to provide government developers and their contractor partners the ability to easily deploy systems to a cloud infrastructure with better efficiency, effectiveness, and security than current alternatives. According to a roadmap for cloud.gov, 18F plans to receive full FedRAMP Joint Authorization Board approval for this service by November 2016. Once available, the group anticipates requiring agencies to pay for this service through an interagency agreement with 18F. Shared authentication platform. In May 2016 18F announced that it was initiating an effort to create a platform for users who need to log into federal websites for government services. According to 18F, this system is designed to be each citizen’s “one account” with the government and allow the public to verify an identity, log into government websites, and if necessary, recover an account. As of May 2016, 18F planned to conduct prototyping activities through September 2016 and did not have plans beyond that time frame.
In addition to developing future products and services, 18F created a variety of guides and standards for use internally as well as by agency digital service teams. These guides address topics such as accessibility, application programming interfaces, and Agile software development. From August 2014 through August 2015, USDS provided assistance on 13 projects across 11 agencies. The group generally provided seven types of consulting services: quality assurance, problem identification and recommendations, website consultation, system stabilization, information security assessment, software engineering, and data management. Quality assurance. Three of the 13 projects related to providing quality assurance services. For example, regarding the Social Security Administration’s Disability Case Processing System, USDS reviewed the quality of the software and made recommendations that, according to the agency, resulted in cost savings. Additionally, for the Departments of Veterans Affairs and Defense Service Treatment Record project, USDS provided engineers who identified and resolved errors in the process of exchanging records between the two departments, according to the Department of Veterans Affairs. Further, for the HHS Healthcare.gov system, the group performed services aimed at optimizing the reliability of the system, according to HHS. Problem identification and recommendations. USDS identified problems and made recommendations for three projects. For all three projects, it performed a discovery sprint—a quick (typically 2-week) review of an agency’s challenges, which is to culminate in a clear understanding of the problems and recommendations for how to address the issues. For example, according to USDS, the group performed a discovery sprint for the Department of the Treasury Internal Revenue Service that focused on three areas: authentication of taxpayers, modernizing systems through event-driven architecture, and redesigning the agency’s website.
USDS delivered its recommendations to the Internal Revenue Service and suggested that work initially focus on taxpayer authentication. Consistent with these recommendations, according to USDS, the group and the agency focused on authentication, including the reopening of the online application Get Transcript. For the Department of Justice Federal Bureau of Investigation’s National Incident Based Reporting System, according to USDS, the group performed a discovery sprint and made several recommendations for accelerating deployment of the system. Website consultation. USDS provided consultation services for three agency website projects. For example, for the Office of the U.S. Trade Representative’s Trans-Pacific Partnership Trade Agreements website, USDS provided website design advice and confirmed that the agency had the necessary scalability to support the number of anticipated visitors. Additionally, it consulted with the Office of Personnel Management (OPM) on the design, implementation, and development of a website for providing information on reported data breaches. System stabilization. For the Department of State’s Consular Consolidated Database, according to USDS, it helped stabilize the system and return it to operational service after a multi-week outage in June 2015. Information security assessment. USDS helped with an information security assessment regarding Electronic Questionnaires for Investigations Processing, which encompasses the electronic applications used to process federal background check investigations. Software engineering. For the Department of Homeland Security U.S. Citizenship and Immigration Services Transformation project, USDS’s software engineering advisors provided guidance on private sector best practices in delivering modern digital services.
According to the department, the group’s work has supported accomplishments such as increasing the frequency of software releases and improving adoption of Agile development best practices. Data management. For the Department of Homeland Security Office of Immigration Statistics, USDS helped to develop monthly reports on immigration enforcement priority statistics. According to the department, USDS supported the development of processes for obtaining data from other offices within the department and generating the monthly reports. According to the department, after 7 weeks of working with USDS, it was able to develop a proof of concept that reduced the report-generating process from a month to 1 day. Seven of the 13 projects are associated with major IT investments. Cumulatively, the federal government plans to spend over $1.24 billion on these investments in fiscal year 2016. Three investments were rated by their CIOs as low or moderately low risk and four investments were rated as being medium risk. Table 4 describes the associated investments, including their primary functional areas, planned fiscal year 2016 spending, and CIO ratings as of May 2016. In addition to providing services to agencies, USDS has developed products to help agencies improve federal IT services. For example, it developed the Digital Services Playbook to provide government-wide recommendations on practices for building digital services. The group also created the TechFAR Handbook to explain how agencies can use the Digital Services Playbook in ways that are consistent with the Federal Acquisition Regulation. Further, USDS, in collaboration with 18F, developed the draft version of U.S. Web Design Standards, which includes a visual style guide and a collection of common user interface components. With this guide, USDS aims to improve government website consistency and accessibility.
In addition to developing guidance, USDS, in collaboration with OMB’s Office of Federal Procurement Policy, used challenge.gov to incentivize the public to create a digital service training program for federal contract professionals. The challenge winner received $250,000 to develop and pilot a training program. Additionally, the Deputy Administrator for USDS stated that 30 federal contract professionals from more than 10 agencies completed this pilot program in March 2016. According to OMB, the program is being revised and transitioned to the Federal Acquisition Institute, where it will be included as part of a certification for digital service contracting officers. In responses to a satisfaction survey we administered to agency managers of selected 18F and USDS projects, a majority of managers reported being satisfied with the services they received from the groups. Specifically, the average score for services provided by 18F was 4.38 (on a 5-point satisfaction scale, where 1 is very dissatisfied and 5 is very satisfied) and the average score for the services provided by USDS was 4.67. Table 5 describes the survey results for 18F and USDS. In addition to providing scores, the survey respondents also provided written comments. Regarding 18F, five factors were cited by two or more respondents as contributing to their satisfaction with the services the program provided: delivering quality products and services, providing good customer service, completing tasks in a timely manner, employing staff with valuable knowledge and skills, and providing valuable education to agencies. For example, one respondent stated that 18F has an expert staff that helped the team understand Agile software development and incorporate user-centered design into the agency’s development process.
With respect to USDS, four factors were cited by two or more respondents as contributing to their satisfaction with its services: delivering quality services, providing good customer service, completing tasks in a timely manner, and employing staff with valuable knowledge and skills. For instance, one respondent stated that USDS responded to the agency’s request in a matter of hours, quickly developed an understanding of the agency’s IT system, and pushed to improve the system, even in areas beyond the scope of USDS’s responsibility. Although the majority of agencies were satisfied, a minority of respondents provided written comments describing their dissatisfaction with services provided by 18F. For example, six respondents cited poor customer service, four respondents cited higher than expected costs, and one respondent stated that 18F’s use of open source code may not meet the agency’s information security requirements. In a written response to these comments, 18F stated that it has received a variety of feedback from its partners and has modified and updated its processes continuously over the past 2 years. For example, with respect to higher than expected costs, 18F stated that project costs sometimes needed to be adjusted mid-project to address, among other things, higher than expected infrastructure usage or unexpected delays. To address this issue, 18F stated that it uses the assistance of subject matter experts to estimate project costs, and wrote a guide to assist with, among other things, better managing the budgets of ongoing projects. Regarding 18F’s use of open source code, it stated that it has worked with its partners to discuss the use of open source software and information security practices. To assess actual results, prioritize limited resources, and ensure that the most critical projects receive attention, USDS and 18F should establish and implement the following key practices: Define outcome-oriented goals and measure performance. 
Our previous work and federal law stress the importance of focusing on outcome-oriented goals and performance measures to assess the actual results, effects, or impact of a program or activity compared to its intended purpose. Goals should be used to elaborate on a program’s mission statement and should be aligned with performance measures. In turn, performance measures should be tied to program goals and demonstrate the degree to which the desired results were achieved. To do so, performance measures should have targets to help assess whether goals were achieved by comparing projected performance and actual results. Finally, goals and performance measures should be outcome-oriented—that is, they should address the results of products and services. Establish and implement procedures for prioritizing IT projects. We have reported that establishing and implementing procedures, to include criteria, for prioritizing projects can help organizations consistently select projects based on their contributions to the strategic goals of the organization. Doing so will better position agencies to effectively prioritize projects and use the best mix of limited resources to move toward their goals. 18F has developed several outcome-oriented goals, performance measures, and procedures for prioritizing projects, which it has largely implemented. However, not all of its goals are outcome-oriented and it has not yet measured program performance. Define Outcome-Oriented Goals and Measure Performance At the conclusion of our review in May 2016, 18F provided 5 goals and 17 associated performance measures that the organization aims to achieve by September 2016 (see table 6). To 18F’s credit, several of its goals and performance measures appear to be outcome-oriented. For example, the goal of delivering two government-wide platform services and the associated performance measures are outcome-oriented in that they address results—that is, delivering services to partner agencies. 
However, not all of the goals and performance measures appear to be outcome-oriented. For example, the goal of growing 18F to 215 staff while sustaining a healthy culture and its associated measure of hiring 47 staff do not focus on results of products or services. Further, not all of the performance measures have targets. For example, seven of the performance measures state that 18F will establish performance indicators, but 18F has yet to do so. Moreover, 18F does not have goals and associated measures that describe how it plans to achieve its mission after September 2016. In addition, although 18F is required to have a plan to achieve full cost recovery, it has yet to recover costs and its projections for when this will occur have slipped over time. Specifically, in June 2015, 18F projected that it would fully recover its costs for an entire fiscal year beginning in 2016; however, in May 2016, 18F provided revised projections indicating that it would recover costs beginning in fiscal year 2019. Those projections also indicated that, in the worst case, it would not do so through 2022, the final year of its projections. Establishing performance measures and targets that are tied to achieving full cost recovery would help management gauge whether the program is on track to meet its projections. However, 18F has not established such performance measures and targets. Finally, 18F has yet to fully assess the actual results of its activities. Specifically, the group has not assessed its performance in accordance with the 17 performance measures it developed. 18F’s then-parent organization assessed its own performance quarterly beginning in the 4th quarter of fiscal year 2015, including for measures that 18F was responsible for. However, this review process did not include or make reference to the 17 measures developed to gauge 18F’s performance, and thus does not provide insight into how well it is achieving its own mission. 
In a written response, GSA stated that 18F performance is measured as part of the Technology Transformation Service’s goals and measures and that these goals and measures should form the basis for our review. However, the Technology Transformation Service’s goals and measures do not describe how GSA aims to achieve the specific mission of 18F. Until it establishes goals and performance measures beyond September 2016, ensures that all of its goals and performance measures are outcome-oriented, and ensures that its performance measures have targets, 18F will not have a clear definition of what it wants to accomplish. Additionally, without developing performance measures and targets tied to achieving full cost recovery, GSA will lack a fully defined approach to begin recovering all costs in fiscal year 2019. Further, until 18F fully measures actual results, it will not be positioned to assess the status of its activities and determine the areas that need improvement. Establish and Implement Procedures for Prioritizing IT Projects 18F has developed procedures, including criteria, for prioritizing projects and largely implemented its procedures. Specifically, according to the Director of Business Strategy, potential projects are discussed during weekly intake meetings. As part of these meetings, 18F discusses project decision documents, which outline the business, technical, and design elements, as well as the schedule, scope, and resources needed to fulfill the client’s needs. Using these documents, 18F determines whether proposed projects meet, among other things, the following criteria: (1) the project is aligned with the products and services offered by 18F, (2) it can be completed in a time frame that meets the agency’s needs and at a cost that fits the agency’s budget, and (3) the project has government transformation potential (e.g., impact on the public, cost savings). 
These documents are used by the business unit leads to make a final decision about whether to accept the projects. 18F has largely implemented its procedures. To its credit, with respect to the 14 projects that 18F selected since establishing its prioritization and selection process, 18F developed a decision document for 12 of the 14 projects. However, 18F did not develop a decision document for the 2 remaining projects—the Nuclear Regulatory Commission’s Master Data Management project and GSA’s labs.usa.gov project. With respect to the Nuclear Regulatory Commission’s Master Data Management project, 18F officials explained that this project only required staff from one division; as such, that division was able to independently prioritize and select this project. Additionally, regarding the GSA labs.usa.gov project, 18F officials said the Associate Administrator for the Office of Citizen Services and Innovative Technologies directed 18F to provide assistance. If 18F consistently follows its process for prioritizing projects, it will be better positioned to apply resources to IT projects with the greatest need of improvement. While USDS has developed program goals and a process for prioritizing projects, it has not fully implemented important program management practices. Define Outcome-Oriented Goals and Measure Performance In November 2015 USDS developed four goals to be achieved by December 2017: (1) recruit and place over 200 digital service experts in strategic roles at agencies and cultivate a continually growing pipeline of quality technical talent through USDS, (2) measurably improve five to eight of the government’s most important services, (3) begin the implementation of at least one outstanding common platform, and (4) increase the quality and quantity of technical vendors working with government and cultivate better buyers within government. Additionally, USDS established a performance measure with a target for one of its goals. 
Specifically, it has a measure for its first goal as it plans to measure the extent to which it will hire 200 digital service experts by December 2017. To its credit, several of the goals appear to be outcome-oriented. For example, improving five to eight services is outcome-oriented in that it addresses results. However, USDS has not established performance measures or targets for its other goals. In addition, the program’s first goal—recruit and place over 200 digital service experts in strategic roles at agencies and cultivate a continually growing pipeline of quality technical talent through USDS—does not appear to be outcome-oriented. Further, USDS has only measured actual results for one of its goals. Specifically, for the goal of placing digital service experts at agencies, as of May 2016, USDS officials stated that they had 152 digital service experts. However, USDS has not measured actual results for the other three goals. USDS officials provided examples of how they informally measure performance for the other three goals. For example, for the goal of measurably improving five to eight of the government’s most important services, the USDS Administrator stated that approximately 1 million visitors viewed the Department of Education’s College Scorecard website in the initial days after it was deployed. However, USDS has not documented these measures or the associated results to date. Until USDS ensures that all of its goals are outcome-oriented and establishes performance measures and targets for each goal, it will be difficult to hold the program accountable for results. Additionally, without an assessment of actual results, it is unclear what impact USDS’s actions are having relative to its mission and whether investments in agency digital service teams are justified. Establish and Implement Procedures for Prioritizing Projects USDS has developed procedures and criteria for prioritizing projects. 
To identify projects to be considered, USDS is to use, among other sources, June 2015 and June 2016 OMB reports to Congress that identify the 10 highest-priority federal IT projects in development. To prioritize projects, USDS has the following three criteria, which are listed in their order of importance: (1) What will do the greatest good for the greatest number of people in the greatest need? (2) How cost-efficient will the USDS investment be? and (3) What potential exists to use or reuse a technological solution across the government? Using these criteria, USDS intends to create a list of all potential projects, to include their descriptions and information on resource needs. This list is to be used by USDS leadership to make decisions about which projects to pursue. To its credit, USDS created a list of all potential, ongoing, and completed projects, which included project descriptions and resource needs. Additionally, USDS has engaged with 6 of the 10 priority IT projects identified in the June 2015 and June 2016 reports, including HHS’s Healthcare.gov project and the Department of Homeland Security’s U.S. Citizenship and Immigration Services Transformation. According to a USDS staff member, USDS considered the remaining 4 projects and decided not to engage with them to date. Although USDS has yet to develop a quarterly report on the 10 high priority programs, which it was directed by Congress to develop, it expects to issue its first report by September 2016. Specifically, in December 2015, Congress modified its direction for the Executive Office of the President to develop the reports regarding the top 10 high priority programs and specifically called for USDS to do so on a quarterly basis. 
If USDS develops its report on the 10 high priority programs within the established time frame and on a quarterly basis thereafter, and considers the programs identified in these reports as part of its prioritization process, it will have greater assurance that it will apply resources to the IT projects with the greatest need of improvement. To help agencies effectively deliver digital services, the President’s Budget for fiscal year 2016 proposed funding for digital service teams at 25 agencies—the 24 Chief Financial Officers Act agencies, as well as the National Archives and Records Administration. According to USDS policy, agencies are to, among other things, hire or designate an executive for managing their digital service teams. In addition, USDS has called for the deputy head of these agencies (or equivalent) to, among other things, agree to a charter with the USDS Administrator. After agreeing to a charter, according to USDS, agencies can use USDS’s hiring pipeline for digital service experts. Of the 25 agencies included in the President’s budget proposal to establish teams, OMB has established charters with 6 agencies for their digital service teams—the Departments of Defense, Health and Human Services, Homeland Security, the Treasury, State, and Veterans Affairs. The charters establish the executives for managing digital service teams and describe the reporting relationships between the team leaders and agency leadership. In addition, according to the Deputy USDS Administrator, USDS expects to establish charters with an additional 2 agencies by the end of the fiscal year—the Department of Education and the Small Business Administration. 
For the remaining 16 agencies, as of April 2016, 8 agencies reported that they plan to establish digital service teams but have yet to establish charters with USDS—the Department of Housing and Urban Development, Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, National Archives and Records Administration, National Science Foundation, Nuclear Regulatory Commission, and Office of Personnel Management. Of the other 9 agencies, 8 reported that they do not plan to establish digital service teams by September 2016 because they did not receive requested funding—the Departments of Agriculture, Commerce, Energy, the Interior, Justice, Labor, and Transportation; and the U.S. Agency for International Development. The remaining agency, the Social Security Administration, does not plan to establish a team because, according to officials, it does not have large, public-facing IT projects that are troubled. Table 7 summarizes agency and OMB efforts to establish digital service teams. Congress has recognized the importance of having a strong agency CIO. In 1996, the Clinger-Cohen Act established the position of agency CIO and, among other things, gave these officials responsibility for IT investments, including IT acquisitions, monitoring the performance of IT programs, and advising the agency head whether to continue, modify, or terminate such programs. More recently, in December 2014, FITARA was enacted into law. It required most major executive branch agencies to ensure that the CIO has a significant role in the decision process for IT budgeting, as well as the management, governance, and oversight processes related to IT. The law also required that CIOs review and approve (1) all contracts for IT services associated with major IT investments prior to executing them and (2) the appointment of CIOs for any component within the agency. 
OMB also released guidance in June 2015 that reinforces the importance of agency CIOs and describes how agencies are to implement FITARA. Further, according to our prior work, leading organizations clearly define responsibilities and authorities governing the relationships between the CIO and other agency components that use IT. Only one of the four agencies we selected for review—the Department of Homeland Security—defined the relationship between the executive for managing the digital service team and the agency CIO. Specifically, the Department of Homeland Security established a charter for its digital service team, signed by both the Administrator of USDS and the Deputy Secretary, which outlines the reporting structure and authorities for the digital services executive, including the relationship with the CIO. For example, according to the charter, the digital services executive will report on a day-to-day basis to the CIO, but will also report directly to the Deputy Secretary. However, the other three agencies we reviewed—the Departments of Defense, State, and Veterans Affairs—have not defined the role of agency CIOs with regard to these teams. Although they have established charters for these teams, which describe the reporting structure between the digital services executive and senior agency leadership, the charters do not describe the role of the agencies’ CIOs and they have not documented this information elsewhere. The Department of Defense CIO and the Department of Veterans Affairs Principal Deputy Assistant Secretary for the Office of Information and Technology told us that they work closely with their agency digital service teams. However, while these officials have coordinated with the agency digital service teams, the roles and responsibilities governing these relationships should be described to ensure that CIOs can carry out their statutory responsibilities. 
In contrast to the Departments of Defense and Veterans Affairs, the State CIO told us that he has had limited involvement in the department’s digital service team. He added that he believes it will be important for CIOs to be involved in agency digital service teams in order to sustain their efforts. In written comments, OMB acknowledged that the Department of State’s charter does not describe the role of the CIO, but stated that the Departments of Defense and Veterans Affairs digital service team charters at least partially address the relationship between digital service teams and agency CIOs. Specifically, with respect to the Department of Defense, OMB stated that the charter calls for senior leadership, including the department’s CIO, to ensure that digital service team projects proceed without delay. Additionally, according to OMB, the charter for the Veterans Affairs digital service team calls for the team to be located in and supported by the department’s CIO organization. However, these requirements do not address the specific responsibilities or authorities of the Departments of Defense and Veterans Affairs’ CIOs with regard to their digital service teams. The lack of defined relationships is due, in large part, to the fact that USDS policy on digital service teams does not describe the expected relationship between agency CIOs and these teams. As previously mentioned, USDS policy calls for the digital service team leader to report directly to the head of the agency or its deputy; however, it does not describe the expected responsibilities and authorities governing the relationship of the CIO. Until OMB updates the USDS policy to clearly define the responsibilities and authorities governing the relationships between CIOs and digital service teams and ensures that existing agency digital service team charters or other documentation reflect this policy, agency CIOs may not be effectively involved in the digital service teams. 
This is inconsistent with long-standing law, as well as the recently enacted FITARA, and OMB’s guidance on CIO responsibilities, and may hinder the ability of CIOs to carry out their responsibilities for IT management of the projects undertaken by the digital service teams. By hiring technology and software development experts and using leading software development practices, both 18F and USDS have provided a variety of useful services to federal agencies. Most surveyed agency project managers that partnered with 18F and USDS were satisfied with the services provided. It is important for USDS and 18F to establish outcome-oriented goals, measure performance, and prioritize projects, particularly since these are valuable management tools that could aid in the transfer of knowledge when critical temporary staff leave these organizations and are replaced. To their credit, both 18F and USDS have developed several outcome-oriented goals and procedures for prioritizing projects. However, the goals and associated performance measures and targets were not always outcome-oriented. Additionally, they have not fully measured program performance. As a result, it will be difficult to hold the programs accountable for results. Moreover, without documented measures and results for USDS, it is unclear whether investments in agency digital service teams are justified. Further, by delaying the date for when it projects to fully recover its costs and not having associated performance measures, 18F is at risk of not having the information necessary for GSA leadership to determine whether to continue using the Acquisition Services Fund for 18F operations. Although OMB has called for agencies to establish digital service teams, USDS policy does not require agencies to define the expected responsibilities and authorities governing the relationships between CIOs and digital service teams. 
To fulfill their statutory responsibilities, including as most recently enacted in FITARA and reinforced in OMB guidance, and ensure that CIOs have a significant role in the decision-making process for projects undertaken by the digital service teams, such defined relationships are essential. To effectively measure 18F’s performance, we recommend that the Administrator of GSA direct the Commissioner for the Technology Transformation Service to take the following two actions: (1) ensure that goals and associated performance measures are outcome-oriented and that performance measures have targets, including performance measures and targets tied to fully recovering program costs, as well as goals, performance measures, and targets for how the program will achieve its mission after September 2016; and (2) assess actual results for each performance measure. To effectively measure performance, prioritize USDS’s resources, and ensure that CIOs play an integral role in agency digital service teams, we recommend that the Director of the Office of Management and Budget direct the Federal Chief Information Officer to take the following three actions: (1) ensure that all goals and associated performance measures are outcome-oriented and that performance measures have targets; (2) assess actual results for each performance measure; and (3) update USDS policy to clearly define the responsibilities and authorities governing the relationships between CIOs and the digital service teams and require existing agency digital service teams to address this policy. In doing so, the Federal Chief Information Officer should ensure that this policy is aligned with relevant federal law and OMB guidance on CIO responsibilities and authorities. We provided a copy of a draft of this report to GSA, OMB, and 27 agencies to which we did not make recommendations. 
We received comments from GSA and OMB, stating that they agreed with our recommendations, and from 3 agencies—the Department of Housing and Urban Development, National Science Foundation, and National Archives and Records Administration—describing their plans to establish digital service teams. The remaining 24 agencies stated that they had no comments. The following is a discussion of each agency’s comments. In its written comments, GSA concurred with the two recommendations and described planned actions to address them. The agency also provided technical comments, which we have incorporated in the report as appropriate. GSA’s comments are printed in appendix III. In its written comments, OMB generally concurred with the three recommendations and described planned actions to address them. In a draft of this report, we had included a recommendation to OMB that it establish a time frame for developing the report identifying the highest priority projects, develop the report within that established time frame and on a quarterly basis thereafter, and consider the highest priority IT projects as part of the established process for prioritizing projects. Subsequently, in June 2016 OMB provided a second report identifying the highest priority projects and stated that the next report would be issued by September 2016. Given these actions, we have removed this recommendation from our report. The agency also provided technical comments, which we have incorporated in the report as appropriate. OMB’s comments are reprinted in appendix IV. In written comments, the Department of Housing and Urban Development described activities underway for establishing a digital service team. The department’s comments are reprinted in appendix V. In written comments, the National Archives and Records Administration stated that it plans to establish a digital service team and is currently working with USDS to develop a charter. The agency’s comments are reprinted in appendix VI. 
In comments provided via e-mail on June 29, 2016, a senior advisor from the National Science Foundation stated that the agency plans to fund a digital service team from its fiscal year 2016 appropriation to focus on transforming its digital services with the greatest impact to citizens and businesses so they are easier to use and more cost-effective to build and maintain. Multiple agencies also provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Director of the Office of Management and Budget, the Administrator of GSA, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Our objectives were to (1) describe 18F and U.S. Digital Service (USDS) efforts to identify and address problems with information technology (IT) projects and agencies’ views of services provided, (2) assess these programs’ efforts against practices for performance measurement and project prioritization, and (3) assess agency plans to establish their own digital service teams. In addressing our first objective, we reviewed 32 projects across 18 agencies for which 18F provided services to agencies, and 13 projects at 11 agencies for which USDS provided services. 
To identify these projects, we obtained the list of 52 completed and ongoing projects for 18F, as of August 2015; and the 17 completed or ongoing projects for USDS, as of August 2015. For the 18F program, we added a project identified by the Nuclear Regulatory Commission that it initiated with 18F in July 2015 but that was not included in General Services Administration’s (GSA) list of 18F projects. We removed 18 projects that did not have agency customers. In addition, we removed 1 project that was terminated without substantial work performed by 18F and 2 projects that, as of March 2016, had not yet been initiated. Regarding USDS, we removed 2 projects that did not use USDS staff (e.g., projects that used staff from 18F or an agency digital service team), and 1 project that did not have an agency customer. We also consolidated 2 projects into 1 project because the customer agency considered them to be a single project. The final 32 18F projects and associated 18 agencies, as well as the final 13 USDS projects and associated 11 agencies are identified in appendix II. We administered a data collection instrument to each of the selected projects about the services they received from 18F and USDS, and the extent to which the projects were associated with major IT investments. We then analyzed information obtained from the completed data collection instruments describing the services they received from 18F and USDS. We also reviewed information obtained from 18F and USDS regarding key projects that did not have agency customers. Additionally, we conducted a web-based survey of the agency managers of selected 18F and USDS projects. We designed a draft questionnaire in close collaboration with our survey specialist. We also conducted pretests with officials at the Environmental Protection Agency, the Office of Management and Budget (OMB), and GSA. 
From these pretests, we made revisions as necessary to reduce the likelihood of overall and item non-response as well as reporting errors on our questions. We sent the survey via e-mail to the managers of the selected 32 18F and 13 USDS projects from January 12, 2016, through February 18, 2016. Log-in information was e-mailed to all contacts. We contacted project managers by telephone and e-mailed those who had not completed the questionnaire at multiple points during the data collection period. We closed the survey on March 31, 2016. We received a completed questionnaire from the managers of 35 of the 43 selected projects (81 percent)—27 of the 32 selected 18F projects (84 percent) and 10 of the 13 selected USDS projects (77 percent). Because we surveyed all of the project managers and therefore did not conduct any sampling for our survey, our data are not subject to sampling errors. However, the practical difficulties of conducting any survey may introduce non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond to a question can introduce errors into the survey results. We included steps in both the data collection and data analysis stages to minimize such non-sampling errors. Our analysts resolved difficulties that respondents had in completing our survey. Although the survey responses cannot be used to generalize the opinions and satisfaction of all customers that receive services from 18F and USDS programs, the responses provide data for our defined population. In our questionnaire we asked the managers of all projects to identify the extent to which they are satisfied or dissatisfied with the services provided by 18F and USDS programs. 
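The response rates reported above follow directly from the counts of completed and distributed questionnaires. As a quick check of the arithmetic (the helper function is illustrative, not part of the survey methodology):

```python
def response_rate(completed, sent):
    """Completed questionnaires as a percentage of those sent,
    rounded to the nearest whole percent."""
    return round(100 * completed / sent)

print(response_rate(27, 32))  # 18F projects → 84
print(response_rate(10, 13))  # USDS projects → 77
print(response_rate(35, 43))  # overall → 81
```

These match the 84 percent, 77 percent, and 81 percent figures cited above.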
To determine the extent to which both programs are providing satisfactory services to their customers, we described the results on a 5-point satisfaction scale, where 5 is “very satisfied” and 1 is “very dissatisfied.” To obtain additional narrative and supporting context, survey respondents were given multiple opportunities to provide open-ended comments throughout our survey. Using these open-ended responses, we conducted a content analysis to identify common factors. To address the second objective, we reviewed federal laws and guidance on performance measurement, and GAO’s guidance on investment management. We then identified the following practices relevant to entities that provide IT services:

Define outcome-oriented goals and measure performance. According to federal law and our previous work, outcome-oriented goals and performance measures are vital to assess the actual results, effects, or impact of a program or activity compared to its mission.

Establish and implement procedures for prioritizing IT projects. According to GAO’s guidance on investment management, establishing procedures, including criteria, for prioritizing projects can help organizations consistently select projects based on their contributions to the strategic goals of the organization.

We analyzed 18F and USDS policies, procedures, plans, and practices and compared them to the identified areas. To address our third objective, we administered a data collection instrument on plans to establish digital service teams to the 25 agencies with funding proposed in the President’s Budget for fiscal year 2016. Additionally, we reviewed USDS’s plans—including interviews with USDS officials—for providing assistance to agencies that planned to establish a digital service team in fiscal year 2016.
In addition, we selected four agencies as case studies to determine the extent to which agencies had documented the relationships between digital service teams and agency Chief Information Officers (CIO). To choose these agencies, we identified the three agencies that had established a charter with USDS as of January 2016—the Departments of Defense, Homeland Security, and State. We also selected the Department of Veterans Affairs because, as of January 2016, it had the most staff of any agency digital service team. For these agencies, we evaluated the extent to which agency policies and procedures, including digital service team charters, clearly defined responsibilities and authorities governing the relationships between the CIO and other agency organizations that use IT (in the case of this report, the other agency organizations that use IT were the agency digital service teams). Further, we conducted interviews with the CIOs of the Departments of Defense, Homeland Security, and State, as well as the Veterans Affairs Principal Deputy Assistant Secretary for the Office of Information and Technology. We also shared our analysis with OMB officials for review and comment, and to obtain additional information. We conducted this performance audit from July 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Between March 2014 and August 2015, the General Services Administration’s (GSA) 18F staff helped 18 agencies with 32 projects, and generally provided services relating to its five business units: Custom Partner Solutions, Products and Platforms, Transformation Services, Acquisition Services, and Learn.
In addition, 18F provided Agency Digital Service Team Candidate Qualification Reviews. Table 8 describes each project, including the associated agency, project name, project description, and service provided. Between August 2014 and August 2015, USDS provided assistance on 13 projects across 11 agencies. USDS generally provided seven types of consulting services: quality assurance, research, website development, system stabilization, information security assessment, software engineering, and data management. Table 9 describes each project, including the associated agency, project name, project description, and service provided. In addition to the contact named above, individuals making contributions to this report included Nick Marinos (Assistant Director), Kavita Daitnarayan, Rebecca Eyler, Kaelin Kuhn, Jamelyn Payan, and Tina Torabi.
In an effort to improve IT across the federal government, in March 2014 GSA established 18F, which provides IT services (e.g., developing websites) to agencies. In addition, in August 2014 the Administration established USDS, which aims to improve public-facing federal IT services. The President's Budget for fiscal year 2016 also proposed funding for agencies to establish their own digital service teams. GAO was asked to review 18F and USDS. GAO's objectives were to (1) describe 18F and USDS efforts to address problems with IT projects and agencies' views of services provided, (2) assess these programs' efforts against practices for performance measurement and project prioritization, and (3) assess agency plans to establish their own digital service teams. To do so, GAO reviewed 32 18F projects and 13 USDS projects that were underway or completed as of August 2015 and surveyed agencies about these projects; evaluated 18F and USDS against key performance measurement and project prioritization practices; reviewed 25 agencies' efforts to establish digital service teams; and reviewed documentation from four agencies, which were chosen based on their progress in establishing digital service teams. The General Services Administration's (GSA) 18F and Office of Management and Budget's (OMB) U.S. Digital Service (USDS) have provided a variety of services to agencies supporting their information technology (IT) efforts. Specifically, 18F staff helped 18 agencies with 32 projects and generally provided development and consulting services, including software development solutions and acquisition consulting. In addition, USDS provided assistance on 13 projects across 11 agencies and generally provided consulting services, including quality assurance, problem identification and recommendations, and software engineering. Further, according to GAO's survey, managers were generally satisfied with the services they received from 18F and USDS on these projects (see table).
Source: GAO survey of agency project managers that engaged with 18F and U.S. Digital Service. | GAO-16-602 Both 18F and USDS have partially implemented practices to identify and help agencies address problems with IT projects. Specifically, 18F has developed several outcome-oriented goals and related performance measures, as well as procedures for prioritizing projects; however, not all of its goals are outcome-oriented and it has not yet fully measured program performance. Similarly, USDS has developed goals, but they are not all outcome-oriented and it has established performance measures for only one of its goals. USDS has also measured progress for just one goal. Until 18F and USDS fully implement these practices, it will be difficult to hold the programs accountable for results. Agencies are beginning to establish digital service teams. Of the 25 agencies included in the President's proposed funding for agency digital service teams, OMB has established charters with 6 agencies for their digital service teams. In addition, according to the Deputy USDS Administrator, USDS expects to establish charters with an additional 2 agencies by the end of the fiscal year—the Department of Education and the Small Business Administration. For the remaining 16 agencies, as of April 2016, 8 agencies reported that they plan to establish digital service teams but have yet to establish charters with USDS. The other 9 agencies reported that they do not plan to establish digital service teams by September 2016 and most noted that it was because they did not receive requested funding to do so. Further, of the 4 agencies GAO selected to review, only 1 has defined the relationship between its digital service team and the agency Chief Information Officer (CIO). This is due, in part, to the fact that USDS policy does not describe the expected relationship between CIOs and these teams. 
Until OMB updates its policy and ensures that the responsibilities between the CIOs and digital service teams are clearly defined, it is unclear whether CIOs will be able to fulfill their statutory responsibilities with respect to IT management of the projects undertaken by the digital service teams. GAO is making two recommendations to GSA and two recommendations to OMB to improve goals and performance measurement. GAO is also recommending that OMB update policy regarding CIOs and digital service teams. GSA and OMB concurred with the recommendations.
Antimicrobial drugs are a broad class of drugs that combat many pathogens, including bacteria, viruses, fungi, and parasites. Antibiotics are a subset of these drugs that work against bacteria. Antibiotics work by killing the bacteria directly or halting their growth. According to WHO, the evolution of strains of bacteria that are resistant to antibiotics is a natural phenomenon that occurs when microorganisms exchange resistant traits; however, WHO also states that the use and misuse of antimicrobial drugs, including antibiotics, accelerates the emergence of resistant strains. Antibiotic resistance began to be recognized soon after penicillin, one of the first antibiotics, came into use over 70 years ago. Antibiotic-resistant bacteria can spread from animals and cause disease in humans through a number of pathways (see fig. 1). The use of antibiotics in animals is an integral part of food animal production. To improve efficiencies, modern industrial farms raise animals in high concentrations, but this practice has the potential to spread disease because animals live in close confinement. Long-term, low-dose treatments with antibiotics may help prevent diseases, particularly where animals are housed in large groups in close confinement facilities, such as concentrated animal feeding operations. The concentrated nature of such agricultural operations means that a disease, if it occurs, can spread rapidly and quickly become devastating—increasing the need to rely on antibiotics as a preventive measure.
The purposes for which FDA approves the use of antibiotics can be divided into four categories: to treat animals that exhibit clinical signs of disease; to control a disease in a group of animals when a proportion of them exhibit clinical signs of disease; to prevent disease in a group of animals when none are exhibiting clinical signs of disease, but disease is likely to occur in the absence of an antibiotic; or to promote faster weight gain (growth promotion) or weight gain with less feed (feed efficiency). Antibiotics for food animals are administered either by mixing them into feed or water, or by injection and other routes. For example, according to representatives from the poultry industry, the majority of antibiotics used in poultry production are administered through feed and water. In lactating dairy cattle, mastitis—an inflammation of the udder—is the most common reason for antibiotic use and antibiotics are given by injection either to treat or prevent disease, according to representatives from the National Milk Producers Federation. Antibiotics for food animals may be sold or dispensed in several ways, with varying levels of restriction. Some antibiotics may be purchased over-the-counter and used by producers without veterinarian consultation or oversight. Certain antibiotics added to feed must be accompanied by a veterinary feed directive, a type of order for this use. The directive authorizes the producer to obtain and use animal feed containing a certain drug or drug combination to treat the producer’s animals in accordance with the conditions for use approved by FDA. Some antibiotics may require a prescription from a licensed veterinarian. Although veterinarians may prescribe most approved drugs “extra label” (for a species or indication other than those on the drug label), restrictions on the extra-label use of antibiotics in food animals exist. 
For example, no extra-label use of approved drugs, including antibiotics, is legally permissible in or on animal feed, according to FDA officials. Certain types of drugs, including some types of antibiotics, are prohibited from extra-label use in food animals under any circumstances because the use of these drugs may lead to antibiotic resistance in humans (e.g., fluoroquinolones—broad-spectrum antibiotics that play an important role in treatment of serious bacterial infections, such as hospital-acquired infections). Antibiotics used for food animals can be the same, or belong to the same drug classes, as those used in human medicine. FDA and WHO have sought to identify antibiotics that are used in both animals and humans and that are important to treat human infections—such antibiotics are known as medically important antibiotics. In 2003, FDA issued guidance to industry on the use of antibiotics in food animals, which included a list of antibiotics that it considers important to human medicine. In this guidance, FDA ranked each antibiotic according to its importance in human medicine, as “critically important” (the highest ranking), “highly important,” or “important” based on criteria that focused on antimicrobials, including antibiotics, used to treat foodborne illness in humans. Similarly, WHO developed criteria for ranking antimicrobials, including antibiotics, according to their importance in human medicine and first ranked them in 2005. Two federal departments are primarily responsible for ensuring the safety of the U.S. food supply, including the safe use of antibiotics in food animals—HHS and USDA. Each department contains multiple agencies that contribute to the national effort to control, monitor, and educate others on antibiotic use and resistance. For example, HHS’s CDC and FDA as well as USDA’s APHIS and FSIS have responsibilities related to the White House’s 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria.
The plan identifies several goals, including a goal to slow the development of resistant bacteria and prevent the spread of resistant infections as well as a goal to strengthen national “one-health” surveillance efforts to combat resistance, which include collecting data on antibiotic use and resistance. The “one-health” concept recognizes that the health of humans, animals, and the environment are interconnected. Table 1 provides information on selected agencies’ efforts related to antibiotic resistance. To help ensure public health and the safety of the food supply, HHS’s CDC leads investigations of multi-state foodborne illness outbreaks, including those involving antibiotic-resistant pathogens, and collaborates with USDA, FDA, and state public health partners in this effort. To identify an outbreak, CDC monitors data voluntarily reported from state health departments on cases of laboratory-confirmed illness and analyzes these data to identify elevated rates of disease that may indicate an outbreak, according to CDC officials. According to CDC’s website, determining the food source of human illness is an important part of improving food safety. In general, foods often associated with foodborne illnesses include raw foods of animal origin—meat, poultry, eggs, and shellfish, and also unpasteurized (raw) milk—that can cause infections if undercooked or through cross-contamination. Since 2011, HHS has increased veterinary oversight of antibiotics in food animals and, along with USDA, collected additional data on antibiotic use and resistance, but gaps exist in oversight and data collection, and the impact of the agencies’ efforts is unknown. For medically important antibiotics administered in animal feed and water, HHS’s FDA increased veterinary oversight and prohibited certain uses through a combination of guidance and regulation. 
In addition, agencies in HHS and USDA made several improvements in collecting and reporting data on antibiotic sales, resistance, and use. However, the agencies’ actions do not address oversight gaps such as long-term and open-ended use of medically important antibiotics for disease prevention or collection of farm-specific data, and FDA and APHIS do not have measures to assess the impact of their actions. To promote the judicious use of antibiotics in food animals, FDA increased veterinary oversight of medically important antibiotics in feed and water through voluntary guidance to industry and a revision of the veterinary feed directive regulation. As a result, as of January 2017, medically important antimicrobials, including antibiotics, in the feed and water of food animals may only be used under the supervision of licensed veterinarians, according to FDA officials (see app. II for a list of these drugs).

Voluntary Guidance to Industry. In 2012, FDA finalized guidance that lays out a strategy for phasing out the use of medically important antibiotics for growth promotion or feed efficiency, and for bringing other uses under veterinary oversight. Specifically, in Guidance for Industry #209, FDA outlined and recommended adoption of two principles for judicious use of antibiotics in food animals: (1) limit medically important antibiotics to uses that are considered necessary for assuring animal health, such as to prevent, control, and treat diseases, and (2) limit antibiotic uses to those that include veterinary oversight. In 2013, to help ensure implementation of its strategy, FDA issued Guidance for Industry #213, which asked animal drug companies to voluntarily stop labeling antibiotics for growth promotion or feed efficiency within 3 years. The guidance also recommended more veterinary oversight.
Specifically, FDA (1) asked drug companies to voluntarily revise labels of medically important antibiotics to remove the use for growth promotion and feed efficiency; (2) outlined procedures for adding, where appropriate, scientifically supported uses for disease treatment, control, or prevention; and (3) recommended that companies change the means of sale or dispensation from over-the-counter to require veterinary oversight—either through a veterinary feed directive for antimicrobials administered through feed or through a prescription for antimicrobials administered through water—by December 31, 2016. According to FDA, as of January 3, 2017, all applications for medically important antimicrobials, including antibiotics, for use in the feed or water for food animals have been aligned with the judicious use principles as recommended in Guidance for Industry #213, or their approvals have been voluntarily withdrawn. As a result of these actions, these products cannot be used for production purposes (e.g., growth promotion) and may only be used under the authorization of a licensed veterinarian, according to FDA.

Agencies Respond to Colistin Resistance

In May 2016, the U.S. Department of Defense identified the first person in the United States to be carrying E. coli bacteria with a gene that makes bacteria resistant to colistin. The U.S. Department of Agriculture (USDA) also found colistin-resistant E. coli in samples collected from the intestines of two pigs. According to the U.S. Department of Health and Human Services (HHS), these discoveries are of concern because colistin is used as a last-resort drug to treat patients with multidrug-resistant infections. Finding colistin-resistant bacteria in the United States is important because in 2015 scientists in China first reported that colistin resistance can be transferred across bacteria via a specific gene.
HHS and USDA are continuing to search for evidence of colistin-resistant bacteria in the United States through the National Antimicrobial Resistance Monitoring System, according to the HHS website. According to officials from HHS’s Centers for Disease Control and Prevention, the agency is also expanding the capability of public health laboratories to conduct surveillance.

Guidance for Industry #213 further defined medically important antimicrobials, including antibiotics, as those listed in FDA’s ranking of drug classes and class-specific products based on importance to human medicine. According to FDA officials, the agency plans to update this list in the near future, and the update will address whether to add or remove drug classes and class-specific products, as well as the need to update the relative rankings of these drug classes and class-specific products. Colistin—an antibiotic used as the last line of medical treatment for certain infections—is not listed in the ranking of drugs and drug classes. However, according to FDA officials, the ranking of a closely related drug (polymyxin B) covers colistin’s relative importance to human medicine and colistin has never been marketed for use in animals in the United States.

Veterinary Feed Directive Final Rule. In light of the 2013 guidance asking animal drug companies to change the labels of medically important antibiotics to bring them under veterinary oversight (Guidance for Industry #213), in June 2015, FDA issued a final rule revising its existing veterinary feed directive regulation to define minimum requirements for a valid veterinarian-client-patient relationship, among other things. The final rule requires a licensed veterinarian to issue the directive in the context of a valid veterinarian-client-patient relationship as defined by the state where the veterinarian practices medicine or by the federal standard in the absence of an appropriate state standard that applies to veterinary feed directive drugs.
There are three key elements of the veterinarian-client-patient relationship: (1) the veterinarian engages with the client (e.g., animal producer) to assume responsibility for making clinical judgments about animal health, (2) the veterinarian has sufficient knowledge of the animal by virtue of an examination and visits to the facility (e.g., farm) where the animal is managed, and (3) the veterinarian provides for any necessary follow-up evaluation or care. The veterinarian is also responsible for ensuring the directive is complete and accurate. For example, the directive must include the approximate number of animals to be fed the medicated feed. The final rule also (1) established a 6-month expiration date for directives unless an expiration date shorter than 6 months is specified in the drug’s approved labeling; (2) limited refills to those listed on the product’s label; and (3) established a 2-year recordkeeping requirement for producers, veterinarians, and feed distributors. Since 2011, agencies within HHS and USDA have made several improvements in collecting and reporting data on antibiotic sales, resistance, and use. In 2014, FDA enhanced its annual summary report on antimicrobials sold or distributed for use in food animals. The enhanced annual report includes additional data tables on the importance of each drug class in human medicine; the approved routes of administration for antibiotics; whether antibiotics are available over-the-counter or require veterinary oversight; and whether the drug products are approved for therapeutic (disease prevention, control, or treatment) purposes, production purposes (e.g., growth promotion), or both therapeutic and production purposes. 
In 2016, FDA finalized a rule requiring drug companies to report sales and distribution of antimicrobials, including medically important antibiotics approved for use in specific food animals (cattle, swine, and poultry—chickens and turkeys) based on an estimated percentage of total annual sales. According to FDA documents, the additional data will improve FDA’s understanding of how antibiotics are sold or distributed for use in food animals and help the agency further target its efforts to ensure judicious use of medically important antibiotics. Before the rule was finalized, however, some organizations cautioned that the proposed requirement for drug companies to submit species-specific estimates of antibiotic product sales and distribution for use in food animal species would not result in useful data, in part, because sales are not a proxy for antibiotic use. FDA’s action partially addressed our 2011 recommendation to provide sales data by food animal group and indication for use. Federal agencies have made several improvements to the National Antimicrobial Resistance Monitoring System—the national public health surveillance system that tracks changes in the antibiotic susceptibility of bacteria found in ill people, retail meats, and food animals. Specifically, beginning in 2013, FSIS collected random samples from animal intestines at slaughter plants, including chickens, turkeys, swine, and cattle, in addition to non-random sampling under its regulatory program. In 2013, FDA also expanded its retail meat sampling to collect data from laboratories in three new states: Louisiana, Missouri, and Washington. This increased the number of states from 11 to 14. In addition, FDA increased retail meat samples from 6,700 in 2015 to 13,400 in 2016 by requiring the 14 participating laboratories to double the amount of food samples purchased and tested.
In 2017, FDA plans to add another five states (Iowa, Kansas, South Carolina, South Dakota, and Texas) to retail meat testing, which will raise the total retail meat samples to more than 17,000 annually, according to FDA officials. FSIS and FDA actions addressed our recommendation from 2011 to modify slaughter and retail meat sampling to make the data more representative of antibiotic resistance in bacteria in food animals and retail meat throughout the United States. Figure 2 summarizes the data collected through the National Antimicrobial Resistance Monitoring System. Since 2011, FDA in collaboration with USDA’s Agricultural Research Service has also initiated pilot projects to explore antibiotic-resistant bacteria on the farm and at slaughter for each major food animal group (swine, beef and dairy cattle, chickens, and turkeys). The purpose of the pilot projects was (1) to begin assessing similarities and differences between bacteria and antibiotic resistance on the farm and at the slaughter plant and (2) to determine the feasibility and value of surveillance on farms as a possible new element of the National Antimicrobial Resistance Monitoring System, including the collection of antibiotic use information from farms in a confidential manner. To collect data from farms, federal agencies collaborated with academia to obtain data from producers. According to FDA officials, USDA can use information from the pilot projects to determine options for examining antibiotic resistance in a group of food animals over time (e.g., longitudinal on-farm studies). In 2016, for the first time, CDC, FDA, and USDA published the National Antimicrobial Resistance Monitoring System report with data from whole genome sequencing—cutting-edge technology which characterizes an organism’s (individual bacterium) complete set of genes. 
According to FDA officials, this represents a very significant advancement in surveillance that will provide definitive information about the genes causing resistance, including resistance compounds not currently fingerprinted, along with details on other important features of a bacterium. In addition, new web-based reporting tools are being deployed to foster timely data sharing and to allow stakeholders to explore isolate-level antibiotic-resistance data in new ways. For example, in August 2015, FDA made available on its website 18 years of National Antimicrobial Resistance Monitoring System isolate-level data on bacteria. Since 2011, USDA agencies have collected additional antibiotic use data through national surveys of producers and engaged in efforts to leverage industry data. In particular, APHIS, through the National Animal Health Monitoring System, collected additional antibiotic use data through its national surveys of producers of dairy cattle (2011 and 2014), beef cattle (2011), laying hens (2013), and swine (2012). Using these surveys, APHIS generally collects information on the amount and duration of antibiotic use; reason for use; antibiotic name; and the route of administration, such as feed, water, and injection; among other things. APHIS also may collect biological samples from animals and test these samples for antibiotic resistance of foodborne pathogens; producers receive results of biological sample testing. According to APHIS officials, the agency is planning to collect data annually on antibiotic use on swine farms and beef cattle feedlots using similar surveys, with additional questions on stewardship and judicious use of antibiotics. USDA’s Economic Research Service and National Agricultural Statistics Service also conducted national surveys of producers of swine (2015) and chicken (2011) to collect data on farm finances and production practices, including antibiotic use.
The surveys were components of the annual Agricultural Resource Management Survey, which is primarily focused on farm finances, commodity costs of production, and farm production practices. The surveys captured quantitative information on the extent of antibiotic use and the types of farms that use antibiotics for growth promotion and prevention. USDA has used these data to estimate the impact of antibiotic use on production outcomes. Furthermore, APHIS provided input on a survey that the poultry industry began developing in 2015 to collect farm-specific data. Representatives from the poultry industry told us that they plan to share aggregated survey data with APHIS and FDA when the data collection and report are finalized. Despite agencies’ enhanced oversight and data collection efforts, several gaps exist in the oversight of medically important antibiotics in food animals—specifically, antibiotics with no defined duration of use on their labels and antibiotics administered by routes other than feed and water (e.g., injection). Moreover, gaps that we identified in 2011 in farm-specific data on antibiotic use and resistance in bacteria persist. FDA’s guidance to industry has improved oversight of some antibiotics, but it does not address long-term and open-ended use of medically important antibiotics for disease prevention because some antibiotics do not have defined durations of use on their labels. For example, some currently approved labels do not have a defined duration of use such as “feed continuously for 5 days”; instead, labels may read “feed continuously,” according to FDA officials. In September 2016, FDA issued a notice in the Federal Register seeking public comment on how to establish appropriately targeted durations of use for medically important antimicrobial drugs, including the approximately 32 percent of therapeutic antibiotic products affected by Guidance for Industry #213 with no defined duration of use.
FDA officials told us the agency will consider public comments as it develops a process for animal drug companies to establish appropriate durations of use for labels already in use. However, FDA has yet to develop this process, including time frames for implementation. In an October 2016 report, one stakeholder organization recommended that FDA announce a plan and timeline for making all label revision changes regarding duration limits and other aspects of appropriate use as quickly as possible to ensure labels follow the judicious use of antibiotics in food animals. Under federal standards for internal control, management should define objectives clearly to enable the identification of risk and define risk tolerances; for example, in defining objectives, management may clearly define what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. Without developing a process, which may include time frames, to establish appropriate durations of use on labels of all medically important antibiotics, FDA will not know whether it is achieving its objective of ensuring judicious use of medically important antibiotics in food animals. FDA’s Guidance for Industry #213 also does not recommend veterinary oversight of over-the-counter medically important antibiotics administered in injections or through other routes besides feed and water (e.g., tablets). According to FDA officials, the agency focused first on antibiotics administered in feed and water because officials believed these antibiotics represent the majority of antibiotics sold and distributed and therefore they posed a higher risk to human health. According to FDA’s 2014 sales data report on antimicrobials, approximately 5 percent of medically important antibiotics are sold for use in other routes. 
Representatives of two veterinary organizations we interviewed support veterinary oversight of medically important antibiotics administered by other routes such as injections. In October 2016, FDA officials told us the agency is developing a plan that outlines its key activities over the next 5 years to further support antimicrobial stewardship in veterinary settings, including addressing veterinary oversight of other dosage forms of medically important antibiotics. According to FDA officials, the agency intended to publish the plan by the end of 2016 and to initiate steps by the end of fiscal year 2019. However, FDA was unable to provide us with this plan or specifics about the steps outlined in the plan because it was still under development. In the interim, on January 3, 2017, FDA broadly outlined on its website its key initiatives to support antimicrobial stewardship in veterinary settings, but the outline does not provide enough detail to know whether steps will be established to increase veterinary oversight of medically important antibiotics administered in routes other than feed and water. As previously discussed, under federal standards for internal control, management should define objectives clearly to enable the identification of risk and define risk tolerances; for example, in defining objectives, management may clearly define what is to be achieved and the time frames for achievement, among other things. Without a published plan documenting the steps to increase veterinary oversight of medically important antibiotics administered through routes other than feed and water, such as injections and tablets, FDA will not know whether it is making progress in achieving its objective of ensuring judicious use of medically important antibiotics in food animals.
Stakeholders we spoke with also identified and reported other potential gaps in FDA’s actions to increase veterinary oversight, such as (1) gaps in oversight of antibiotics used for disease prevention and (2) gaps in some producers’ knowledge of FDA’s actions and in their access to veterinarians. Representatives of consumer advocacy organizations told us the use of antibiotics for disease prevention in food animals is a problem because it promotes the routine use of antibiotics in healthy food animals. According to FDA documents, the agency believes that the use of antibiotics for disease prevention is necessary to assure the health of food animals and that such use should be appropriately targeted to animals at risk for a specific disease. Some producers and companies have already taken steps to eliminate the use of medically important antibiotics in food animals, including uses for disease prevention. For example, we interviewed representatives from companies (restaurant and producers) that sell meat and poultry products with “no antibiotic use” label claims, denoting products from animals raised without the use of any antibiotics or medically important antibiotics, even for disease prevention (see app. III for more information on companies’ efforts). In 2016, the Farm Foundation summarized findings from 12 workshops on FDA’s actions and one of the findings was that small- and medium-sized producers did not have sufficient knowledge about FDA’s actions to increase veterinary oversight of medically important antibiotics. In addition, some producers may lack access to veterinarians. In 2015, FDA announced the availability of a guidance document in the form of answers to questions about veterinary feed directive final rule implementation to help small businesses, including producers, comply with the revised regulation. 
According to FDA officials, the agency continues to respond to questions from stakeholders regarding the use of medically important antimicrobials, including antibiotics, in food animals and has planned numerous outreach activities in 2017. Gaps in farm-specific data on antibiotic use and resistance in food animals have persisted since we last reported on this issue in 2011. Agencies are making efforts to address these gaps, but they are doing so without the joint plan that we previously recommended. A joint plan is necessary to further assess the relationship between antibiotic use and resistance in bacteria, and it could help ensure efficient use of resources in a time of budget constraints. In 2004 and 2011, we found numerous gaps in farm-specific data stemming from limitations in the data collected by the agencies. In this review, we found that the limitations we identified in 2011 remain, and that data gaps have not been fully addressed. For example, according to CDC officials, there are still critical gaps in antibiotic use data, including the amount and specific types of antibiotics used across the various food animals and the indications for their use; these data are needed to further assess the relationship between antibiotic use and resistance in bacteria. Moreover, these data are important for assessing the impact of actions being implemented by FDA to foster the judicious use of medically important antimicrobial drugs, including the use of antibiotics in food animals, according to FDA officials. Table 2 shows limitations in federal efforts to collect farm-specific data on antibiotic use and resistance in bacteria in food animals. HHS and USDA are making individual efforts to gather additional data on antibiotic use and resistance at the farm level, but officials stated that they face funding constraints.
For example, in 2014, APHIS proposed initiatives as part of USDA’s plan to improve collection of antibiotic use and resistance data on farms, including enhancements to two on-farm surveys and the initiation of longitudinal on-farm studies to collect data across time on antibiotic use, antibiotic resistance in bacteria, and management practices. According to USDA’s fiscal year 2016 budget summary and annual performance plan, the President’s budget included a $10 million increase for APHIS’ contribution to the government-wide initiative to address antimicrobial resistance. APHIS would have used the increased funding to implement the farm-specific data collection initiatives, according to APHIS officials. However, according to USDA’s Office of Inspector General, the funding was not approved. As noted above, in 2016 APHIS developed study designs for the two proposed on-farm surveys for antibiotic use on cattle feedlots and at swine operations, but the agency has not collected data because, according to USDA, additional funding has not been secured. In March 2016, USDA’s Office of Inspector General found inadequate collaboration in USDA’s budget process to request funds for antibiotic resistance efforts and recommended that the Agricultural Research Service, FSIS, and APHIS work together to establish antibiotic resistance priorities related to budget requirements that also communicate agency interdependency. Subsequently, APHIS collaborated with FSIS and the Agricultural Research Service in developing its fiscal year 2017 budget request to increase the likelihood of receiving funding. Similarly, according to HHS’s fiscal year 2016 FDA justification of estimates for appropriations committees, the President requested a funding increase of $7.1 million for FDA to achieve its antibiotic stewardship goals, including collection of data related to the use of antibiotics in food animals.
According to the Presidential Advisory Council on Combating Antibiotic-Resistant Bacteria, however, FDA did not receive those funds. According to FDA, using existing fiscal year 2016 funds, in March 2016, the agency made some progress in data collection and issued a request for proposals to collect antibiotic use and resistance data on farms. In August 2016, FDA entered into two cooperative agreements with researchers for antibiotic use and resistance data collection; the awardees will develop a methodology to collect detailed information on antibiotic drug use in one or more of the major food animal groups (cattle, swine, chickens, and turkeys), according to FDA officials. The data collection efforts are expected to provide important information on data collection methodologies to help optimize long-term strategies for collecting and reporting such data, according to FDA officials. Moreover, FDA, CDC, and USDA formed a working group and proposed an analytic framework to associate foodborne bacteria resistance with antibiotic use in food animals. However, the agencies are conducting these efforts without a joint data collection plan, thus risking inefficient use of their limited resources. In 2004, we recommended that HHS and USDA jointly develop and implement a plan for collecting data on antibiotic use in food animals. In addition, in 2011, we recommended that HHS and USDA identify potential approaches for collecting detailed data on antibiotic use in food animals, collaborate with industry to select the best approach, seek any resources necessary to implement the approach, and use the data to assess the effectiveness of policies to curb antibiotic resistance. HHS and USDA generally agreed with our recommendations but have still not developed a joint plan or selected the best approach for collecting these data. 
HHS and USDA officials told us they are continuing to make progress towards developing a joint data collection plan but that funding has been an impediment. In September 2015, FDA, CDC, and USDA agencies, including APHIS, held a jointly sponsored public meeting to present current data collection efforts and obtain public input on possible approaches for collecting additional farm-specific antibiotic use and resistance data. In June 2016, FDA stated that it is collaborating with USDA and CDC to develop the data collection plan and is still reviewing September 2015 public comments on data collection; however, the continued lack of funding will significantly impact the ability to move forward with a plan, according to FDA, APHIS, and CDC officials. The White House’s 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria calls for agencies to strengthen one-health surveillance through enhanced monitoring of antibiotic-resistance patterns, as well as antibiotic sales, usage, and management practices, at multiple points in the production chain for food animals and retail meat. Moreover, in the 1-year update on the National Action Plan, the President’s task force recommended that federal agencies coordinate with each other to ensure maximum synergy, avoidance of duplication, and coverage of key issues. It is unclear whether FDA, CDC, and APHIS will develop a joint plan to collect antibiotic use and resistance data at the farm level and whether agencies’ individual current data collection efforts are coordinated to ensure the best use of resources. We continue to believe that developing a joint plan for collecting data to further assess the relationship between antibiotic use and resistance in bacteria at the farm level is essential and will help maximize resources and reduce the risk of duplicating efforts at a time when resources are constrained.
FSIS has developed a performance measure to assess the impact of its actions to manage the use of antibiotics in food animals, but FDA and APHIS have not done so. The GPRA Modernization Act of 2010 requires federal agencies such as HHS and USDA to develop and report performance information—specifically, performance goals, measures, milestones, and planned actions. We have previously found that these requirements can also serve as leading practices for planning at lower levels (e.g., FDA and APHIS) within agencies; moreover, developing goals and performance measures can help an organization balance competing priorities, particularly if resources are constrained, and help an agency assess progress toward intended results. Numerical targets are a key attribute of performance measures because they allow managers to compare planned performance with actual results. In this context, FSIS’s performance measure, included in its fiscal year 2017-2021 strategic plan, relates to sampling of antibiotic-resistant bacteria. Specifically, the performance measure is the percentage of FSIS slaughter meat and poultry samples that will undergo whole genome sequencing, including antibiotic-resistance testing, to assess the impact of the agency’s surveillance of antibiotic-resistant bacteria in slaughtered food animals. FDA and APHIS officials agree that performance measures are needed to assess the impact of their actions to manage the use of antibiotics in food animals. According to the White House’s 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria, metrics should be established and implemented to foster stewardship of antibiotics in food animals within 3 years. FDA has a goal to enhance the safety and effectiveness of antibiotics and an objective to reduce risks in antibiotics by supporting efforts to foster the judicious use of medically important antibiotics in food animals. 
FDA’s actions to achieve this objective include developing voluntary guidance to industry and revising its veterinary feed directive regulation, as noted above. However, FDA does not yet have performance measures to assess the impact of these actions in achieving its goal and objective even though its revised regulation has already been implemented and actions recommended in its guidance were implemented as of January 2017. FDA officials told us the agency is taking steps to develop performance measures. In July 2016, FDA began reaching out to APHIS and producer groups to collaboratively develop metrics, according to FDA and APHIS officials. Furthermore, according to agency officials, FDA is collecting data in a pilot program for the veterinary feed directive to establish a baseline for compliance, which is needed to develop a measure. FDA officials told us developing measures is a challenge without funding to support farm-specific data to assess changes in antibiotic use practices and adherence to its guidance documents. It is unclear when FDA’s efforts to develop performance measures will be completed. Without developing performance measures and targets for its actions, FDA cannot assess the impact of its guidance to industry and its revised regulation in meeting the goal of enhancing the safety and effectiveness of antibiotics by fostering the judicious use of medically important antibiotics in food animals. Similar to FDA, APHIS does not have performance measures to assess the impact of its antibiotic use and resistance data collection efforts. In March 2016, APHIS agreed to develop goals and identify measures for its antibiotic resistance efforts by March 2017 as recommended by the USDA Office of Inspector General. However, little progress has been made. According to APHIS officials, if the agency does not receive new funding in fiscal year 2017 for antibiotic use and resistance activities, development of related goals and measures will be delayed. 
According to USDA’s 2012 report on antibiotic resistance, few useful metrics (i.e., performance measures) exist for gauging progress toward stated data collection goals. The report also stated that having defined metrics available would allow more appropriately focused efforts for monitoring antibiotic use and resistance and allow greater “buy in” among stakeholder groups for the monitoring efforts and their resulting actions. APHIS officials told us that performance measures and targets are needed, and in July 2016 the agency began discussions with FDA and others about developing metrics, as noted above. Without developing performance measures and targets for its actions, APHIS cannot assess the impact of collecting farm-specific data on antibiotic use and resistance in meeting its goal to protect agricultural resources through surveillance for antibiotic-resistant bacteria. To manage the use of antibiotics in food animals and combat the emergence and spread of antibiotic-resistant bacteria, the Netherlands, Canada, Denmark, and the EU have taken actions to strengthen the oversight of veterinarians’ and producers’ use of antibiotics and to collect farm-specific data. In addition, the Netherlands and Denmark have set targets for reducing the use of antibiotics, and the EU has called for measurable goals and indicators for antimicrobial use and resistance. To strengthen oversight and collect farm-specific data on antibiotic use in food animals, the Netherlands primarily relied on a public-private partnership, whereas Canada, Denmark, and the EU relied on government policies and regulations. After taking these actions, the use or sales (depending on how the data were reported) of antibiotics for food animals decreased in Denmark, the Netherlands, and the EU, and data collection on antibiotic use improved in all three countries and the EU.
Beginning in 2008, the Netherlands’ food animal (cattle, veal, chicken, and swine) industries, national veterinary association, and government developed a public-private partnership to strengthen oversight of veterinarians’ prescriptions and producers’ use of antibiotics. This partnership was also used to collect farm-specific data. Government officials we interviewed from the Ministries of Health and Economic Affairs told us that in the past the Netherlands was one of the highest users of antibiotics in food animals in Europe. As a result of the partnership’s actions, from 2009 through 2015, antibiotic sales fell by over 50 percent, according to government documents. As part of the partnership, industry strengthened oversight of producers’ use of antibiotics through quality assurance programs—producer education and certification programs that set standards for animal production including the use of antibiotics—and the national veterinary association established additional guidelines and policies for veterinarians. According to the Ministry of Economic Affairs, building on these actions, the government adopted new statutes and regulations that incorporated some of the oversight activities that industry and veterinary organizations had established, such as restricting the use of antibiotics that are important to human health, implementing herd health plans, and developing prudent use guidelines. Similar to the Netherlands, U.S. producers and veterinarians participate in quality assurance programs and take action to promote judicious use of antibiotics, according to documents we reviewed from U.S. industry and veterinarian organizations. For example, some producers in the United States stopped the use of antibiotics for growth promotion prior to U.S. government action. The public-private partnership in the Netherlands also established a process for the continuous collection of farm-specific antibiotic use data.
Specifically, in 2011, the different food animal industries and veterinary organizations leveraged their existing processes and infrastructure to create one centralized database for veterinarians and producers to report antibiotic prescriptions and use. In contrast, the United States relies primarily on an on-farm survey to collect antibiotic use data on a specific food animal every 5 to 7 years, as noted above. In 2010, the Netherlands’ government, food animal industries, and national veterinary association jointly financed an independent entity, the Netherlands Veterinary Medicines Authority, to analyze antibiotic use data and veterinary prescription patterns to produce annual antibiotic use reports, according to Dutch government documents. Representatives from the independent entity told us that the Netherlands’ government funds 50 percent of the cost and the food animal industries and veterinarians fund the remaining 50 percent. The Netherlands Veterinary Medicines Authority uses the data submitted by producers and veterinarians to define annual benchmarks regarding both the quantity and the types of antibiotics used within each sector. The industries use this information to monitor producers’ antibiotic use and veterinarians’ prescriptions, and they work with individuals who exceed the benchmark to reduce use. According to Dutch government documents and officials, anonymized and aggregated data—including the amounts of antibiotics given, types of antibiotics, and number of animals that each veterinarian oversees—are shared with the government for a variety of purposes, such as annual reports and other studies. Additionally, in 2016 the Netherlands Veterinary Medicines Authority published a report finding that reductions in antimicrobial usage, including antibiotics, were associated with reductions in the prevalence of antimicrobial-resistant E. coli in fecal samples from veal calves, pigs, and young chickens.
Dutch government officials told us that moving forward a variety of issues must be addressed, including overuse of antibiotics by veterinarians and producers—for example, in the veal and cattle sectors, which are challenged in decreasing antibiotics while keeping animals healthy. Similarly, a representative from a veterinary organization told us that under the new policies, veterinarians are challenged with greater administrative and record-keeping burdens. The Netherlands’ collaboration with industry is similar to some actions taken in the United States, such as the U.S. poultry industry’s effort to develop an on-farm antibiotic use survey and its plan to share aggregate survey data with APHIS and FDA, as discussed above. Additionally, FDA is actively engaging stakeholders to leverage public-private partnerships and collaboration to collect farm-specific data, according to FDA officials. However, the United States has no practice comparable to benchmarking. According to APHIS officials, benchmarking and measuring producers’ use and veterinarians’ prescriptions of antibiotics would require major infrastructure and technological investments for data capture, analysis, and reporting, and for educating producers and veterinarians regarding use of the data. According to representatives from an animal health company, it may not be feasible for the United States to adopt practices from the Netherlands because it would require similar or equal veterinary practice laws across all states. The Canadian government is working toward integrating federal and province-level policies on antibiotic use and collects farm-specific antibiotic use and resistance data on some species. The 2015 Canadian national action plan on antibiotic use and resistance calls for integration of federal-level and province-level policies and lists specific activities along with completion dates. 
Officials we interviewed from a Canadian food safety agency told us that Canada is developing a framework to align national and province-level veterinary oversight efforts and increase collaboration between these levels of government. Additionally, officials from a Canadian agency that regulates medical products told us that the federal government is working on a policy initiative to increase veterinary oversight over all medically important antimicrobials used in food animal production and that, as part of this initiative, they are working with provinces to ensure the streamlined transition of over-the-counter medically important antibiotics to prescription status. The national action plan also identifies the need for continued government support of industry-led quality assurance programs that address judicious use of antibiotics in food animals. For example, the Chicken Farmers of Canada’s On-Farm Food Safety Assurance program requires producers to keep records, called flock sheets, on each chicken flock. These sheets capture information related to animal health, including any antibiotics given to the bird during production, and must be presented prior to slaughtering. This differs from the United States where the poultry industry is vertically integrated—meaning that individual poultry companies own or contract for all phases of production and processing. Because of this integration, flock health information and production practices in the United States, including antibiotics used in feed or administered by a veterinarian, are maintained by the poultry company and not individual farmers. The national action plan also states that Canada is working toward removing growth promotion claims on antibiotics labels, similar to the U.S. approach, and that the pharmaceutical industry has voluntarily committed to comply by December 2016. 
According to one Canadian government official, data on antibiotic use in food animals have improved in recent years as a result of refinements to antibiotic sales data as well as farm-specific monitoring of antibiotic use in chickens, which has allowed officials to observe a relationship between changes in antibiotic use and resistance. For example, current data from the Canadian Integrated Program for Antimicrobial Resistance Surveillance show changes in resistant bacteria, isolated from chickens, associated with an intervention led by the poultry industry that focused on reducing the preventative use of a type of antibiotic called cephalosporin, according to Canadian government documents. According to an official from the Canadian Integrated Program for Antimicrobial Resistance Surveillance whom we interviewed, the Canadian system is similar to the National Antimicrobial Resistance Monitoring System in the United States; however, unlike the U.S. system, the Canadian system has a farm surveillance component that captures information on antibiotic use, antibiotic resistance, and farm characteristics. The 2013 annual report from the Canadian Integrated Program for Antimicrobial Resistance Surveillance states that Canada initiated this surveillance component in a sample of farms in five major pork-producing provinces and in four major poultry-producing provinces in 2006 and 2013, respectively. In 2014, a total of 95 swine farms and 143 chicken farms participated in this voluntary program, according to the most recent (2014) annual report. The Canadian government compensates veterinarians to collect samples and gather data from each participating farm, according to a Canadian government official. Representatives from a veterinary organization we interviewed told us that surveillance data are good for looking at trends but that such data are limited and not appropriate to determine whether a producer is misusing antibiotics.
One representative of the swine industry similarly told us that data collected from sample pig farms are limited and, to be more statistically representative of the industry, should be broadened to be more geographically representative and cover all types of pig production. While the Canadian farm surveillance program does not currently monitor antibiotic use and resistance in beef cattle on farms, the Canadian beef industry has funded research to develop an on-farm data collection framework and would welcome the addition of farm-specific antibiotic use and resistance surveillance to the program, according to representatives from a Canadian beef industry group we interviewed. Similar to the Canadian farm surveillance program, U.S. producers voluntarily participate in periodic surveys to provide antibiotic use data at the farm level, as part of the National Animal Health Monitoring System; however, no U.S. program conducts longitudinal studies to collect data across time on antibiotic use, as noted above. Since we reported on Denmark’s actions to regulate antibiotic use in 2011, Denmark has developed a variety of policies focused on both producers’ and veterinarians’ use of antibiotics and has continued to monitor levels of antibiotic use, according to Danish government documents we reviewed and officials we interviewed. For example, officials from the Danish Veterinary and Food Administration explained that in 2013, they implemented a tax on the sale of antimicrobials, including antibiotics, and other drugs used in veterinary medicine. They told us that the initiative aims to strengthen veterinarians’ and producers’ incentive to choose alternatives to antimicrobial, including antibiotic, treatment or to choose the most responsible antimicrobial or antibiotic treatment—using antibiotics judiciously.
One Danish industry representative told us that it is yet to be determined whether the tax will be effective in reducing use, and that a high tax may lead to the illegal import of antibiotics. Officials from the Danish Veterinary and Food Administration also explained that other actions since 2011 include the introduction of legislation in 2014 on the treatment of swine herds. They stated that when veterinarians prescribe antibiotics to be administered through feed or water for respiratory or gastrointestinal infections, veterinarians must take samples from the herd for laboratory testing to verify the clinical diagnosis. Officials from the Danish Veterinary and Food Administration also indicated that Denmark has leveraged voluntary industry initiatives to manage the use of antibiotics, such as the cattle industry’s ban on the use of an antibiotic deemed critically important to human medicine. Denmark continues to collect farm-specific antibiotic use data through veterinary prescriptions and reports results along with resistance data annually via the Danish Integrated Antimicrobial Resistance Monitoring and Research Program, according to Danish government documents and officials. The most recent report states that antibiotic consumption was 47 percent lower in 2015 than in 1994 and decreased slightly from 2014 through 2015. As we previously reported, the lower levels of antibiotic use beginning after 1994 coincide with changes to government policies on growth promotion and veterinarians’ sales profits. Representatives of U.S. industry and veterinary organizations we interviewed questioned whether the actions taken by Denmark were successful. They said that while antibiotic use decreased, Denmark experienced issues with animal welfare, such as greater levels of disease, and increased the use of antibiotics for disease treatment.
Danish officials acknowledged the concerns for animal welfare associated with reductions in antibiotic use, but documents they provided stated that they have not seen any evidence of decreased animal welfare or increases in infection prevalence. Representatives from a U.S. food industry organization and a veterinary organization told us that actions taken by Denmark are not feasible in the United States because of differences between the countries. For example, the food production industries in Denmark are different in size and production volume when compared with those in the United States, according to representatives from the U.S. poultry industry. Since 2011, when we last reported on the EU’s efforts, the EU has developed an antibiotic-resistance action plan, reported reductions in sales of antibiotics, and made associations between antibiotic use and resistance in a new report. The EU action plan calls for various actions to strengthen judicious use, oversight, and surveillance of antibiotics. According to EU documents, steps taken to implement the plan include publishing guidelines for prudent use of antibiotics in veterinary medicine in 2015, enacting an animal health law in March 2016 that emphasizes prevention of disease rather than cure, and revising legislation for veterinary medicinal products and for medicated feed. In 2011, we reported on EU efforts to collect sales data; at that time only nine European countries had submitted data. For the 2016 report on EU sales, 29 European countries had submitted data, and the data show that from 2011 to 2014 sales of antibiotics for use in animals fell by approximately 2 percent in 25 European countries. One difference between the United States and the EU is the classification of certain antimicrobials, including antibiotics, in sales reports; for example, in the EU, a group of medications called ionophores is not included in antimicrobial sales reports, but in the United States ionophores are included.
According to EU documents we reviewed, other actions since 2011 include activities to promote the collection of on-farm data, mainly through developing guidance and a pilot project. For example, a report from the European Medicines Agency, an agency within the EU, describes a trial conducted in 2014 to test a protocol and template for data collection on antimicrobial use in pigs. The report states that based on results from the trial the agency is preparing guidance, including a protocol and template, for member states on antibiotic use data collection. Additionally, the EU agency began a pilot study to collect antibiotic use data from 20 pig farms per country, but there was insufficient support among member states to continue the study, according to EU documents. Officials from the European Medicines Agency told us that the pilot project underscored the challenges in collecting farm-specific data, which include producer confidentiality and resource constraints. However, these officials also told us that they have limited access to farm-specific data from certain countries, including Denmark, the Netherlands, and Norway. The EU also took steps to compare surveillance data on antibiotic use and resistance in pathogens in humans, food, animals, and the environment. Specifically, in 2015 three EU agencies published the first integrated analysis report, which found a positive association between the use of certain antibiotics in food animals and resistance in humans. For example, the report cited a positive association observed between fluoroquinolone resistance in E. coli from humans and the total consumption in animals. The report also explains that the agencies analyzed existing data from five separate monitoring systems, including sales data, to create the integrated report. In the United States, no such comparisons in surveillance reports have been made, in part because antibiotic use data are limited, as previously discussed. 
The Netherlands and Denmark set antibiotic use reduction targets to help manage the use of antibiotics in food animals. According to government officials in both countries, the targets were a critical component of their strategies to reduce antibiotic use. The Netherlands and Denmark used reduction targets to measure the progress and impact of actions taken, and as existing targets are reached these countries continue to set new targets. Similarly, the EU outlined its next steps for combating antibiotic resistance in a June 2016 document that calls for measurable goals that lead to reductions in infections in humans and animals and reductions in antibiotic use and resistance, among other things. U.S. federal officials and representatives of industry and veterinary organizations whom we interviewed questioned the usefulness of setting antibiotic use reduction targets in the United States, in part because targets may reduce animal welfare. The Netherlands’ policy on reducing antibiotic use, implemented through the public-private partnership discussed above, set the following reduction targets on antibiotics used in food animals: a 20-percent reduction in the sales of all antibiotics used in food animal production by 2011, 50 percent by 2013, and 70 percent by 2015. According to Dutch government officials, the first two targets were met and exceeded, but the 70-percent reduction by 2015 was not met; a 58-percent reduction was achieved from 2009 through 2015, according to government documents. Indicators used to measure the policy’s impact included antibiotic use and resistance levels in swine, mortality of swine, and veterinary cost per swine. According to a Dutch industry representative, to reduce the use of antibiotics, food animal industries optimized feed, housing, vaccines, and hygiene (see fig. 3). 
In a June 2015 letter to parliament, government officials proposed the Netherlands’ approach to antibiotic resistance for 2015 through 2019, which includes taking additional action to achieve the 70-percent reduction goal and developing species-specific measures and reduction targets. Representatives from veterinary and industry organizations in the Netherlands told us that setting targets has proven to be effective but that there is concern that further reductions may pose some risk to animal health and welfare. For example, piglets may be at risk of premature death if certain antibiotics are prohibited or fewer antibiotics are used, according to Dutch veterinary and industry representatives. Representatives of veterinary and producer organizations we spoke with in the United States expressed similar concerns that reductions in antibiotic use may compromise animal health and welfare. In 2011, we reported on Denmark’s Yellow Card initiative, which set regulatory limits on antibiotic use and subjected pig producers exceeding limits to increased monitoring by government officials. The goal of the Yellow Card initiative was to achieve a 10-percent reduction in antibiotic use by 2013 from 2009 levels. According to government officials, the goal was met and exceeded. In 2016, Denmark expanded the Yellow Card initiative in pigs to focus more on antibiotics that are important for human health. It also developed an action plan to address methicillin-resistant Staphylococcus aureus (MRSA). Included in this plan is a new target of a 15-percent reduction in antibiotic use in swine by 2018. According to a representative from a Danish industry organization that represents producers across many food animal production sectors, producers who used antibiotics below the permitted levels began increasing their use to the maximum amounts allowed, and the new reduction target is a response to these increases. 
The representative also told us that reduction targets are critical because they place the responsibility for reduction on the producer or farmer—the person who determines what farm practices are implemented—and that reducing antibiotic use and setting reduction targets must be done with involvement of producers and veterinarians because the need for antibiotics varies across animals. For example, dairy cattle in different age groups use varying amounts of antibiotics, and setting one target may put the more susceptible age group at greater risk of infection or death, according to industry officials. In addition to the government targets, industry set its own targets to reduce the use of antibiotics. For example, the dairy and beef cattle industries set a target in 2014 to reduce use by 20 percent by 2018. Some U.S. officials and stakeholders question the benefits of antibiotic use targets and reductions in Denmark because while antibiotic use was reduced, changes in resistance are less clear. Representatives from the U.S. swine industry told us that targets based on volume of antibiotics used do not take into account the potency of the antibiotics and that a mandatory reduction target could take antibiotic use in an unfavorable direction, such as a shift from veterinarians and producers using older drugs that are less potent to using drugs that are more potent, newer, or important to human health. In 2016, the EU Council published a statement of its conclusions on the next steps for its member states to combat antimicrobial resistance, including setting goals and targets. The statement calls for EU member states to have a one-health action plan by 2017 with measurable goals, qualitative or quantitative, that lead to reductions in infections in humans and animals, reductions in antimicrobial use and resistance, and prudent antimicrobial use. 
The statement also calls for EU officials and member states to jointly develop a new EU action plan on antimicrobial resistance, indicators to assess the progress made on addressing antibiotic resistance, and indicators to assess progress in implementing the new action plan. EU officials told us that the EU is seeking to develop indicators that are easy to measure, are not too costly, and can be applied across its member states. Representatives of U.S. industry and veterinary organizations we interviewed stated that they would support measures and targets that focus on compliance with judicious use policies, but not on reductions. CDC, APHIS, and FSIS officials told us they have not conducted on-farm investigations during outbreaks of foodborne illness, including those from antibiotic-resistant pathogens in animal products. Moreover, there is no consensus about when an on-farm investigation is needed. In 2014, recognizing the importance of the one-health concept (that the health of humans, animals, and the environment is interconnected), FSIS and APHIS created a memorandum of understanding and standard operating procedures for APHIS to investigate the root cause of foodborne illness outbreaks, given APHIS’s regular interactions with producers on farms and expertise in veterinary epidemiology. Under the memorandum of understanding, APHIS will conduct epidemiological investigations—which include examining the spread of disease by time, place, and animal as well as the mode of transmission and source of entry of disease—to determine the root cause of foodborne illness, which may be related to factors at the farm level, according to FSIS officials. Such investigations can be used to identify on-farm risk factors for disease occurrence or spread that might be controlled or mitigated by some intervention in current or future situations. 
For multistate foodborne illness outbreaks, CDC is to identify the outbreak and lead the investigation by determining the DNA fingerprint of the bacteria that caused the outbreak as well as whether the bacteria are resistant to any antibiotics. According to CDC officials, with increasing use of whole genome sequencing—an advanced technique to fingerprint bacteria—federal agencies may prioritize foodborne outbreak investigations from antibiotic-resistant bacteria because they can identify these outbreaks sooner. CDC is to coordinate with state health departments and FSIS if a meat or poultry product is implicated (see fig. 4 for more information on the investigation process for multistate foodborne illness outbreaks). However, APHIS and FSIS did not conduct on-farm investigations in response to a multistate foodborne illness outbreak in 2015 involving an antibiotic-resistant strain of Salmonella in roaster pigs, the first attempt to use the 2014 memorandum of understanding. We determined that this was because stakeholders—industry, state agencies, and federal agencies—did not agree on whether on-farm investigations were needed as part of the 2015 outbreak investigation. Specifically, FSIS, the pork industry, and a state agriculture agency agreed that the slaughter plant was the source of the outbreak, negating the need for an on-farm investigation in their view, while state public health agencies wanted on-farm investigations to determine whether the pigs from the five farms supplying the slaughter plant were carriers of the outbreak strain and to identify the slaughter plants that received the pigs. CDC and APHIS deferred to FSIS on whether an on-farm investigation was needed. According to FSIS officials, the outbreak was attributed to conditions and practices at the slaughter plant, and the company implemented extensive corrective actions at the plant in response to the 2015 outbreak. 
However, in July 2016, FSIS issued a public health alert because of concerns about illnesses from another outbreak linked to the Salmonella strain from the 2015 outbreak involving whole roaster pigs; the same slaughter plant was implicated in the 2016 outbreak. CDC officials told us that resistance for this specific strain of Salmonella has increased for a variety of drugs and that an on-farm investigation would have been useful in the original outbreak to explore whether the outbreak strain was present in pigs while they were still on the farm. FSIS and the Washington State Department of Health investigated the 2016 outbreak, but no on-farm investigations were conducted. The implicated slaughter plant recalled products and the outbreak ended, according to Washington state officials. As of October 2016, FSIS and APHIS were continuing discussions and making plans on how best to address the need to enhance understanding of this Salmonella strain in live pigs, especially how to identify on-farm interventions that may prevent future illness, according to FSIS officials. APHIS and FSIS officials told us that deciding when to conduct investigations on the farm is complex. First, the memorandum of understanding requires the producer’s consent to conduct an on-farm investigation. The memorandum of understanding outlines the need for the producer’s consent, in part, because neither APHIS nor FSIS has authority to access farms during foodborne illness outbreaks without the cooperation of the producer. APHIS will contact the producer or company involved to discuss the specifics of an investigation and to gain voluntary participation in any investigation. CDC has authority to take actions to prevent the interstate spread of communicable diseases, which, according to CDC legal officials, would include diseases originating on farms that may relate to foodborne illness from antibiotic-resistant pathogens. 
Specifically, CDC has authority to take measures in the event of inadequate state or local control to prevent interstate communicable disease spread. To the extent that CDC would use this authority, CDC would generally work with APHIS and FSIS on issues relevant to their expertise, according to CDC officials. Second, deciding whether an outbreak is likely due to on-farm risk factors versus ones that are largely the result of in-plant problems is difficult because every outbreak is unique, according to FSIS officials. FSIS is less likely to request APHIS assistance if there is evidence of insanitary conditions—a condition in which edible meat and poultry products may become contaminated or unsafe—at the slaughter plant. However, the APHIS and FSIS memorandum of understanding does not include a decision-making framework to determine the need for an on-farm investigation; instead, it focuses on the procedures for and division of responsibilities in assessing the root cause of an outbreak. In contrast, APHIS uses a decision matrix when determining whether it will pursue epidemiological assessments on the farm during other types of investigations, such as investigations of animal disease outbreaks. According to FSIS Directive 8080.3, the objectives of foodborne illness investigations include identifying factors contributing to foodborne illness, including outbreaks, and recommending actions or new policies to prevent future occurrences. The White House’s 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria includes a 3-year milestone for USDA to begin coordinated investigations of emerging antibiotic-resistant pathogens on the farm and at slaughter plants under the one-health surveillance goal. The objective for this milestone emphasizes coordination among federal agencies, producers, and other stakeholders. 
Coordination with the stakeholders who have the authority and who control access to the farm could help APHIS and FSIS fully investigate an outbreak. Specifically, CDC has authority to cooperate with and assist state and local governments with epidemiologic investigations and to take actions to prevent the spread of communicable diseases in the event of inadequate local control, including diseases originating on farms. In addition, involving stakeholders from industry and state departments of agriculture could increase the likelihood of obtaining producers’ consent to on-farm investigations. Developing a framework for deciding when on-farm investigations are warranted during outbreaks, in coordination with CDC and other stakeholders, would help APHIS and FSIS identify factors that contribute to or cause foodborne illness outbreaks, including those from antibiotic-resistant pathogens in animal products. Ensuring the continued effectiveness of antibiotics, particularly those used in human medicine, is critical because the rise of antibiotic-resistant bacteria poses a global threat to public health. Since 2011, HHS and USDA agencies have taken actions to increase veterinary oversight of medically important antibiotics used in the feed and water of food animals and to collect more detailed antibiotic sales, use, and resistance data. However, these actions do not address long-term and open-ended use of medically important antibiotics because some antibiotics do not have defined durations of use on their labels. Without developing a process to establish appropriate durations of use on labels of all medically important antibiotics, FDA will not know whether it is ensuring judicious use of medically important antibiotics in food animals. 
In addition, FDA officials told us the agency is developing a plan that outlines its key activities over the next 5 years to further support antimicrobial stewardship in veterinary settings, including steps to bring the use of medically important antibiotics administered in other dosage forms (not feed or water) under veterinary oversight. However, FDA was unable to provide us with this plan or provide specifics about the steps outlined in the plan because it was still under development. A published plan with steps is critical to guide FDA’s efforts in ensuring the judicious use of medically important antibiotics in food animals. HHS and USDA agencies continue to move forward with data collection activities including new initiatives, but data gaps remain. For more than a decade, we have reported on the need for HHS and USDA to work together to obtain more detailed farm-specific data on antibiotic use and resistance to address the risk of antibiotic resistance. In 2004, we recommended that HHS and USDA jointly develop and implement a plan for collecting data on antibiotic use in food animals that would support understanding the relationship between use and resistance, among other things. In 2011, we again recommended that HHS and USDA identify approaches for collecting detailed data on antibiotic use to assess the effectiveness of policies to curb antibiotic resistance, among other things. Although HHS and USDA agreed with these recommendations, they have not developed a joint plan to collect such data. We continue to believe that developing a joint plan for collecting data to further assess the relationship between antibiotic use and resistance at the farm level is essential and will help maximize resources and reduce the risk of duplicating efforts at a time when resources are constrained. 
To assess the impact of agency actions to manage the use of antibiotics in food animals, FSIS finalized a performance measure, but FDA and APHIS have not developed any such measures or related targets, which is not consistent with leading practices for federal strategic planning and performance measurement. Without developing performance measures and targets for their actions, FDA and APHIS cannot assess the impacts of their efforts to manage the use of antibiotics in food animals. In addition, although APHIS and FSIS established a memorandum of understanding in 2014 to assess the root cause of foodborne illness outbreaks, the memorandum does not include a decision-making framework for determining when on-farm investigations are needed. In the first use of the memorandum in a 2015 outbreak, there was no consensus among stakeholders on when such investigations were needed. Developing a framework for deciding when on-farm investigations are warranted during outbreaks, in coordination with CDC and other stakeholders, would help APHIS and FSIS identify factors that contribute to or cause foodborne illness outbreaks, including those from antibiotic-resistant pathogens in animal products. The Secretary of Health and Human Services should direct the Commissioner of FDA to take the following three actions: (1) develop a process, which may include time frames, to establish appropriate durations of use on labels of all medically important antibiotics used in food animals; (2) establish steps to increase veterinary oversight of medically important antibiotics administered in routes other than feed and water, such as injections and tablets; and (3) develop performance measures and targets for actions to manage the use of antibiotics, such as revising the veterinary feed directive and developing guidance documents on judicious use. 
The Secretary of Agriculture should take the following three actions: Direct the Administrator of APHIS to develop performance measures and targets for collecting farm-specific data on antibiotic use in food animals and antibiotic-resistant bacteria in food animals. Direct the Administrator of APHIS and the Administrator of FSIS to work with the Director of CDC to develop a framework for deciding when on-farm investigations are warranted during outbreaks. We provided a draft of this report to the Secretaries of Agriculture and Health and Human Services for review and comment. USDA and HHS provided written comments, reproduced in appendixes IV and V, respectively. USDA agreed with our recommendations. The department stated that it will develop performance measures and targets for collecting farm-specific data on antibiotic use in farm animals and antibiotic-resistant bacteria. USDA also agreed that a decision matrix to support multi-agency cooperation and to determine when on-farm investigations are warranted could be a useful addition, and noted that it has similar matrices that can serve as a model for antimicrobial resistance investigations. HHS neither agreed nor disagreed with our recommendations. USDA and HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix VI. This report (1) examines actions the U.S. Department of Health and Human Services (HHS) and U.S. Department of Agriculture (USDA) have taken since 2011 to manage the use of antibiotics in food animals and to assess the impact of their actions, (2) identifies actions that selected countries and the European Union (EU) have taken to manage the use of antibiotics in food animals, and (3) examines the extent to which HHS and USDA have conducted on-farm investigations of outbreaks of foodborne illness from antibiotic-resistant pathogens in animal products. To examine actions HHS and USDA have taken since 2011 to manage the use of antibiotics in food animals and to assess the impact of their actions, we reviewed relevant statutes and regulations, agencies’ plans and guidance, and stakeholders’ reports related to managing the use of antibiotics in food animals. We also reviewed USDA’s Office of Inspector General report on USDA’s actions to manage the use of antibiotics in food animals. We reviewed federal data reports on antibiotic sales, use, and resistance and asked officials about the quality of these data. Based on these steps, we determined that the data were sufficiently reliable for our purpose of illustrating actions taken to improve data collection. We compared information from federal agencies about actions taken to manage the use of antibiotics with federal standards for internal controls. We also reviewed public comments submitted to HHS regarding data collection on farms and changes to the Animal Drug User Fee Act. We interviewed federal officials and representatives of stakeholder organizations about federal actions taken to manage the use of antibiotics since 2011. These stakeholder organizations represented national food animal industries (National Chicken Council, National Turkey Federation, U.S. 
Poultry and Egg Association, National Pork Producers Council, National Pork Board, and National Milk Producers Federation); veterinarians (American Association of Avian Pathologists, American Association of Bovine Practitioners, American Association of Swine Veterinarians, and American Veterinary Medical Association); the pharmaceutical industry (Animal Health Institute and Zoetis); consumer advocates (Keep Antibiotics Working, Natural Resources Defense Council, and Center for Science in the Public Interest); and others (Cattle Empire, American Feed Industry Association, Farm Foundation, and Pew Charitable Trusts). In addition, we interviewed representatives of several companies (producers and restaurants) that provide food products from animals raised without antibiotics to obtain a better understanding of production practices; the types of antibiotic use data available at the farm level; and perspectives on federal efforts to educate producers about antibiotics. The views of representatives we spoke with are not generalizable to other companies. In addition, we compared federal agencies’ actions with relevant goals outlined in the 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria and interviewed representatives of stakeholder organizations to obtain views on agencies’ efforts to date. To examine agencies’ efforts to assess the impact of their actions, we reviewed HHS and USDA agencies’ strategic plans and we identified any relevant goals, measures, and targets developed by federal agencies. We compared the measures and targets with agencies’ goals, National Action Plan goals and milestones, and leading practices for improving agency performance—specifically, practices identified in the GPRA Modernization Act of 2010 and our prior work on performance management. 
To identify actions that selected countries and the EU have taken to manage the use of antibiotics in food animals since 2011, we reviewed documents, statutes, regulations, published studies, and surveillance reports regarding animal antibiotic use and resistance in Canada, Denmark, the Netherlands, and the EU. We selected these countries and this region because they have taken actions to mitigate antibiotic resistance by managing the use of antibiotics in food animals. Additionally, each country and region met at least one of the following criteria: (1) have food animal production practices similar to those of the United States (Canada); (2) have taken actions over the last 10 years to manage the use of antibiotics in food animals (the EU and Denmark); and (3) have novel practices to manage the use of antibiotics in food animals (the Netherlands). Moreover, Denmark and the Netherlands are EU members that have made changes beyond EU directives to manage the use of antibiotics in food animals. We interviewed government officials either in person or by phone from Health Canada, the Public Health Agency of Canada, Agriculture and Agri-Food Canada, the Canadian Food Inspection Agency, and the Office of the Auditor General of Canada; the Danish Veterinary and Food Administration; the Netherlands Ministry of Health, Welfare and Sport, the Netherlands Ministry of Economic Affairs, and the Netherlands Food and Consumer Product Safety Authority; and the European Union’s Directorate-General for Health and Food Safety and the European Medicines Agency. Additionally, we visited a swine facility in the Netherlands to learn about production practices. We also interviewed representatives of the Netherlands Veterinary Medicines Authority, an independent agency that monitors the use of antibiotics in food animals, defines antibiotic use benchmarks, and reports on antibiotic use trends, among other things. 
Finally, we interviewed representatives from veterinary and food animal industry organizations in the United States, Canada, Denmark, and the Netherlands; a U.S. organization that represents pharmaceutical companies that manufacture animal health products; as well as researchers in the field. We did not independently verify statements made about the EU practices or about the selected countries’ statutes and regulations. We reviewed the methodologies of the studies provided to us and found them reasonable for presenting examples of the selected countries’ and the EU’s efforts. To examine the extent to which HHS and USDA conducted on-farm investigations of outbreaks of foodborne illness from antibiotic-resistant pathogens in animal products, we reviewed HHS’s Centers for Disease Control and Prevention and USDA’s Animal and Plant Health Inspection Service (APHIS) and Food Safety and Inspection Service (FSIS) documentation, including directives, relevant to investigations of foodborne illness outbreaks, as well as the 2014 APHIS-FSIS memorandum of understanding and corresponding standard operating procedures to access farms for investigations during such outbreaks. We also reviewed documentation on a 2015 Salmonella outbreak that we identified as the only outbreak in which APHIS and FSIS used their memorandum of understanding. We interviewed federal and state officials (Washington and Montana) who investigated the 2015 outbreak. We also interviewed federal officials about the agencies’ authority to conduct on-farm investigations during foodborne illness outbreaks, including those involving antibiotic-resistant pathogens. We conducted this performance audit from August 2015 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As of January 2017, medically important antimicrobials, including antibiotics, identified by the U.S. Department of Health and Human Services’ Food and Drug Administration (FDA) may be used in the feed and water of food animals only under the supervision of licensed veterinarians, according to FDA officials. Table 3 shows the antibiotics that changed dispensing status to require veterinary oversight. Some companies that sell meat and poultry products have taken steps to eliminate or reduce the use of antibiotics in food animals and label products coming from these animals with claims related to “no antibiotic use.” We interviewed representatives of six such companies—specifically, three producers and three restaurants. Representatives of four of the six companies—three producers and one restaurant—told us that consumer demand was one of the main reasons why their companies took action to reduce or eliminate the use of antibiotics in food animals, and representatives of the two other companies—both restaurants—stated that their companies took action for reasons related to human and animal health. As part of their efforts, companies implemented various on-farm practices, such as changing animal housing and using alternatives to antibiotics. For example, according to one company representative, the company provided larger housing to reduce crowding and promoted the use of probiotics to improve animal health. Representatives told us that their companies seek to ensure animal welfare and will use antibiotics to treat sick animals; however, these animals are removed from the product line and sold as conventional products. Representatives of these companies also shared challenges they face in raising animals and selling food animal products without antibiotics. 
For example, one producer told us that there is a lack of antibiotic alternatives and that drug companies do not always produce alternatives for all species of food animals. Restaurant representatives with whom we spoke said that a challenge in providing meat and poultry products from animals raised without antibiotics is that supply is limited; for example, companies buy only certain parts of the animal, but the supplier needs to sell all parts, which may limit the availability of suppliers willing to specialize in animals raised without antibiotics. Additionally, company representatives told us that it is more difficult for pork and beef producers than poultry producers to raise animals without antibiotics because the supply chain for poultry is vertically integrated—meaning that the same company generally owns the animal from birth through processing—but the supply chains for pork and beef are not. The companies we interviewed use various terms for their label claim related to no antibiotic use, such as “no antibiotics ever,” “no human antibiotics,” “raised without antibiotics,” and “raised without antibiotics important to human health.” To include these or similar claims on their product labels, companies must submit to the U.S. Department of Agriculture’s (USDA) Food Safety and Inspection Service (FSIS) detailed records from the production process that support the accuracy of the claim. All company representatives we interviewed told us their companies collect and report data related to the production practices for their products. For example, one company requires its suppliers to report quarterly on antimicrobials used and the reason for use. Another representative told us that the company collects numerous data points throughout the year, including all medicines used on the farm and feed history, to validate antibiotic use compliance by its suppliers with company policies. 
Company representatives we spoke with agreed that there is some confusion among consumers regarding products sold and marketed as being from animals raised without antibiotics. One company representative told us that consumers are unaware that antibiotic use claims refer to animal raising practices rather than the presence of antibiotics in food products and that all meat and poultry products are tested when presented for slaughter to ensure antibiotic residues are below allowable government limits. Under its National Residue Program, FSIS monitors meat, poultry, and processed egg products for chemical residues, including antibiotics. Additionally, the Food and Drug Administration requires, as a condition of use on the product label, withdrawal periods for antibiotics—that is, periods of time prior to slaughter when antibiotics cannot be used. Another company representative told us that there is confusion about the various marketing claims used by companies, such as “no hormones” and “no antibiotics.” FSIS officials told us that the agency is aware of the concerns industry and consumers may have regarding the various claims on products currently in the marketplace. In September 2016, FSIS released labeling guidance that provides information about claims frequently used on products, what they mean, and how they are evaluated for accuracy. In regard to label claims related to antibiotic use, the guidance describes the requirements needed to make a claim, provides examples of terms that may be used, and lists the documentation needed for approval of the claim. FSIS is also considering rulemaking to define and clarify the varied language used in the “raised without antibiotics” claim, according to officials. Companies may choose to further differentiate their products in the marketplace through participating in certification, audit, or other programs, such as USDA’s National Organic Program or Process Verified Program. 
Products may carry the USDA organic seal if companies and their products are certified by a USDA certifying agent to be in accordance with USDA organic regulations, which include not treating animals with antibiotics. Similarly, a company may use the process verified seal on its products if one or more of its agricultural processes, such as raising animals without antibiotics, are verified through an audit by USDA. Unlike the National Organic Program, under the Process Verified Program companies establish their own processes and standards. As a result, processes and standards may vary across the companies. In addition, the constraints on antibiotic use do not need to meet statutory or regulatory requirements, leading to differing standards. For example, one company may have a process verified program for no antibiotics ever, and another may have a program for no antibiotics important to human health. Representatives from five of the six companies we spoke with told us that for some products they participate in USDA’s Process Verified Program to verify antibiotic use claims. In addition to the contact named above, Mary Denigan-Macauley (Assistant Director), Nkenge Gibson, Cynthia Norris, Benjamin Sclafani, and Bryant Torres made significant contributions to the report. Also contributing to the report in their areas of expertise were Kevin Bray, Gary Brown, Robert Copeland, Michele Fejfar, Benjamin Licht, Sushil Sharma, and Sara Sullivan.
According to the World Health Organization, antibiotic resistance is one of the biggest threats to global health. CDC estimates antibiotic-resistant bacteria cause at least 2 million human illnesses in the United States each year, and there is strong evidence that some resistance in bacteria is caused by antibiotic use in food animals (cattle, poultry, and swine). HHS and USDA are primarily responsible for ensuring food safety, including safe use of antibiotics in food animals. In 2011, GAO reported on antibiotic use and recommended addressing gaps in data collection. GAO was asked to update this information. This report (1) examines actions HHS and USDA have taken to manage use of antibiotics in food animals and assess the impact of their actions, (2) identifies actions selected countries and the EU have taken to manage use of antibiotics in food animals, and (3) examines the extent to which HHS and USDA conducted on-farm investigations of foodborne illness outbreaks from antibiotic-resistant bacteria in animal products. GAO reviewed documents and interviewed officials and stakeholders. GAO selected three countries and the EU for review because they have taken actions to mitigate antibiotic resistance. Since 2011, when GAO last reported on this issue, the Department of Health and Human Services (HHS) has increased veterinary oversight of antibiotics and, with the Department of Agriculture (USDA), has made several improvements in collecting data on antibiotic use in food animals and resistance in bacteria. For example, HHS's Food and Drug Administration (FDA) issued a regulation and guidance for industry recommending changes to drug labels. However, oversight gaps still exist. For example, changes to drug labels do not address long-term and open-ended use of antibiotics for disease prevention because some antibiotic labels do not define a duration of use. 
FDA officials told GAO they are seeking public comments on establishing durations of use on labels, but FDA has not clearly defined objectives for closing this gap, which is inconsistent with federal internal control standards. Without doing so, FDA will not know whether it is ensuring judicious use of antibiotics. Moreover, gaps in farm-specific data on antibiotic use and resistance that GAO found in 2011 remain. GAO continues to believe HHS and USDA need to implement a joint on-farm data collection plan as previously recommended. In addition, FDA and USDA's Animal and Plant Health Inspection Service (APHIS) do not have metrics to assess the impact of actions they have taken, which is inconsistent with leading practices for performance measurement. Without metrics, FDA and APHIS cannot assess the effects of actions taken to manage the use of antibiotics. Three selected countries and the European Union (EU), which GAO reviewed, have taken various actions to manage use of antibiotics in food animals, including strengthening oversight of veterinarians' and producers' use of antibiotics, collecting farm-specific data, and setting targets to reduce antibiotic use. The Netherlands has primarily relied on a public-private partnership, whereas Canada, Denmark, and the EU have relied on government policies and regulations to strengthen oversight and collect farm-specific data. Since taking these actions, the use or sales of antibiotics in food animals decreased and data collection improved, according to foreign officials and data reports GAO reviewed. Still, some U.S. federal officials and stakeholders believe that similar U.S. actions are not feasible because of production differences and other factors. HHS and USDA officials said they have not conducted on-farm investigations during foodborne illness outbreaks, including those from antibiotic-resistant bacteria in animal products. 
In 2014, USDA agencies established a memorandum of understanding to assess the root cause of foodborne illness outbreaks. However, in 2015, in the agencies' first use of the memorandum, there was no consensus among stakeholders on whether to conduct foodborne illness investigations on farms, and the memorandum does not include a framework for making this determination, similar to a decision matrix used in other investigations. According to a directive issued by USDA's Food Safety and Inspection Service, foodborne illness investigations shall include identifying contributing factors and recommending actions or new policies to prevent future occurrences. Developing a framework, in coordination with HHS's Centers for Disease Control and Prevention (CDC) and other stakeholders, would help USDA identify factors that contribute to or cause foodborne illness outbreaks, including those from antibiotic-resistant bacteria in animal products. GAO is making six recommendations, including that HHS address oversight gaps, HHS and USDA develop metrics for assessing progress in achieving goals, and USDA develop a framework with HHS to decide when to conduct on-farm investigations. USDA agreed and HHS neither agreed nor disagreed with GAO's recommendations.
IRIS was created in 1985 to help EPA develop consensus opinions within the agency about the health effects of chronic exposure to chemicals. Its importance has increased over time as EPA program offices and the states have increasingly relied on IRIS information in making environmental protection decisions. Currently, the IRIS database contains assessments of more than 540 chemicals. According to EPA, national and international users access the IRIS database approximately 9 million times a year. EPA’s Assistant Administrator for the Office of Research and Development has described IRIS as the premier national and international source for qualitative and quantitative chemical risk information; other federal agencies have noted that IRIS data are widely accepted by all levels of government across the country for application of public health policy, providing benefits such as uniform, standardized methods for toxicology testing and risk assessment, as well as uniform toxicity values. Similarly, a private-sector risk assessment expert has stated that the IRIS database has become the most important source of regulatory toxicity values for use across EPA’s programs and is also widely used across state programs and internationally. Historically and currently, the focus of IRIS toxicity assessments has been on the potential health effects of long-term (chronic) exposure to chemicals. According to OMB, EPA is the only federal agency that develops qualitative and quantitative assessments of both cancer and noncancer risks of exposure to chemicals, and EPA does so largely under the IRIS program. Other federal agencies develop quantitative estimates of noncancer effects or qualitative cancer assessments of exposure to chemicals in the environment. While these latter assessments provide information on the effects of long-term exposures to chemicals, they provide only qualitative assessments of cancer risks (known human carcinogen, likely human carcinogen, etc.) 
and not quantitative estimates of cancer potency, which are required to conduct quantitative risk assessments. EPA’s IRIS assessment process has undergone a number of formal and informal changes during the past several years. While the process used to develop IRIS chemical assessments includes numerous individual steps, or activities, major assessment steps include (1) a review of the scientific literature; (2) preparation of a draft IRIS assessment; (3) internal EPA reviews of draft assessments; (4) two OMB/interagency reviews, managed by OMB, that provide input from OMB as well as from other federal agencies, including those that may be affected by the IRIS assessments if they lead to regulatory or other actions; (5) an independent peer review conducted by a panel of experts; and (6) the completion of a final assessment that is posted to the IRIS Web site. Unlike many other EPA programs that have statutory requirements, including specific time frames for completing mandated tasks, the IRIS program is not subject to statutory requirements or time frames. In contrast, the Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry (ATSDR), which develops quantitative estimates of the noncancer effects of exposures to chemicals in the environment, is statutorily required to complete its assessments within certain time frames. The IRIS database is at serious risk of becoming obsolete because the agency has not been able to routinely complete timely, credible assessments or decrease a backlog of 70 ongoing assessments. Specifically, although EPA has taken important steps to improve the IRIS program and productivity since 2000 and has developed a number of draft assessments for external review, its efforts to finalize the assessments have been thwarted by a combination of factors including the imposition of external requirements, the growing complexity and scope of risk assessments, and certain EPA management decisions. 
In addition, the changes to the IRIS assessment process that EPA was considering at the time of our review would have added to the already unacceptable level of delays in completing IRIS assessments and further limited the credibility of the assessments. EPA has taken a number of steps to help ensure that IRIS contains current, credible chemical risk information; to address its backlog of ongoing assessments; and to respond to new OMB requirements. However, to date, these changes—including increasing funding, centralizing staff conducting assessments, and revising the assessment process—have not enabled EPA to routinely complete credible IRIS assessments or decrease the backlog. That is, although EPA sent 32 draft assessments for external review in fiscal years 2006 and 2007, the agency finalized only 4 IRIS assessments during this time (see fig. 2). Several key factors have contributed to EPA’s inability to achieve a level of productivity that is needed to sustain the IRIS program and database: new OMB-required reviews of IRIS draft assessments by OMB and other federal agencies; the growing complexity and scope of risk assessments; certain EPA management decisions and issues, including delaying completion of some assessments to await new research or to develop enhanced analyses of uncertainty in the assessments; and the compounding effect of delays. Regarding the last factor, even a single delay in the assessment process can lead to the need to essentially repeat the assessment process to take into account changes in science and methodologies. A variety of delays have impacted the majority of the 70 assessments being conducted as of December 2007—48 had been in process for more than 5 years, and 12 of those for more than 9 years. These time frames are problematic because of the substantial rework such cases often require to take into account changing science and methodologies before they can be completed. 
For example, EPA’s assessment of the cancer risks stemming from exposure to naphthalene—a chemical used in jet fuel and in the production of widely used commercial products such as moth balls, dyes, insecticides, and plasticizers—was nearing completion in 2006. However, prior to finalizing this assessment, which had been ongoing for over 4 years, EPA decided that the existing noncancer assessment had become outdated and essentially restarted the assessment to include both cancer and noncancer effects. As a result, 6 years after the naphthalene assessment began, it is now back at the drafting stage. The assessment now will need to reflect relevant research completed since the draft underwent initial external peer review in 2004, and it will have to undergo all of the IRIS assessment steps again, including the additional internal and external reviews that are now required (see app. I). Further, because EPA staff time continues to be dedicated to completing assessments in the backlog, EPA’s ability to both keep the more than 540 existing assessments up to date and initiate new assessments is limited. Importantly, EPA program offices and state and local entities have requested assessments of hundreds of chemicals not yet in IRIS, and EPA data as of 2003 indicated that the assessments of 287 chemicals in the database could be outdated—that is, new information could change the risk estimates currently in IRIS or enable EPA to develop additional risk estimates for chemicals in the database (for example, developing a cancer potency estimate for assessments with only noncancer estimates). In addition, because EPA’s 2003 data are now more than 4 years old, it is likely that more assessments may be outdated now. The consequences of not having current, credible IRIS information can be significant. 
EPA’s inability to complete its assessment of formaldehyde, which the agency initiated in 1997 to update information already in IRIS on the chemical, has had a significant impact on EPA’s air toxics program. Although in 2003 and 2004, the National Cancer Institute and the National Institute of Occupational Safety and Health (NIOSH) had released updates to major epidemiological studies of industrial workers that showed a relationship between formaldehyde and certain cancers, including leukemia, EPA did not move forward to finalize an IRIS assessment incorporating these important data. Instead, EPA opted to await the results of another update to the National Cancer Institute study. While this additional research was originally estimated to take, at most, 18 months to complete, at the time of our report (more than 3 years later) the update was not complete. In the absence of this information, EPA’s Office of Air and Radiation decided to use risk information developed by an industry-funded organization—the CIIT Centers for Health Research—for a national emissions standard. This decision was a factor in EPA’s exempting certain facilities with formaldehyde emissions from the national emissions standard. The CIIT risk estimate indicates formaldehyde’s potency at about 2,400 times lower than the estimate in IRIS that was being re-evaluated and that did not yet consider the 2003 and 2004 National Cancer Institute and NIOSH epidemiological studies. According to an EPA official, an IRIS cancer risk factor based on the 2003 and 2004 National Cancer Institute and NIOSH studies would likely be close to the current IRIS assessment, which EPA has been re-evaluating since 1997. The discrepancy between these two risk estimates raises concerns about whether the public health is adequately protected in the absence of current IRIS information. 
For example, in 1999, EPA published a national assessment that provided information about the types and amounts of air toxics to which people are exposed. The assessment, which also used the CIIT risk estimate for formaldehyde, concluded, for example, that formaldehyde did not contribute significantly to the overall cancer risk in the state of New Jersey. However, in carrying out its own risk assessment on formaldehyde, the New Jersey Department of Environmental Protection opted to use the risk information that is currently in IRIS (dating back to 1991) and found that the contribution from formaldehyde to overall cancer risk in New Jersey is quite significant, second only to diesel particulate matter. (App. I provides additional information on EPA’s IRIS assessment for formaldehyde.) One of the factors that has contributed to EPA’s inability to complete assessments in a timely manner—the new OMB-directed OMB/interagency review process—also limits the credibility of the assessments because it lacks transparency. Specifically, neither the comments nor the changes EPA makes to the scientific IRIS assessments in response to the comments made by OMB and other federal agencies, including those whose workload and resource levels could be affected by the assessments, are disclosed. In addition, the OMB/interagency reviews have hindered EPA’s ability to independently manage its IRIS assessments. For example, without communicating its rationale for doing so, OMB directed EPA to terminate five IRIS assessments that for the first time addressed acute, rather than chronic exposure—even though EPA initiated this type of assessment to help it implement the Clean Air Act. For our March 2008 report, we reviewed the additional assessment process changes EPA was planning and concluded that they would likely exacerbate delays in completing IRIS assessments and further affect their credibility. 
Specifically, despite the OMB/interagency review process that OMB required EPA to incorporate into the IRIS assessment process in 2005, certain federal agencies continued to believe they should have greater and more formal roles in EPA’s development of IRIS assessments. Consequently, EPA had been working for several years to establish a formal IRIS assessment process that would further expand the role of federal agencies in the process—including agencies such as DOD, which could be affected by the outcome of IRIS assessments. For example, some of these agencies and their contractors could face increased cleanup costs and other legal liabilities if EPA issued an IRIS assessment for a chemical that resulted in a decision to regulate the chemical to protect the public. In addition, the agencies could be required to, for example, redesign systems and processes to eliminate hazardous materials; develop material substitutes; and improve personal protective clothing, equipment, and procedures. Under the changes that EPA was planning at the time of our review, these potentially affected agencies would have the opportunity to be involved, or provide some form of input, at almost every step of EPA’s IRIS assessment process. Most significantly, the changes would have provided federal agencies, including those facing potential regulatory liability, with several opportunities during the IRIS assessment process to subject particular chemicals of interest to additional process steps. 
These additional process steps, which would have lengthened assessment times considerably, include the following:
- giving federal agencies and the public 45 days to identify additional information on a chemical for EPA’s consideration in its assessment or to correct any errors on an additional assessment draft that would provide qualitative information;
- giving potentially affected federal agencies 30 days to review the public comments EPA received and initiate a meeting with EPA if they want to discuss a particular set of comments;
- allowing potentially affected federal agencies to have assessments suspended for up to 18 months to fill a data gap or eliminate an uncertainty factor that EPA plans to use in its assessment; and
- allowing other federal agencies to weigh in on (1) the level of independent peer review that would be sought (that is, whether the peer reviews would be conducted by EPA Science Advisory Board panels, National Academies’ panels, or panels organized by an EPA contractor); (2) the areas of scientific expertise needed on the panel; and (3) the scope of the peer reviews and the specific issues they would address.
EPA estimated that assessments that undergo these additional process steps would take up to 6 years to complete. While it is important to ensure that assessments consider the best science, EPA has acknowledged that waiting for new data can result in substantial harm to human health, safety, and the environment. Further, although coordination with other federal agencies about IRIS assessments could enhance their quality, increasing the role of agencies that may be affected by IRIS assessments in the process itself reduces the credibility of the assessments if that expanded role is not transparent. 
In this regard, while EPA’s proposed changes would have allowed for including federal agencies’ comments in the public record, the implementation of this proposal was delayed for a year, in part, because of OMB’s view that agencies’ comments about IRIS assessments represent internal executive branch communications that may not be made public—a view that is inconsistent with the principle of sound science, which relies on, among other things, transparency. (Apps. II and III provide flow charts of, respectively, the IRIS process in place at the time of our review and EPA’s draft proposed process.) To address the productivity and credibility issues we identified, we recommended that the EPA Administrator require the Office of Research and Development to re-evaluate its draft proposed changes to the IRIS assessment process in light of the issues raised in our report and ensure that any revised process, among other things, clearly defines and documents an IRIS assessment process that will enable the agency to develop the timely chemical risk information it needs to effectively conduct its mission. One of our recommendations—that EPA provide at least 2 years’ notice of IRIS assessments that are planned—would, among other things, provide an efficient alternative to suspending assessments while waiting for new research because interested parties would have the opportunity to conduct research before assessments are started. In addition, we recommended that the EPA Administrator take steps to better ensure that EPA has the ability to develop transparent, credible IRIS assessments—an ability that relies in large part on EPA’s independence in conducting these important assessments. 
Actions that are key to this ability include ensuring that EPA can (1) determine the types of assessments it needs to support EPA programs, (2) define the appropriate role of external federal agencies in EPA’s IRIS assessment process, and (3) manage an interagency review process in a manner that enhances the quality, transparency, timeliness, and credibility of IRIS assessments. In its February 21, 2008, letter providing comments on our draft report, EPA said it would consider each of our recommendations in light of the new IRIS process the agency was developing. On April 10, 2008, EPA issued a revised IRIS assessment process, effective immediately. Overall, EPA’s revised process is not responsive to the recommendations made in our March 2008 report—it is largely the same as the draft proposed process we evaluated in that report (see apps. III and IV). Moreover, changes EPA did incorporate into the final process are likely to further exacerbate the productivity and credibility issues we identified in our report. We recommended that EPA ensure that, among other things, any revised process clearly defines and documents a streamlined IRIS assessment process that can be conducted within time frames that minimize the need for wasteful rework. As discussed in our report, when assessments take longer than 2 years, they can become subject to substantial delays stemming from the need to redo key analyses to take into account changing science and assessment methodologies. However, EPA’s revised process institutionalizes a process that the agency estimates will take up to 6 years to complete. Further, the estimated time frames do not factor in the time for peer reviews conducted by the National Academies, which can take 2 years to plan and complete. EPA typically uses reviews by the National Academies for highly controversial chemicals or complex assessments. 
Therefore, assessments of key chemicals of concern to public health that are reviewed by the National Academies are likely to take at least 8 years to complete. These time frames must also be considered in light of OMB’s view that health assessment values in IRIS are out of date if they are more than 10 years old and if new scientific information exists that could change the health assessment values. Thus, EPA’s new process institutionalizes time frames that could essentially require the agency to start assessment updates as soon as 2 years after assessments are finalized in order to keep the IRIS database current. Such time frames are not consistent with our recommendation that EPA develop, clearly define, and document a streamlined IRIS process that can be conducted within time frames that minimize the need for wasteful rework. Further, the agency would need a significant increase in resources to support such an assessment cycle. In addition, EPA had previously emphasized that, in suspending assessments to allow agencies to fill in data gaps, it would allow no more than 18 months to complete the studies and have them peer reviewed. However, under the new process, EPA states that it generally will allow no more than 18 months to complete the studies and have them peer reviewed. As we concluded in our report, we believe the ability to suspend assessments for up to 18 months would add to the already unacceptable level of delays in completing IRIS assessments. Further, we and several agency officials with whom we spoke believe that the time needed to plan, conduct, and complete research that would address significant data gaps, and have it peer reviewed, would likely exceed 18 months. Therefore, the less rigid time frame EPA included in its new process could result in additional delays. Finally, the new process expands the scope of one of the additional steps that initially was to apply only to chemicals of particular interest to federal agencies. 
Specifically, under the draft process we reviewed, EPA would have provided an additional review and comment opportunity for federal agencies and the public for what EPA officials said would be a small group of chemicals. However, under EPA’s new process, this additional step has been added to the assessment process for all chemicals and, therefore, will add time to the already lengthy assessments of all chemicals. We also recommended that the EPA Administrator take steps to better ensure that EPA has the ability to develop transparent, credible IRIS assessments—an ability that relies in large part on EPA’s independence in conducting these important assessments. Contrary to our recommendation, EPA has formalized a revised IRIS process that is selectively, rather than fully, transparent, limiting the credibility of the assessments. Specifically, while the draft process we reviewed provided that comments on IRIS assessments from OMB and other federal agencies would be part of the public record, under the recently implemented process, comments from federal agencies are expressly defined as “deliberative” and will not be included in the public record. Given the importance and sensitivity of IRIS assessments, we believe it is critical that input from all parties, particularly agencies that may be affected by the outcome of IRIS assessments, be publicly available. However, under EPA’s new process, input from some IRIS assessment reviewers—representatives of federal agencies, including those facing potential regulatory liability, and private stakeholders associated with these agencies—will continue to receive less public scrutiny than comments from all others. In commenting on a draft of our March 2008 report, and in a recent congressional hearing, EPA’s Assistant Administrator, Office of Research and Development, stated that the IRIS process is transparent because all final IRIS assessments must undergo public and external peer review. 
However, as we stated in our report, the presence of transparency at a later stage of IRIS assessment development does not explain or excuse its absence earlier. Under the new process, neither peer reviewers nor the public are privy to the changes EPA makes in response to the comments OMB and other federal agencies provide to EPA at several stages in the assessment process—changes to draft assessments or to the questions EPA poses to the peer review panels. Importantly, the first IRIS assessment draft that is released to peer reviewers and to the public includes the undisclosed input from federal agencies potentially subject to regulation and therefore with an interest in minimizing the impacts of IRIS assessments on their budgets and operations. In addition, EPA’s revised process does not provide EPA with sufficient independence in developing IRIS assessments to ensure they are credible and transparent. We made several recommendations aimed at restoring EPA’s independence. For example, we recommended that the EPA Administrator ensure that EPA has the ability to, among other things, define the appropriate role of external federal agencies in the IRIS assessment process and determine when interagency issues have been appropriately addressed. However, under the newly implemented IRIS assessment process, OMB continues to inform EPA when EPA has adequately addressed OMB’s and interagency comments. This determination must be made both before EPA can provide draft assessments to external peer reviewers and to the public and before EPA can finalize and post assessments on the IRIS database. While EPA officials state that ultimately IRIS assessments reflect EPA decisions, the new process does not support this assertion given the clearances EPA needs to receive from OMB to move forward at key stages. In fact, we believe the new IRIS assessment process may elevate the goal of reaching interagency agreement above achieving IRIS program objectives. 
Further, as discussed above, because the negotiations over OMB/interagency comments are not disclosed, whether EPA is entirely responsible for the content of information on IRIS is open to question. In our report, we also emphasized the importance of ensuring that IRIS assessments be based solely on science issues and not policy concerns. However, under the new IRIS assessment process, EPA has further introduced policy considerations into the IRIS assessment process. That is, the newly implemented IRIS assessment process broadens EPA’s characterization of IRIS assessments from “the agency’s scientific positions on human health effects that may result from exposure to environmental contaminants” to “the agency’s science and science policy positions” on such effects. EPA’s new, broader characterization of IRIS raises concerns about the agency’s stated intent to ensure that scientific assessments are appropriately based on the best available science and that they are not inappropriately impacted by policy issues and considerations. For example, in discussing science and science policy at a recent Senate hearing, EPA’s Assistant Administrator of Research and Development described science policy considerations as including decisions about filling knowledge gaps (e.g., whether and to what extent to use default assumptions) and assessing weight-of-the-evidence approaches to make scientific inferences or assumptions. We believe that these are scientific decisions that should reflect the best judgment of EPA scientists who are evaluating the data, using the detailed risk assessment guidance the agency has developed for such purposes. We have concerns about the manner and extent to which other federal agencies, including those that may be affected by the outcome of assessments, are involved in these decisions as well as the lack of transparency of their input. 
As we highlighted earlier, under the National Academies’ risk assessment and risk management paradigm, policy considerations are relevant in the risk management phase—which occurs after the risk assessment phase that encompasses IRIS assessments. The National Academies recently addressed this issue as follows: “The committee believes that risk assessors and risk managers should talk with each other; that is, a ‘conceptual distinction’ does not mean establishing a wall between risk assessors and risk managers. Indeed they should have constant interaction. However, the dialogue should not bias or otherwise color the risk assessment conducted, and the activities should remain distinct; that is, risk assessors should not be performing risk management activities.” EPA’s progress in completing assessments continues to be slow—only four final assessments have been completed in fiscal year 2008. Further, these assessments cover four related chemicals within a larger class of chemicals—polybrominated diphenyl ethers—that were processed and peer reviewed together in a single external peer review panel workshop, but finalized as four separate assessments. Moreover, little or no progress has been made on assessments of the key chemicals highlighted in our report—naphthalene, formaldehyde, Royal Demolition Explosive (RDX), trichloroethylene (TCE), tetrachloroethylene (perc), and 2,3,7,8-tetrachlorodibenzo-p-dioxin (dioxin). At the time of our March 2008 report, all of these assessments, with the exception of tetrachloroethylene, were in the draft development stage. As of September 11, 2008, according to IRIS Track, none of these assessments had moved to the next step—agency review. EPA’s current estimates for completing these assessments raise further concerns. For example, at the time of our report, EPA estimated that it would complete the naphthalene assessment in 2009, which would have reflected a total assessment time frame of 7 years. 
However, since that time, EPA has updated its estimates in IRIS Track, which now indicate that naphthalene will not be completed until November 2011. In addition, EPA does not have any estimate of when it expects to complete three of the assessments—dioxin, RDX, and TCE. The estimated completion dates are listed as “to be determined” for these chemicals. This is particularly concerning for dioxin and TCE, which have already been in progress for over 17 years and over 10 years, respectively. The new IRIS assessment process that EPA implemented in April 2008 will not allow the agency to routinely and timely complete credible assessments. In fact, it will exacerbate the problems we identified in our March 2008 report and sought to address with our recommendations—all of which were aimed at preserving the viability of this critical database, which is integral to EPA’s mission of protecting the public and the environment from exposure to toxic chemicals. Specifically, under the new process, assessment time frames will be significantly lengthened, and the lack of transparency will further limit the credibility of the assessments because input from OMB and other agencies at all stages of the IRIS assessment process is now expressly defined as deliberative and therefore not subject to public disclosure. The position of the Assistant Administrator, Office of Research and Development, that the IRIS process is transparent because all final IRIS assessments must undergo public and external peer review is unconvincing. Transparency at a later stage of the IRIS assessment process—after OMB and other federal agencies have had multiple opportunities to influence the content of the assessment without any disclosure of their input—does not compensate for its absence earlier. 
We continue to believe that to effectively maintain IRIS EPA must streamline its lengthy assessment process and adopt transparency practices that provide assurance that IRIS assessments are appropriately based on the best available science and that they are not inappropriately biased by policy issues and considerations. As discussed in our April 29, 2008, testimony before the Senate Environment and Public Works Committee and our May 21, 2008, testimony before the House Subcommittee on Investigations and Oversight, Committee on Science and Technology, we believe that the Congress should consider requiring EPA to suspend implementation of its new IRIS assessment process and develop a streamlined process that is transparent and otherwise responsive to our recommendations aimed at improving the timeliness and credibility of IRIS assessments. For example, suspending assessments to obtain additional research is inefficient; alternatively, with longer-term planning, EPA could provide agencies and the public with more advance notice of assessments, enabling them to complete relevant research before IRIS assessments are started. In addition, as discussed in our testimonies, the Congress should consider requiring EPA to obtain and be responsive to input from the Congress and the public before finalizing a revised IRIS assessment process. We note that while EPA and OMB initially had planned for EPA to release a draft revised IRIS assessment process to the public, hold a public meeting to discuss EPA’s proposed changes, and seek and incorporate public input before finalizing the process, EPA released its new assessment process without obtaining public input and made it effective immediately. 
This was inconsistent with assertions made in OMB’s letter commenting on our draft report, which emphasized that EPA had not completed the development of the IRIS assessment process and stated: “Indeed, the process will not be complete until EPA circulates its draft to the public for comments and then releases a final product that is responsive to those comments.” Finally, if EPA is not able to take the steps we have recommended to effectively maintain this critical program, other approaches, including statutory requirements, may need to be explored. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact John B. Stephenson on (202) 512-3841 or stephensonj@gao.gov. Contact points for our Congressional Relations and Public Affairs Offices may be found on the last page of this statement. Contributors to this testimony include Christine Fishkin (Assistant Director), Laura Gatz, Richard P. Johnson, and Nancy Crothers. Some key IRIS assessments have been in progress for a number of years, in part because of delays stemming from one or more of the key factors we identified that have hindered EPA’s productivity. Examples include the following: Naphthalene. EPA started the IRIS assessment of cancer risks stemming from the inhalation of naphthalene in 2002. Naphthalene is used in jet fuel and in the production of widely used commercial products such as moth balls, dyes, insecticides, and plasticizers. 
According to a presentation delivered at the 2007 annual meeting of the Society for Risk Analysis by an Army Corps of Engineers toxicologist, “The changing naphthalene regulatory environment includes a draft EPA risk assessment that if/when finalized, will change naphthalene’s status from ‘possible’ to ‘likely’ human carcinogen.” Thus, according to this presentation, one potential impact of this IRIS assessment on DOD is that DOD would need to provide many employees exposed to naphthalene with equipment measuring their exposure to the chemical. In addition, because many military bases are contaminated with naphthalene, a component of jet fuel (approximately 1 percent to 3 percent) used by all DOD services, DOD could face extensive cleanup costs. By 2004, 2 years after starting the assessment, EPA had drafted a chemical assessment that had completed internal peer reviews and was about to be sent to an external peer review committee. Once it returned from external review, the next step, at that time, would have been a formal review by EPA’s IRIS Agency Review Committee. If approved, the assessment would have been completed and released. However, in part because of concerns raised by DOD, OMB asked to review the assessment and conducted an interagency review of the draft. In their 2004 reviews of the draft IRIS assessment, both OMB and DOD raised a number of concerns about the assessment and suggested to EPA that it be suspended until additional research could be completed to address what they considered to be significant uncertainties associated with the assessment. Although not all of the issues raised by OMB and DOD were resolved, EPA continued with its assessment by submitting the draft for external peer review, which was completed in September 2004. 
However, according to EPA, OMB continued to object to the draft IRIS assessment and directed EPA to convene an additional expert review panel on genotoxicity to obtain recommendations about short-term tests that OMB thought could be done quickly. According to EPA, this added 6 months to the process, and the panel, which met in April 2005, concluded that the research that OMB was proposing could not be conducted in the short term. Nonetheless, EPA officials said that the second expert panel review did not eliminate OMB’s concerns regarding the assessment, a situation they described as a stalemate. In September 2006, however, EPA decided to proceed with developing the assessment. By this time, the naphthalene assessment had been in progress for over 4 years, and EPA decided that the IRIS noncancer assessment, issued in 1998, was outdated and needed to be revisited. Thus, EPA expanded the IRIS naphthalene assessment to include both noncancer and cancer assessments. As a result, 6 years after the naphthalene assessment began, it is now back at the drafting stage. The assessment now will need to reflect relevant research completed since the draft underwent initial external peer review in 2004, and it will have to undergo all of the IRIS assessment steps again, including additional internal and external reviews that are now required. This series of delays has limited EPA’s ability to conduct its mission. For example, the Office of Air and Radiation has identified the naphthalene assessment as one of its highest-priority needs for its air toxics program. In addition, the Office of Solid Waste and Emergency Response considers the naphthalene assessment a high priority for the Superfund program—naphthalene has been found in at least 654 of Superfund’s current or former National Priorities List sites. At the time of our March 2008 report, EPA estimated that it would complete the assessment in 2009. 
The agency has since updated its estimate to reflect a later expected completion date, November 30, 2011. Royal Demolition Explosive. This chemical, also called RDX or hexahydro-1,3,5-trinitrotriazine, is a highly powerful explosive used by the U.S. military in thousands of munitions. Currently classified by EPA as a possible human carcinogen, this chemical is known to leach from soil to groundwater. Royal Demolition Explosive can cause seizures in humans and animals when large amounts are inhaled or ingested, but the effects of long-term, low-level exposure on the nervous system are unknown. As is the case with naphthalene, the IRIS assessment could potentially require DOD to undertake a number of actions, including steps to protect its employees from the effects of this chemical and to clean up many contaminated sites. Although EPA started an IRIS assessment of Royal Demolition Explosive in 2000, it has made minimal progress on the assessment because EPA agreed to a request by DOD to wait for the results of DOD-sponsored research on this chemical. In 2007, EPA began to actively work on this assessment, although some of the DOD-sponsored research is still outstanding. Formaldehyde. EPA began an IRIS assessment of formaldehyde in 1997 because the existing assessment was determined to be outdated. Formaldehyde is a colorless, flammable, strong-smelling gas used to manufacture building materials, such as pressed wood products, and used in many household products, including paper, pharmaceuticals, and leather goods. While EPA currently classifies formaldehyde as a probable human carcinogen, the International Agency for Research on Cancer (IARC), part of the World Health Organization, classifies formaldehyde as a known human carcinogen. Since 1986, studies of industrial workers have suggested that formaldehyde exposure is associated with nasopharyngeal cancer, and possibly with leukemia. 
For example, in 2003 and 2004, the National Cancer Institute (NCI) and the National Institute of Occupational Safety and Health (NIOSH) released epidemiological studies following up on earlier studies tracking about 26,000 and 11,000 industrial workers, respectively, exposed to formaldehyde; the updates showed exposure to formaldehyde might also cause leukemia in humans, in addition to the cancer types previously identified. According to NCI officials, the key findings in their follow-up study were an increase in leukemia deaths and, more significantly, an exposure/response relationship between formaldehyde and leukemia—as exposure increased, the incidence of leukemia also rose. As with the earlier study, NCI found more cases of a rare form of cancer, nasopharyngeal cancer, than would usually be expected. The studies from NCI and NIOSH were published in 2003 and 2004, around the time that EPA was still drafting its IRIS assessment. In November 2004, the Chairman of the Senate Environment and Public Works Committee requested that EPA delay completion of its IRIS assessment until an update to the just-released NCI study could be conducted, indicating that the effort would take, at most, 18 months. EPA agreed to wait—and almost 4 years later, the NCI update is not yet complete. NCI plans to release the results of its study in two publications—one focused on lymphatic and hematopoietic tumors and effects (including leukemia) and the other focused on other effects they observed (including other cancers). NCI estimates that the manuscript for the first stage may be published by early 2009. The second manuscript will likely not be published until late 2009. An NCI official said that the additional leukemia deaths identified in the update provide “greater power” to detect associations between exposure to formaldehyde and cancer. 
EPA’s inability to complete the IRIS assessment it started more than 10 years ago in a timely manner has had a significant impact on EPA’s air toxics program. Specifically, when EPA promulgated a national emissions standard for hazardous air pollutants covering facilities in the plywood and composite wood industries in 2004, EPA’s Office of Air and Radiation took the unusual step of not using the existing IRIS estimate but rather decided to use a cancer risk estimate developed by an industry-funded organization, the CIIT Centers for Health Research (formerly the Chemical Industry Institute of Toxicology), that had been used by the Canadian health protection agency. The IRIS cancer risk factor had been subject to criticism because it was last revised in 1991 and was based on data from the 1980s. In its final rule, EPA stated that “the dose-response value in IRIS is based on a 1987 study, and no longer represents the best available science in the peer-reviewed literature.” The CIIT quantitative cancer risk estimate that EPA used in its health risk assessment in the plywood and composite wood national emissions standard indicates a potency about 2,400 times lower than the estimate in IRIS that was being re-evaluated and that did not yet consider the 2003 and 2004 NCI and NIOSH epidemiological studies. According to an EPA official, an IRIS cancer risk factor based on the 2003 and 2004 NCI and NIOSH studies would likely be close to the current IRIS assessment, which EPA has been attempting to update since 1997. The decision to use the CIIT assessment in the plywood national emissions standard was controversial, and officials in EPA’s National Center for Environmental Assessment said the center identified numerous problems with the CIIT estimate. Nonetheless, the Office of Air and Radiation used the CIIT value, and that decision was a factor in EPA exempting certain facilities with formaldehyde emissions from the national emissions standard. 
In June 2007, a federal appellate court struck down the rule, holding that EPA’s decision to exempt certain facilities that EPA asserted presented a low health risk exceeded the agency’s authority under the Clean Air Act. Further, the continued delays of the IRIS assessment of formaldehyde—currently estimated to be completed at the end of 2009 but after almost 11 years still in the draft development stage—will impact the quality of other EPA regulatory actions, including other air toxics rules and requirements. Trichloroethylene. Also known as TCE, this chemical is a solvent widely used as a degreasing agent in industrial and manufacturing settings; it is a common environmental contaminant in air, soil, surface water, and groundwater. TCE has been linked to cancer, including childhood cancer, and other significant health hazards, such as birth defects. TCE is the most frequently reported organic contaminant in groundwater, and contaminated drinking water has been found at Camp Lejeune, a large Marine Corps base in North Carolina. TCE has also been found at Superfund sites and at many industrial and government facilities, including aircraft and spacecraft manufacturing operations. In 1995, the International Agency for Research on Cancer classified TCE as a probable human carcinogen, and in 2000, the Department of Health and Human Services’ National Toxicology Program concluded that it is reasonably anticipated to be a human carcinogen. Because of questions raised by peer reviewers about the IRIS cancer assessment for TCE, EPA withdrew it from IRIS in 1989 but did not initiate a new TCE cancer assessment until 1998. In 2001, EPA issued a draft IRIS assessment for TCE that proposed a range of toxicity values indicating a higher potency than in the prior IRIS values and characterizing TCE as “highly likely to produce cancer in humans.” The draft assessment, which became controversial, was peer reviewed by EPA’s Scientific Advisory Board and released for public comment. 
A number of scientific issues were raised during the course of these reviews, including how EPA had applied emerging risk assessment methods—such as assessing cumulative effects (of TCE and its metabolites) and using a physiologically based pharmacokinetic model— and the uncertainty associated with the new methods themselves. To help address these issues, EPA, DOD, DOE, and NASA sponsored a National Academies review to provide guidance. The National Academies report, which was issued in 2006, concluded that the weight of evidence of cancer and other health risks from TCE exposure had strengthened since 2001 and recommended that the risk assessment be finalized with currently available data so that risk management decisions could be made expeditiously. The report specifically noted that while some additional information would allow for more precise estimates of risk, this information was not necessary for developing a credible risk assessment. Nonetheless, 10 years after EPA started its IRIS assessment, the TCE assessment is back at the draft development stage. At the time of our March 2008 report, EPA estimated that this assessment would be finalized in 2010. Since that time, EPA has modified the expected completion date to “to be determined.” More in line with the National Academies’ recommendation to act expeditiously, five senators introduced a bill in August 2007 that, among other things, would require EPA to both establish IRIS values for TCE and issue final drinking water standards for this contaminant within 18 months. Tetrachloroethylene. EPA started an IRIS assessment of tetrachloroethylene—also called perchloroethylene or “perc”—in 1998. Tetrachloroethylene is a manufactured chemical widely used for dry cleaning of fabrics, metal degreasing, and making some consumer products and other chemicals. 
Tetrachloroethylene is a widespread groundwater contaminant, and the Department of Health and Human Services’ National Toxicology Program has determined that it is reasonably anticipated to be a carcinogen. The IRIS database currently contains a 1988 noncancer assessment based on oral exposure that will be updated in the ongoing assessment. Importantly, the ongoing assessment will also provide a noncancer inhalation risk estimate and a cancer assessment. The IRIS agency review of the draft assessment was completed in February 2005, the draft assessment was sent to OMB for OMB/interagency review in September 2005, and the OMB/interagency review was completed in March 2006. EPA had decided to have the next step, external peer review, conducted by the National Academies—the peer review choice reserved for chemical assessments that are particularly significant or controversial. EPA contracted with the National Academies for a review by an expert panel, and the review was scheduled to start in June 2006 and be completed in 15 months. However, as of December 2007, the draft assessment had not yet been provided to the National Academies. After verbally agreeing with both the noncancer and cancer assessments following briefings on the assessments, the Assistant Administrator, Office of Research and Development, subsequently requested that additional uncertainty analyses—including some quantitative analyses—be conducted and included in the assessment before the draft was released to the National Academies for peer review. As discussed in our March 2008 report on IRIS (GAO-08-440), quantitative uncertainty analysis is a risk assessment tool that is currently being developed, and although the agency is working on developing policies and procedures for uncertainty analysis, such guidance currently does not exist. 
At the time of our March 2008 report, we indicated that the draft tetrachloroethylene assessment had been delayed since early 2006 as EPA staff went back and forth with the Assistant Administrator trying to reach agreement on key issues such as whether a linear or nonlinear model was most appropriate for the cancer assessment and how uncertainty should be qualitatively and quantitatively characterized. EPA officials and staff noted that some of the most experienced staff were being used for these efforts, limiting their ability to work on other IRIS assessments. In addition, we noted that the significant delay had impacted the planned National Academies peer review because the current contract, which has already been extended once, cannot be extended beyond December 2008. The peer review was initially estimated to take 15 months. Since the time of our March 2008 report, EPA released the draft assessment for public comment and held a “listening session” to receive comments. However, the agency has not yet announced a date or a location for the external peer review in the Federal Register. Dioxin. The dioxin assessment is an example of an IRIS assessment that has been, and will likely continue to be, a political as well as a scientific issue. Often the byproducts of combustion and other industrial processes, complex mixtures of dioxins enter the food chain and human diet through emissions into the air that settle on soil, plants, and water. EPA’s initial dioxin assessment, published in 1985, focused on the dioxin TCDD (2,3,7,8-tetrachlorodibenzo-p-dioxin) because animal studies in the 1970s showed it to be the most potent cancer-causing chemical studied to date. Several years later, EPA decided to conduct a reassessment of dioxin because of major advances that had occurred in the scientific understanding of dioxin toxicity and significant new studies on dioxins’ potential adverse health effects. 
Initially started in 1991, this assessment has involved repeated literature searches and peer reviews. For example, a draft of the updated assessment was reviewed by a scientific peer review panel in 1995, and three panels reviewed key segments of later versions of the draft in 1997 and 2000. In 2002, EPA officials said that the assessment would conclude that dioxin may adversely affect human health at lower exposure levels than had previously been thought and that most exposure to dioxins occurs from eating such American dietary staples as meats, fish, and dairy products, which contain minute traces of dioxins. These foods contain dioxins because animals eat plants and commercial feed and drink water contaminated with dioxins, which then accumulate in animals’ fatty tissue. It is clear that EPA’s dioxin risk assessment could have a potentially significant impact on consumers and on the food and agriculture industries. As EPA moved closer to finalizing the assessment, in 2003 the agency was directed in a congressional appropriations conference committee report to not issue the assessment until it had been reviewed by the National Academies. The National Academies provided EPA with a report in July 2006. In developing a response to the report, which the agency is currently doing, EPA must include new studies and risk assessment approaches that did not exist when the assessment was drafted. EPA officials said the assessment will be subject to the IRIS review process once its response to the National Academies’ report is drafted. As of 2008, EPA has been developing the dioxin assessment, which has potentially significant health implications for all Americans, for 17 years.

[Figure: flowchart of EPA’s revised IRIS assessment process. Decision points include whether the chemical is mission critical and whether there is interest in conducting research to close data gaps.] Darker shaded boxes are additional steps, under EPA’s planned changes, to its assessment process and indicate steps where EPA has provided additional opportunity for input from potentially affected federal agencies for mission-critical chemicals. Lighter shaded boxes with dotted lines indicate steps where EPA has provided additional opportunity for input from potentially affected federal agencies for all chemicals. White boxes with heavy lines indicate steps where potentially affected federal agencies already had an opportunity for input.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Environmental Protection Agency's (EPA) Integrated Risk Information System (IRIS) contains EPA's scientific position on the potential human health effects of exposure to more than 540 chemicals. Toxicity assessments in the IRIS database constitute the first two critical steps of the risk assessment process, which, in turn, provides the foundation for risk management decisions. Thus, IRIS is a critical component of EPA's capacity to support scientifically sound environmental decisions, policies, and regulations. This testimony discusses (1) highlights of GAO's March 2008 report, Chemical Assessments: Low Productivity and New Interagency Review Process Limit the Usefulness and Credibility of EPA's Integrated Risk Information System; (2) key aspects of EPA's revised IRIS assessment process, released on April 10, 2008; and (3) progress EPA has made in completing assessments in fiscal year 2008. For the March 2008 report, GAO reviewed and analyzed EPA data and interviewed officials at relevant agencies, including the Office of Management and Budget (OMB). For this testimony, GAO supplemented the prior audit work with a review of EPA's revised IRIS assessment process announced on April 10, 2008. GAO also updated its information on EPA's assessment productivity through September 12, 2008. In March 2008, GAO concluded that the IRIS database was at serious risk of becoming obsolete because EPA had not been able to complete timely, credible assessments or decrease its backlog of 70 ongoing assessments--a total of 4 were completed in fiscal years 2006 and 2007. In addition, assessment process changes EPA had recently made, as well as other changes EPA was considering at the time of GAO's review, would further reduce the credibility and timeliness of IRIS assessments. 
We concluded the following: EPA's efforts to finalize assessments have been thwarted by a combination of factors, including two new OMB-required reviews of IRIS assessments by OMB and other federal agencies and by EPA management decisions, such as delaying some assessments to await new research. The two new OMB/interagency reviews of draft assessments involve other federal agencies in EPA's IRIS assessment process in a manner that limits the credibility of IRIS assessments and hinders EPA's ability to manage them. For example, the OMB/interagency reviews lack transparency, and OMB required EPA to terminate five assessments EPA had initiated to help it implement the Clean Air Act. The changes to the IRIS assessment process that EPA was considering, but had not yet issued at the time of our review, would have added to the already unacceptable level of delays in completing IRIS assessments and further limited the credibility of the assessments. EPA issued its revised IRIS assessment process in April 2008. The new process is largely the same as the draft GAO evaluated and does not respond to the recommendations in GAO's March 2008 report. Moreover, some key changes are likely to further exacerbate the productivity and credibility concerns GAO identified. For example, while the draft process would have made comments from other federal agencies on IRIS assessments part of the public record, EPA's new process defines such comments as "deliberative" and excludes them from the public record. GAO continues to believe that it is critical that input from all parties--particularly agencies that may be affected by the outcome of IRIS assessments--be publicly available. In addition, the estimated time frames under the new process, especially for chemicals of key concern, will likely perpetuate the cycle of delays to which the majority of ongoing assessments have been subject. 
Instead of streamlining the process, as GAO recommended, EPA has institutionalized a process that from the outset is estimated to take 6 to 8 years. This is problematic because of the substantial rework such cases often require to take into account changing science and methodologies. EPA's progress in completing assessments continues to be slow--only four assessments have been completed in fiscal year 2008. Further, these assessments cover a group of four related chemicals that were processed and peer reviewed together but finalized individually. Little or no progress has been made on assessments of chemicals highlighted in our report, including naphthalene, formaldehyde, and trichloroethylene (TCE).
The National Aeronautics and Space Administration Authorization Act of 2010 directed NASA to develop a Space Launch System as a follow-on to the Space Shuttle and as a key component in expanding human presence beyond low-earth orbit. The Act also directed NASA to continue development of a multi-purpose crew vehicle for use with that system. To that end, NASA plans to incrementally develop three progressively larger SLS launch vehicle capabilities—70-, 105- and 130-metric ton (MT) variants—complemented by the Orion and supporting ground systems. Figure 1 below illustrates NASA's planned capabilities for the SLS, Orion, and some of the related GSDO efforts. These capabilities follow the agency's previous attempt to develop a next-generation human spaceflight system, the Constellation program, which was cancelled in 2010 when the program's budget proved inadequate to resolve technical challenges. The first version of the SLS being developed is a 70-metric ton launch vehicle known as Block I. NASA expects to conduct two test flights of the Block I vehicle—the first in 2017 and the second in 2021. The vehicle is scheduled to fly some 70,000 kilometers beyond the moon during the first test flight, known as Exploration Mission-1 (EM-1), and to fly a second mission, known as Exploration Mission-2 (EM-2), to test additional aspects of its performance. After 2021, NASA intends to build 105- and 130-metric ton launch vehicles, known respectively as Block IA/B and Block II, which it expects to use as the backbone of manned spaceflight for decades. NASA anticipates that these launch vehicles will require the development of new systems to achieve the agency's goals for carrying greater amounts of cargo and traveling farther into space. 
The agency has not yet selected specific missions for the increased capabilities to be provided by Block IA/B and Block II but, in keeping with the language contained in the 2010 Authorization Act, anticipates using the vehicles for such deep-space destinations as near-Earth asteroids and Mars. In concert with SLS, NASA expects to evolve the Orion and ground systems. The agency plans an un-crewed Orion capsule to fly atop the SLS during EM-1 in 2017, a crewed capsule during EM-2 in 2021, and ultimately, at a date to be determined, a crewed capsule with capability for such missions as a Mars landing. NASA is also modifying the existing ground systems so that they can support the SLS Block I variant and eventually accommodate the Block IA/B and Block II launch vehicles as well as enhanced versions of the Orion crew capsule. For example, NASA plans to add moveable floors to the vehicle assembly building at Kennedy Space Center so that the three launch vehicle variants can be more easily prepared for flight as the SLS capability evolves. NASA established the preliminary cost estimates for the initial capabilities of the SLS, Orion, and associated GSDO as each of these programs entered the preliminary design and technology completion phase of development, known as key decision point B (KDP-B). At KDP-B, programs use a probability-based analysis to develop a range of preliminary cost and schedule estimates which are used to inform the budget planning for the programs. This phase culminates in a review at key decision point C (KDP-C), known as program confirmation, where cost and schedule baselines with point estimates are established and documented in the agency baseline commitment. After this review, programs are considered to be in the implementation phase of development, and program progress is subsequently measured against these baselines. 
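The probability-based analysis used at KDP-B can be illustrated with a simple Monte Carlo sketch: each cost element is drawn from a distribution between its low and high bounds, and percentiles of the summed trials yield a preliminary low-to-high range. The element names, dollar figures, distribution, and percentile choices below are illustrative assumptions only, not NASA's actual model or data.

```python
import random

def preliminary_cost_range(elements, n_trials=100_000, seed=1):
    """Monte Carlo sketch of a probability-based (S-curve) cost estimate.

    `elements` maps each cost element to a (low, likely, high) triple in
    billions of dollars; every trial draws each element from a triangular
    distribution and sums the draws.
    """
    random.seed(seed)
    totals = sorted(
        sum(random.triangular(lo, hi, likely) for lo, likely, hi in elements.values())
        for _ in range(n_trials)
    )
    def pct(p):  # value at the p-th percentile of the sorted trial totals
        return totals[int(p * (n_trials - 1))]
    # Report, for illustration, the 30th and 70th percentiles as the range.
    return round(pct(0.30), 1), round(pct(0.70), 1)

# Hypothetical (low, likely, high) triples in billions -- not NASA figures.
elements = {
    "launch vehicle": (6.0, 7.0, 9.0),
    "crew capsule":   (7.5, 8.5, 11.0),
    "ground systems": (2.0, 2.5, 3.5),
}
low, high = preliminary_cost_range(elements)
```

The range tightens or widens with the percentile levels chosen, which is one reason a reported low-to-high range is only meaningful alongside its stated confidence levels.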
NASA plans to hold the program confirmation review for SLS in spring 2014 and expects to conduct the KDP-C review for GSDO in May 2014 and Orion in December 2014. Because the life cycle costs of these programs are expected to exceed $250 million, NASA is required to report the programs’ baseline estimates to Congress once the programs are approved to move into implementation. The agency provides this information through its annual budget submission. NASA also uses the annual budget submission to inform Congress about the preliminary cost ranges for projects proceeding into formulation. NASA’s preliminary cost estimates for the SLS, Orion, and associated GSDO programs do not provide a complete picture of the costs required to develop and operate the programs through the entire course of their respective life cycles. These preliminary estimates include the funding required for the scope of work related to initial capabilities—that is, development and operations through 2017 for the SLS launch vehicle and ground systems and through 2021 for the Orion. NASA also expects to use this same limited scope of work to develop the SLS, Orion, and GSDO baseline cost estimates. Moreover, NASA’s estimates do not capture the cost of the second flight of the 70-metric ton vehicle during EM-2, the costs of development work that will be necessary to fly the increased 105- and 130-metric ton SLS capabilities, and the costs associated with legacy hardware that will be used for the Orion program. In contrast, best practices for cost estimation call for “cradle to grave” life cycle cost estimates in order to help assess a program’s long-term affordability. NASA’s preliminary cost estimates for the three programs’ initial capabilities total a low-to-high cost range of approximately $19 to $22 billion. Table 1 below depicts the scope, including content and schedule, of the SLS, Orion, and GSDO initial capabilities’ preliminary cost estimates. 
As the SLS, Orion, and GSDO programs move from formulation into implementation phases, NASA plans to use the same content and scope for calculating the programs' respective baseline cost estimates. NASA's preliminary cost estimates for SLS, Orion, and GSDO provide no information about the longer-term, life cycle costs of developing, manufacturing, and operating the launch vehicle, crew capsule, and ground systems: The SLS estimate does not cover the cost to build the second 70-metric ton vehicle and conduct EM-2 in 2021 with that vehicle. NASA is already incurring costs for EM-2 because it is funding some EM-2 development in concert with EM-1 efforts, such as work on the solid rocket boosters and core stage that are expected to help power the 70-metric ton SLS. NASA officials indicated at one point in our review that they did not expect to begin formally tracking EM-2 costs until after the SLS design's maturity was assessed at a critical design review scheduled for 2015; however, the agency stated in technical comments to this report that it is tracking those costs for budget purposes and plans to begin formally reporting them once SLS reaches the project confirmation phase. Additionally, the SLS estimate does not address the potential for costs NASA would incur to produce, operate, and sustain flights of the 70-MT Block I capability beyond 2021. NASA officials stated that there are currently no plans to fly that vehicle beyond 2021, but that the agency could reassess its decision if a specific mission arises for the vehicle. The SLS estimate also does not include costs to design, develop, build, and produce the 105- or 130-metric ton Block IA/B and Block II SLS variants that NASA intends to use well into the future. 
NASA indicated that these variants will require new systems development efforts—including advanced boosters and a new upper stage to meet the greater performance requirements associated with larger payloads as well as travel to Mars or other deep-space locations. NASA has started funding concept development, trades, and analyses related to these new designs, such as assessing the use of lightweight materials to construct the upper stage and selective laser melting to produce system components. In addition, NASA anticipates a re-start of the production line for the RS-25 engine that it plans to use to power the Block IA/B and Block II vehicles. Currently, the agency has enough residual RS-25 liquid-fuel engines from the Space Shuttle program to launch the SLS for up to 4 flights. NASA expects to need more of the engines beyond that, but it has not yet finalized acquisition plans to manufacture them. According to agency officials, re-starting the production line would entail at least 3 years, whereas development of a new engine would require a minimum of 8 years. The Orion estimate does not address costs for production, operations, or sustainment of additional crew capsules after 2021, nor does it address prior costs incurred when Orion was being developed as part of the now-defunct Constellation program. NASA initiated the crew capsule's development in 2006 as part of the Constellation program. During the approximately 4 years that the capsule's development occurred under Constellation, the agency spent about $4.7 billion for the capsule's design and development. When Constellation was cancelled in 2010 and the work transitioned to the current Orion program, however, NASA excluded the Constellation-related costs from Orion's current preliminary cost estimate of $8.5 to $10.3 billion through 2021. 
The GSDO estimate does not address the costs to develop or operate SLS ground systems infrastructure beyond EM-1 in 2017, although NASA intends to modify ground architecture to accommodate all SLS variants. NASA officials have indicated that the road ahead involves many decisions about the programs beyond 2021, including how development will proceed, what missions will be performed, when the programs will end, and how each effort will be managed. They noted that the agency is using a capability-based approach to SLS, Orion, and the associated GSDO development, in which system capability grows over time. They indicated that the programs' preliminary cost estimates are for attainment of capabilities rather than the full cost of the programs, and that it is difficult to define life cycle costs because the programs' intended long-term uses and life spans have not been fully determined. According to NASA, the agency is developing a tailored definition for life cycle cost estimating that is allowed by NASA requirements. Because the missions drive the number and types of vehicles, crew capsules, and ground systems that would be required, as missions are defined, NASA officials said they would be in a better position to estimate the programs' life cycle costs. The officials stated that NASA is looking ahead to future costs as much as possible, and NASA indicated in technical comments to this report that the SLS program plans to begin formally reporting costs for the launch vehicle's EM-2 after the program's anticipated confirmation in spring 2014. We recognize that defining life cycle costs can be difficult when uncertainties exist. However, in contrast to NASA's tailored approach, both widely-accepted best practices for cost estimation and the agency's own requirements support the need for full life cycle cost estimates. 
Even when uncertainties exist, best practices maintain that a high-quality cost estimate takes into account those uncertainties while forecasting the minimum and maximum range of all life cycle costs. The best practices, developed by GAO in concert with the public and private sector cost estimating communities, call for "cradle to grave" life cycle cost estimates and maintain that life cycle cost estimates should provide an exhaustive, structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program. This entails identification of all pertinent cost elements, from initial concept through operations, support, and disposal. Likewise, NASA's program management requirements direct that programs develop a preliminary full life cycle cost estimate. In accordance with the agency's guidance regarding life cycle costs, such an estimate would encompass total costs from the formulation through the implementation phase, including design, development, mission operations, support, and disposal activities. According to best practices, because life cycle estimates encompass all possible costs, they provide a wealth of information about how much programs are expected to cost over time. Life cycle cost estimates, including a range for preliminary costs as directed by NASA requirements for programs in the formulation phase, enhance decision making, especially in early planning and concept formulation of acquisition. High-quality cost estimates, as noted by best practices, can support budgetary decisions, key decision points, milestone reviews, and investment decisions. For example, a preliminary life cycle cost estimate provides the basis of the financial investment that the agency is committing the government to, while a baseline life cycle cost estimate forms the basis for measuring cost growth over time. 
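A "cradle to grave" estimate, in the sense the best practices describe, can be sketched as a roll-up that refuses to total anything less than every phase of the life cycle. The phase names and (minimum, maximum) dollar figures below are hypothetical illustrations, not values from NASA's estimates.

```python
# Minimal sketch of a cradle-to-grave life cycle cost (LCC) roll-up: every
# phase must be estimated, even if only as a wide (min, max) range where
# missions are still undefined. All figures are hypothetical, in billions.

PHASES = ("development", "production", "operations", "sustainment", "disposal")

def life_cycle_range(estimate):
    """Sum per-phase (min, max) ranges into a program-level range.

    Raises ValueError if any phase is missing, mirroring the practice that
    an LCC estimate must account for all cost elements, not a subset.
    """
    missing = [p for p in PHASES if p not in estimate]
    if missing:
        raise ValueError(f"incomplete life cycle estimate; missing: {missing}")
    lo = sum(r[0] for r in estimate.values())
    hi = sum(r[1] for r in estimate.values())
    return lo, hi

# A scope limited to initial capability (development only) fails fast,
# while a full estimate yields a bounded program-level total.
full = {
    "development": (7.0, 9.0),
    "production": (3.0, 5.0),
    "operations": (4.0, 8.0),
    "sustainment": (2.0, 4.0),
    "disposal": (0.2, 0.5),
}
print(life_cycle_range(full))  # (16.2, 26.5)
```

The point of the failure mode is the one the best practices make: an estimate scoped only to an initial capability cannot answer the long-term affordability question at all.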
Because NASA expects to continue with a limited scope for the SLS, Orion, and GSDO baseline estimates, however, cost growth over time within the programs will be difficult to identify and could be masked as growth in the SLS capability if the most current cost estimate did not contain the same content as the baseline estimate. As noted in best practices for cost estimating, the quality of a program's cost estimate is also key to determining its affordability, that is, the degree to which a program's funding requirements fit within an agency's overall portfolio plan. However, NASA's preliminary cost estimates do not address the affordability of increased capabilities because they exclude the life cycle costs associated with the SLS Block IA/B and Block II launch vehicles that the agency intends to use well into the future. According to agency officials at the time of our review, NASA had not yet decided whether it will manage the Block IA/B and Block II development efforts as individual programs and, if so, what the programs' scope would be. Best practices for cost estimating look favorably on the incremental development approach NASA has chosen for SLS, and they also state that programs following such an approach should clearly define the characteristics of each increment of capability so that a rigorous life cycle cost estimate can be developed. In addition, we have previously concluded that it is prudent for an agency to manage increasing capabilities of an existing program on par with the investments yet to come and in a way that is beneficial for oversight. For example, we have recommended that agencies developing weapon systems in increments consider establishing each increment of increased capability with its own cost and schedule baseline. 
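The masking problem, in which growth on baselined work is conflated with newly added scope, can be shown with a minimal sketch. The content labels and dollar values are hypothetical.

```python
# Sketch of why comparing estimates with mismatched scope masks cost growth.
# Content names and dollar values (billions) are hypothetical illustrations.

baseline = {"Block I development": 7.0}              # scope at confirmation
current  = {"Block I development": 8.5,
            "Block IA/B upper stage": 1.5}           # later estimate; scope grew

def growth_against_baseline(baseline, current):
    """Measure growth only over content that was present in the baseline."""
    common = baseline.keys() & current.keys()
    return sum(current[k] - baseline[k] for k in common)

# A naive comparison of totals conflates real growth with added scope:
naive = sum(current.values()) - sum(baseline.values())   # 3.0, but how much is growth?
# A scope-matched comparison isolates growth on the baselined work:
matched = growth_against_baseline(baseline, current)     # 1.5
```

Keeping the baseline and current estimates at the same content level is what lets the 1.5 of genuine growth be distinguished from the 1.5 of added scope.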
According to cost estimating best practices, dividing programs into smaller pieces makes management and testing easier and helps avoid unrealistic cost estimates, resulting in more realistic long-range investment funding and more effective resource allocation. These are important considerations given that NASA is likely to spend billions of dollars beyond its initial investment of up to $22 billion to develop the increased capabilities. Development of human-rated liquid-fueled engines, for example, has been among the most difficult, time-intensive, and costly parts of launch vehicle development. As a case in point, NASA spent about 8 years and $1.5 billion to develop a human-rated engine known as J-2X for use on Ares launch vehicles within the agency's now-defunct Constellation program. NASA has faced issues with affordability of its manned space flight investments and other major projects in the past, and those affordability issues have sometimes contributed to a program's cancellation. For example, NASA originally envisioned that the Space Shuttle would fly up to 100 times per vehicle at a cost of $7.7 million per launch. In reality, the Shuttle flew 135 times in total over a period of 30 years at a cost that was about $3.5 billion per year around the 2008 timeframe. Amid concerns that included the Shuttle's costs and safety, the program ended. NASA then focused on building human spaceflight alternatives that included Constellation. In 2010, Constellation was canceled because, as noted by NASA's Administrator, the program could not return astronauts to the moon at an affordable cost and would require far more funding to make the agency's approach viable. In a recent example noted in the agency's 2015 presidential budget request, NASA may place in storage the Stratospheric Observatory for Infrared Astronomy, an airborne observatory for studying astronomical objects and phenomena, after spending some 23 years and more than $1 billion to develop the project. 
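The gap between the envisioned and realized Shuttle costs can be made concrete with a rough calculation. The annual flight rate used below is an assumed illustrative figure, not a number from this report; only the $7.7 million and $3.5 billion figures come from the text.

```python
# Rough per-flight cost comparison for the Shuttle, using figures from the
# text plus one labeled assumption (flights per year circa 2008).
ENVISIONED_PER_LAUNCH = 7.7e6        # dollars per launch, original vision
ANNUAL_COST_2008 = 3.5e9             # dollars per year, ~2008 timeframe
ASSUMED_FLIGHTS_PER_YEAR = 4         # illustrative assumption only

realized_per_flight = ANNUAL_COST_2008 / ASSUMED_FLIGHTS_PER_YEAR
ratio = realized_per_flight / ENVISIONED_PER_LAUNCH
print(f"~${realized_per_flight / 1e6:.0f} million per flight, "
      f"roughly {ratio:.0f}x the envisioned cost")
```

Under that assumed flight rate, the realized cost per flight is on the order of two orders of magnitude above the original vision, which is the kind of divergence early life cycle estimates are meant to surface.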
The agency cited high operating costs, estimated at some $1.8 billion over the project’s planned life, as a factor in its considerations. The SLS, Orion, and GSDO programs NASA has established to fulfill its mandate of providing the capability for transporting humans to space are well underway. These programs represent a significant investment for the country—as much as $22 billion for initial capabilities and potentially billions more to field increased capabilities over time as envisioned in the 2010 NASA Authorization Act. Given the goals that have been outlined for NASA as part of the National Space Transportation Policy, the success of these programs is to be measured not only by the capability that is achieved but also by NASA’s ability to achieve them within a reasonable timeframe and cost to the U.S. taxpayer. As such, establishing these programs with both near-term and long-term affordability in mind is key. The limited scope that NASA has chosen to use as the basis for formulating the programs’ cost baselines, however, does not provide the transparency necessary to assess long-term affordability and will hamper oversight by those tasked with assessing whether the agency is progressing in a cost-effective and affordable manner. If the SLS, Orion, and GSDO baseline cost estimates cannot be compared to current costs, the baseline estimates lose their usefulness because they no longer serve as a means to hold NASA accountable for cost growth and program progress. Furthermore, if NASA does not clearly delineate costs for operations and sustainment of the initial capabilities or separate cost and schedule baselines for upcoming capabilities, then it will be difficult to assess program affordability and for the Congress to make informed, long-term budgetary decisions. Estimates that use all available information to establish a potential range of costs for the full scope of these upcoming capabilities can help inform such decisions. 
To provide the Congress with the necessary insight into program affordability, ensure its ability to effectively monitor total program costs and execution, and to facilitate investment decisions, we recommend that NASA's Administrator direct the Human Exploration and Operations Mission Directorate to take the following three actions: Establish a separate cost and schedule baseline for work required to support the SLS Block I EM-2 and report this information to the Congress through NASA's annual budget submission. If NASA decides to fly the SLS Block I beyond EM-2, establish separate life cycle cost and schedule baseline estimates for those efforts, to include funding for operations and sustainment, and report this information annually to Congress via the agency's budget submission. Because NASA intends to use the increased capabilities of the SLS, Orion, and GSDO efforts well into the future and has chosen not to estimate costs associated with achieving the capabilities, establish separate cost and schedule baselines for each additional capability that encompass all life cycle costs, to include operations and sustainment. When NASA cannot fully specify costs due to lack of well-defined missions or flight manifests, forecast a cost estimate range, including life cycle costs, having minimum and maximum boundaries. These baselines or ranges should be reported to Congress annually via the agency's budget submission. Because a significant amount of the original Orion development work occurred under the Constellation program, include those costs in the baseline cost estimate for the Orion program. NASA provided written comments on a draft of this report. These comments are reprinted in Appendix I. 
In responding to a draft of our report, NASA partially concurred with our three recommendations, citing among other reasons that actions already in place at the time of our review—such as establishing SLS, Orion, and GSDO as separate programs and a block upgrade approach for SLS—and actions it plans to take to track costs met the intent of our recommendations. In most cases, the actions that NASA plans to take do not fully address the issues we raised in this report. We continue to believe that our recommendations are valid and should be fully addressed as discussed below. NASA also provided technical comments, which we incorporated as appropriate. NASA partially concurred with our first recommendation to establish a separate cost and schedule baseline for work required to support the SLS Block I EM-2, report this information to the Congress through NASA's annual budget submission, and establish separate life cycle cost and schedule baseline estimates for EM-2 if NASA decides to fly Block I beyond EM-2. NASA also partially concurred with our second recommendation to establish separate cost and schedule baselines that encompass life cycle costs, including operations and sustainment, for each additional SLS, Orion, and GSDO capability and to report cost estimates for the capabilities annually via the agency budget submission until key requirements are defined and baselines can be established. In its response, NASA stated that it had established separate programs for SLS, Orion, and GSDO and adopted a block upgrade approach for SLS. This approach, NASA stated, is in concert with best practices and NASA policy. In addition, NASA indicated that it will establish cost and schedule estimates for initial demonstration of the three programs as they enter respective implementation phases and will begin reporting development, operations, and sustainment costs for SLS Block I and subsequent variants starting in fiscal year 2016 via its annual budget submission to Congress. 
Finally, the agency stated that it intends to conduct design reviews for upgraded SLS elements, including the upper stage and booster, and set up cost commitments similar to what it has done for Block I capability as part of that design review process, but that it does not intend to establish life cycle estimates for SLS through the end of the program because flight rates, mission destinations and other strategic parameters are yet unknown. As discussed in the report, best practices for cost estimating recognize that NASA’s evolutionary development approach for SLS, Orion, and GSDO helps reduce risk and provide capabilities more quickly. Given NASA’s planned long-term use of the SLS, Orion, and GSDO, its block upgrade approach and intention to conduct design reviews for each of the planned upgrades will provide some understanding of the development work and resources required. For example, such reviews are typically expected to yield information about technical progress against requirements. While NASA's prior establishment of SLS, Orion, and GSDO as separate programs lends some insight into expected costs and schedule at the broader program level, it does not meet the intent of our first two recommendations because cost and schedule identified at that level is unlikely to provide the detail necessary to monitor the progress of each block against a baseline. Furthermore, it is unclear from NASA's response whether the cost commitments the agency plans within the design review process will serve the same purpose as establishing a cost baseline for each respective upgrade. Additionally, NASA's planned approach for reporting costs associated with EM-2 and subsequent variants of SLS via its annual budget submission only partially meets the intent of our first two recommendations. 
Providing cost information at an early phase when baseline estimates have yet to be established is helpful to ensure costs associated with EM-1 and EM-2 are not conflated and funding requirements for future flights of the Block I SLS and future variants are somewhat understood. Reporting the costs via the budget process alone, however, will not provide information about potential costs over the long term because budget requests neither offer all the same information as life cycle cost estimates nor serve the same purpose. Plainly, progress cannot be assessed without a baseline that serves as a means to compare current costs against expected costs. An agency's budget submission reflects its current annual fiscal needs and anticipated short-term needs up through an additional 4-year period for a particular program, is subject to change based on fiscal negotiation, and is not necessarily linked to an established baseline that indicates how much the agency expects to invest to develop, operate, and sustain a capability over the long term. Conversely, life cycle cost estimates establish a full accounting of all program costs for planning, procurement, operations and maintenance, and disposal and provide a long-term means to measure progress over a program's life span. As NASA establishes parameters for the additional flights of the first SLS capability and upgraded capabilities, including flight rates, mission destinations, and other requirements, it will be well-poised to move from reporting costs in budget submissions to establishing baseline cost and schedule estimates for each capability and reporting progress against these respective baselines. Therefore, we continue to believe that NASA should baseline costs for EM-2 and each future variant of SLS and report progress against those established baselines. NASA makes no specific mention of how it plans to account for future work associated with Orion and GSDO. 
We believe it is important to treat Orion and GSDO with the same significance as SLS because this trio of programs is expected to work in concert now and in the future to achieve NASA's goals for human space exploration. Reporting Orion and GSDO development, operations, and sustainment costs in the annual budget request, as NASA plans for SLS, would be a logical first step. Just as with SLS, however, it will be important for NASA to establish and report progress against baseline costs and schedules for each block of Orion and GSDO efforts as flight rates, missions, and other strategic parameters are defined because doing so will help the agency more effectively manage not only each program but its human exploration portfolio as a whole. NASA partially concurred with our third recommendation to include the costs of Orion development work under the Constellation program as part of the baseline cost estimate for the Orion program. Agency officials stated that they agree those costs should be tracked and disclosed, but that the current Orion program has a different concept of operations, requirements, and budget plan than it had under the Constellation effort. The past costs incurred for Orion's development are important because they provide visibility into the total cost of developing a crew capsule for human space exploration. Exclusion of these costs from Orion's current estimate understates how much NASA will invest to put humans into space. Although NASA notes that it has changed Orion's concept of operations and requirements, the agency nonetheless migrated Orion critical technology development efforts from Constellation to the current program. For example, NASA began efforts to develop the coating for Orion's heat shield as part of Constellation, and the agency continues that development today in preparation for the capsule's launch atop SLS. 
Therefore, we continue to believe our recommendation to include Orion development costs under Constellation in the baseline cost estimate for the current Orion program is valid and should be fully implemented. We are sending this report to NASA’s Administrator and to interested congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in Appendix II. Key contributors to this report were Shelby S. Oakley, Assistant Director; Tana M. Davis; John S. Warren, Jr.; Jennifer Echard; Laura Greifner; Roxanna Sun; and Sylvia Schatz.
NASA is undertaking a trio of closely-related programs to continue human space exploration beyond low-Earth orbit: the SLS vehicle; the Orion capsule, which will launch atop the SLS and carry astronauts; and the supporting ground systems. As a whole, the efforts represent NASA's largest exploration investment over the next decade, potentially as much as $22 billion, to demonstrate initial capabilities. Beyond 2021, NASA plans to incrementally develop progressively more-capable SLS launch vehicles complemented by Orion capsules and ground systems. GAO was asked to assess the costs of NASA's human exploration program. This report examines the scope of NASA's preliminary cost estimates for the three programs. To conduct this work, GAO reviewed NASA information on cost estimates for the three programs, discussed the estimates with NASA officials, and assessed the estimates against best practices criteria in GAO's cost estimating guidebook as well as NASA's own requirements and guidance. The scope of the National Aeronautics and Space Administration's (NASA) preliminary cost estimates for the Space Launch System (SLS), Orion Multi-Purpose Crew Vehicle (Orion), and associated ground systems encompasses only the programs' initial capabilities and does not include the long-term, life cycle costs associated with the programs or significant prior costs: The SLS estimate is based on the funding required to develop and operate the initial 70-metric ton variant through first flight in 2017 but not the costs for its second flight in 2021. NASA is now incurring some costs related to the second flight, but it is not currently tracking those costs for life cycle cost estimating purposes. Furthermore, the estimate does not include costs to incrementally design, develop, and produce future 105- and 130-metric ton SLS variants which NASA expects to use for decades. NASA is now funding concept development and analysis related to these capabilities. 
The Orion estimate does not include costs for production, operations, or sustainment of additional crew capsules, despite plans to use and possibly enhance this capsule after 2021. It also does not include $4.7 billion in prior costs incurred during the approximately 4 years when Orion was being developed as part of NASA's now-defunct Constellation program. The ground systems estimate excludes costs to develop or operate the ground systems infrastructure beyond 2017, although NASA intends to modify ground architecture to accommodate all SLS variants. NASA expects to use this same limited scope of work to establish the programs' baseline cost estimates in 2014. According to NASA, the agency is developing a tailored definition for the programs' life cycle cost estimates as allowed by NASA requirements. Agency officials stated that NASA chose its approach in part due to uncertainties about the programs' end dates and missions beyond 2021. GAO recognizes that defining life cycle costs can be difficult when uncertainties exist, and that best practices for cost estimating look favorably on evolutionary development. Even so, best practices expect that a high-quality cost estimate will account for program uncertainties, forecast a minimum and maximum range for all life cycle costs, and clearly define the characteristics of each increment of capability so that a rigorous life cycle cost estimate can be developed. According to these practices as well as NASA's requirements and guidance, life cycle cost estimates should encompass all past, present, and future costs for a program, including costs for operations, support, and disposal. The limited scope that the agency has chosen for constructing preliminary and baseline cost estimates, however, means that the estimates are unlikely to serve as a way to measure progress and track cost growth over the life of the programs. 
For example, cost growth on the current SLS variant could be masked as the addition of scope associated with work for future variants, and the baseline estimate would no longer be applicable. Insight into program costs helps decision makers understand the long-term affordability of programs—a key goal of the National Space Transportation Policy—and helps NASA assess management of its portfolio to achieve increasing capabilities as directed in the NASA Authorization Act of 2010. NASA should establish separate cost baselines that address the life cycle of each SLS increment, as well as of any evolved Orion or ground systems capability, among other actions to enable assessment of affordability and enhance oversight. In commenting on a draft of this report, NASA partially concurred with GAO's recommendations, stating that actions taken to structure the programs and track costs met their intent. However, GAO believes NASA's responses do not fully address the issues raised in this report.
In May 2003, the Coalition Provisional Authority (CPA) dissolved the military organizations of the former regime and began the process of reestablishing or creating new Iraqi security forces, including the police and a new Iraqi army. Over time, multinational force commanders assumed responsibility in their areas for recruiting and training some Iraqi defense and police forces. In October 2003, the multinational force outlined a multistep plan for transferring security missions to Iraqi security forces. The plan had the objective of gradually decreasing the number of coalition forces in conjunction with neutralizing Iraq’s insurgency and developing Iraqi forces capable of securing their country. Citing the growing capability of Iraqi security forces, coalition forces in Iraq began to shift responsibilities to Iraqi security forces in February 2004, earlier than planned. According to the President, senior DOD officials, and multinational force commanders, Iraqi forces were unprepared to assume security responsibilities and responded poorly to a series of anti-coalition attacks in April 2004. In western and central Iraq, insurgents attacked the multinational force in Fallujah, Baghdad, Ar Ramadi, Samarra, and Tikrit, while a radical Shi’a militia, the Mahdi Army, launched operations to dislodge multinational forces and occupy cities from Baghdad to Basra in the south. Although some Iraqi forces fought alongside coalition forces, other units abandoned their posts and responsibilities and, in some cases, assisted the insurgency. MNF-I identified a number of problems that contributed to the collapse of Iraqi security forces, including problems in training and equipping them. In May 2004, the President issued a National Security Presidential Directive, which stated that, after the transition of power to the Iraqi government, DOD would be responsible for U.S. activities relating to security and military operations. The Presidential directive established that the U.S. 
Central Command (CENTCOM) would direct all U.S. government efforts to organize, equip, and train Iraqi security forces. In the summer of 2004, MNF-I developed and began implementing a comprehensive campaign plan, which elaborated and refined the original strategy for transferring security responsibilities to Iraqi forces. In April 2006, MNF-I revised the campaign plan and, in conjunction with the U.S. Embassy in Baghdad, issued a new Joint Campaign Plan that contains the goal of transitioning security responsibility from MNF-I to the Iraqi security forces and government. Further details on the campaign plan are classified. In late August 2006, the MNF-I Commanding General said that the United States is helping Iraq build a force to deal with its current security threats of international terrorism and insurgency. He noted, however, that the Iraqi government is developing a long-term security plan to shape the type of armed forces that the country will need 5 to 10 years from now. Since June 2003, overall security conditions in Iraq have deteriorated and grown more complex, as evidenced by increased numbers of attacks and more recent Sunni/Shi’a sectarian strife after the February 2006 bombing of the Golden Mosque in Samarra. The deteriorating conditions threaten continued progress in U.S. and other international efforts to assist Iraq in the political and economic areas. Moreover, the Sunni insurgency and Shi’a militias have contributed to an increase in sectarian strife and large numbers of Iraqi civilian deaths and displaced individuals. Enemy-initiated attacks against the coalition and its Iraqi partners have continued to increase through July 2006 (see fig. 1). Since 2003, enemy-initiated attacks have increased around major religious or political events, including Ramadan and elections. Attack levels also follow a seasonal pattern, increasing through the spring and summer and decreasing in the fall and winter months. 
Overall, attacks increased by 23 percent from 2004 to 2005. After declining in the fall of 2005, the number of attacks rose to the highest level ever in July 2006. Total attacks reported from January 2006 through July 2006 were about 57 percent higher than the total reported during the same period in 2005. These data show significant increases in attacks against coalition forces, who remain the primary targets, as well as civilians and Iraqi security forces. According to a June 2006 UN report, an increasingly complex armed opposition continues to be capable of maintaining a consistently high level of violent activity across Iraq. Baghdad, Ninewa, Salahuddin, Anbar, and Diyala have been experiencing the worst of the violence. Other areas, particularly Basra and Kirkuk, have witnessed increased tension and a growing number of violent incidents. In August 2006, DOD reported that breaking the cycle of violence is the most pressing immediate goal of coalition and Iraqi operations. The security situation has deteriorated even as Iraq has made progress in meeting key political milestones and in developing its security forces. Since the CPA transferred power to the Iraqi interim government in June 2004, Iraq has held an election for a transitional government in January 2005, a referendum on the constitution in October 2005, and an election for a Council of Representatives in December 2005 that led to the formation of a new government in May 2006 (see fig. 2). However, according to the Director of the Defense Intelligence Agency (DIA), the December 2005 elections appeared to heighten sectarian tensions and polarize sectarian divides. According to a U.S. Institute of Peace report, the focus on ethnic and sectarian identity has sharpened as a result of Iraq’s political process, while nationalism and a sense of Iraqi identity have weakened. 
Moreover, according to the Director of National Intelligence’s February 2006 report, Iraqi security forces are experiencing difficulty in managing ethnic and sectarian divisions among their units and personnel. In addition, the DIA Director reported that many elements of the Iraqi security forces are loyal to sectarian and party interests. According to DOD’s August 2006 report, sectarian lines among Iraqi security forces are drawn along geographic lines, with Sunni, Shi’a, or Kurdish soldiers mostly serving in units located in geographic areas familiar to their group. Moreover, according to the report, commanders at the battalion level tend to command only soldiers of their own sectarian or regional background. On August 7, 2006, MNF-I and Iraqi security forces began phase II of Operation Together Forward. The operation is an effort to reduce the level of murders, kidnappings, assassinations, terrorism, and sectarian violence in Baghdad and to reinforce the Iraqi government’s control of the city. On August 30, 2006, the MNF-I Commanding General said that he was pleased with the operation’s progress, but that there was a long way to go in bringing security to the neighborhoods of Baghdad. U.S. intelligence assessments of this operation’s impact are classified. The State Department reported in July 2006 that the recent upturn in violence has hindered the U.S. government’s efforts to engage fully with its Iraqi partners and to move forward on political and economic fronts. State noted that a baseline of security was a prerequisite for moving forward on these fronts, which are essential to achieving the right conditions for withdrawing U.S. forces. For example, Iraqi government efforts to foster reconciliation have become more difficult with the increase in sectarian divisions and violence during the spring and summer of 2006. 
According to DOD’s August 2006 report, security issues—such as the attempted kidnapping of a deputy minister and threats to personnel who work with embassy teams—have made some ministers reluctant to have U.S. personnel visit them. The report also noted that the security situation in some provinces has hampered interaction between U.S.-led Provincial Reconstruction Teams and provincial leaders. Moreover, the UN reported that the lack of security has hampered reconstruction efforts. The UN reported that the diplomatic community remains under serious threat as embassy staff have been abducted and killed and facilities attacked. The UN noted that improved security is central to the normal ability of international agencies to provide assistance to the government and people of Iraq. As we reported in July 2006, the poor security conditions have also hindered U.S. and Iraqi government efforts to revitalize Iraq’s economy and restore essential services in the oil and electricity sectors. According to a State Department report, during the week of August 16-22, 2006, Iraq was producing 2.17 million barrels of oil per day. This figure is below the Iraqi Oil Ministry’s goal of 2.5 million barrels of oil per day and the pre-war level of 2.6 million barrels per day. Over the same time period, electricity availability averaged 5.9 hours per day in Baghdad and 10.7 hours nationwide. Electricity output for the week was about 9 percent above the same period in 2005. U.S. officials report that major oil pipelines continue to be sabotaged, shutting down oil exports and resulting in lost revenues. Current U.S. assistance is focused on strengthening the Strategic Infrastructure Battalions, which are Ministry of Defense forces that protect oil fields and pipelines. Major electrical transmission lines have also been repeatedly sabotaged, cutting power to parts of the country. Security conditions in Iraq have, in part, led to project delays and increased costs for security services. 
Although it is difficult to quantify the costs and delays resulting from poor security conditions, both agency and contractor officials acknowledged that security costs have diverted a considerable amount of reconstruction resources and have led to canceling or reducing the scope of some reconstruction projects. Although the Sunni insurgency has remained strong and resilient, the presence and influence of Shi’a militias have grown and led to increased sectarian violence. According to a July 2006 State Department report, the Sunni insurgency remains a pressing problem in Iraq. However, in recent months, Shi’a militia groups have grown more prominent and threaten Iraq’s stability. The increase in sectarian violence has led to an increasing number of Iraqis fleeing their homes. According to the U.S. Ambassador to Iraq, the demobilization of Shi’a militias requires a corresponding reduction in the Sunni insurgency. Despite coalition efforts and the efforts of the newly formed Iraqi government, insurgents continue to demonstrate the ability to recruit new fighters, supply themselves, and attack coalition and Iraqi security forces. According to a July 2006 State Department report, the Sunni insurgency remains a pressing problem in Iraq, even after the death of Abu Musab al Zarqawi, the leader of al-Qaeda in Iraq, in early June 2006. As DOD recently reported, al-Qaeda in Iraq remains able to conduct operations due to its resilient, semi-autonomous cellular structure of command and control. The Sunni insurgency consists of former Baathists, whose goal is to return to power; terrorist groups such as al-Qaeda in Iraq, its affiliates in the Mujahadeen Shura Council, and Ansar al Sunna; and various other groups that rely on violence to achieve their objectives. Sunni insurgents have no distinct leader but share the goal of destabilizing the Iraqi government to pursue their individual and, at times, conflicting goals. 
Although these groups have divergent goals, some collaborate at the tactical and operational levels. DOD has reported that the relationships among insurgents, terrorists, and criminal opportunists are blurred at times but that the ideological rifts between terrorists and other resistance groups remain. DOD also reports that many insurgent groups employ a dual-track political and military strategy to subvert emerging institutions and to infiltrate and co-opt security and political organizations. These groups attempt to leverage the political process to address their core concerns and demands while attacking coalition and Iraqi security forces. The presence and influence of Shi’a militia groups have grown in recent months, as they have become more prominent and acted in ways that threaten Iraq’s stability. According to the CENTCOM Commander, as of early August 2006, these militias are the largest contributors to sectarian violence in Iraq. As DOD reported in August 2006, the threat posed by Shi’a militias is growing and represents a significant challenge for the Iraqi government. The Shi’a militias that are affecting the security situation the most are the Mahdi Army and the Badr Organization. Mahdi Army: Led by radical Shi’a cleric Muqtada al-Sadr, this group was responsible for attacks against the coalition and two uprisings in April 2004 and August 2004. The militia committed abuses against Sunni civilians, which have exacerbated sectarian tensions, and was implicated in unrest following the February bombing in Samarra. Evidence exists that the Mahdi Army is supplied by sources outside Iraq, most notably Iran. As of June 2006, Sadr followers headed four of Iraq’s 40 ministries—the ministries of health, transportation, agriculture, and tourism and antiquities. As DOD recently reported, this militia has popular support in Baghdad and Iraq’s southern provinces and is tolerated by elements in the Iraqi government. 
Badr Organization: This Shi’a militia group is the paramilitary wing of the Supreme Council for the Islamic Revolution in Iraq, a prominent political party in the new government. The party was founded in Iran during the Iran-Iraq war and retains strong ties to Iran. According to DOD, the Badr Organization received financial and material support from Iran, and individuals from Badr have been implicated in death squads. The Supreme Council for the Islamic Revolution in Iraq is one of the two largest Shi’a parties in parliament. One of Iraq’s two deputy presidents and the Minister of Finance are party members. According to the CENTCOM Commander, Shi’a militias must be controlled because they are nonstate actors that have the attributes of the state, yet bear no responsibility for their actions. In many cases, according to DOD, militias provide protection for people and religious sites, sometimes operating in conjunction with the Iraqi police in areas where the Iraqi police are perceived to provide inadequate support. According to a May 2006 DOD report, Shi’a militias seek to place members into army and police units as a way to serve their interests. This is particularly evident in the Shi’a dominated south where militia members have hindered the implementation of law enforcement. Militia leaders also influence the political process through intimidation and hope to gain influence with the Iraqi people through politically based social welfare programs. In areas where they provide social services and contribute to local security, they operate openly and with popular support. According to the Director of National Intelligence, Iran provides guidance and training to select Iraqi Shi’a political groups and provides weapons and training to Shi’a militant groups to enable anticoalition attacks. 
Iran also has contributed to the increasing lethality and effectiveness of anticoalition attacks by enabling Shi’a militants to build improvised explosive devices with explosively formed projectiles, similar to those developed by Lebanese Hezbollah. Iranian support for Shi’a militias reinforces Sunni fears of Iranian domination, further elevating sectarian violence. According to the August 2006 DOD report, Sunni Arabs do not have formally organized militias. Instead, they rely on neighborhood watches, Sunni insurgents, and increasingly, al-Qaeda in Iraq. The rise of sectarian attacks is driving some Sunni and Shi’a civilians in Baghdad and in ethnically mixed provinces to support militias. Such support is likely to continue, according to DOD’s report, in areas where the population perceives Iraqi institutions and forces as unable to provide essential services or meet security requirements. According to DOD’s August 2006 report, rising sectarian strife defines the emerging nature of violence in mid-2006, with the core conflict in Iraq now a struggle between Sunni and Shi’a extremists seeking to control key areas in Baghdad, create or protect sectarian enclaves, divert economic resources, and impose their own respective political and religious agendas. The UN reported in March 2006 that the deteriorating security situation is evidenced by increased levels of sectarian strife and the sectarian nature of the violence, particularly in ethnically mixed areas. Figure 3 shows the ethnic distribution of the population in Iraq. Baghdad, Kirkuk, Mosul, and southwest of Basra are key ethnically mixed areas. In June 2006, the UN reported that much of the sectarian violence has been committed by both sides of the Sunni-Shi’a sectarian divide and has resulted in increased civilian deaths. The UN reported that the number of Iraqi civilian casualties continues to increase, with a total of about 14,300 civilians killed in Iraq from January to June 2006. 
The overwhelming majority of casualties were reported in Baghdad, according to the report. Specifically targeted groups included prominent Sunni and Shi’a Iraqis, government workers and their families, members of the middle class (such as merchants and academics), people working for or associated with MNF-I, and Christians. According to the UN, daily reports of intercommunal intimidation and murder include regular incidents of bodies of Sunni and Shi’a men found to be tortured and summarily executed in Baghdad and its surrounding areas. Violence against Kurds and Arabs has also been reported in Kirkuk, while the abduction and intimidation of ordinary Iraqis is a growing problem. According to the report, repeated bombings against civilians, mosques, and more recently against churches are creating fear, animosity, and feelings of revenge within Iraq’s sectarian communities. Moreover, according to a July 2006 UN report, the increase in sectarian violence has resulted in a growing number of Iraqis fleeing their homes. The UN estimated that about 150,000 individuals had been displaced as of June 30, 2006. The UN reported that people left their community of origin primarily because of direct or indirect threats against them or attacks on family members and their community. According to the report, displaced persons are vulnerable, lack many basic rights, and compete for limited services. This in turn can increase intercommunal animosities and can generate further displacement. Although U.S. and UN officials recognize the importance of demobilizing the militias, the U.S. Ambassador to Iraq has stated that the demobilization of the Shi’a militias depends on a reduction in the Sunni insurgency. According to the Ambassador, a comprehensive plan for demobilizing all the militias and reintegrating them into Iraqi society is needed to ensure Iraq’s stability and success. 
However, the Sunni insurgent groups now see themselves as protectors of the Sunni community, and the Shi’a militias see themselves as protectors of the Shi’a community. As DOD reported in August 2006, Sunni and Shi’a extremists are locked in mutually reinforcing cycles of sectarian strife, with each portraying themselves as the defenders of their respective sectarian groups. DOD and State report progress in developing capable Iraqi security forces and transferring security responsibilities to them and the Iraqi government in three key areas: (1) the number of trained and equipped forces, (2) the number of Iraqi army units and provincial governments that have assumed responsibility for security of specific geographic areas, and (3) the assessed capabilities of operational units, as reported in aggregate Transition Readiness Assessment (TRA) reports. While all three provide some information on the development of Iraqi security forces, they do not provide detailed information on specific capabilities that affect individual units’ readiness levels. Unit-level TRA reports provide that information. We are currently working with DOD to obtain these reports because they would more fully inform both GAO and the Congress on the capabilities and needs of Iraq’s security forces. DOD and State have reported progress toward the current goal of training and equipping about 325,000 Iraqi security forces by December 2006. As shown in table 1, the State Department reports that the number of trained army and police forces has increased from about 174,000 in July 2005 to about 294,000 as of August 2006. According to State, the Ministries of Defense and Interior are on track to complete the initial training and equipping of all their authorized end-strength forces by the end of 2006. The authorized end-strength is 137,000 military personnel in the Ministry of Defense and about 188,000 in Ministry of Interior police and other forces. 
However, as we previously reported, the number of trained and equipped security forces does not provide a complete picture of their capabilities and may overstate the number of forces on duty. For example, Ministry of Interior data include police who are absent without leave. Ministry of Defense data exclude absent military personnel. In spring 2005, MNF-I recognized that the number of trained and equipped forces did not reflect their capability to assume responsibility for security. MNF-I began to develop and refine the TRA system as a means of assessing the capabilities of Iraqi security forces. It also started a program to place transition teams with Iraqi army and special police units. DOD also assesses progress in the number of Iraqi army units and provincial governments that have assumed responsibility for the security of specific geographic areas in Iraq. The joint MNF-I/U.S. Embassy Campaign Plan calls for the Iraqi army to assume the lead for counterinsurgency operations in specific geographic areas and Iraqi civil authorities to assume security responsibility for their provinces. The transition of security responsibilities concludes when the Iraq government assumes responsibility for security throughout Iraq. As shown in table 2, DOD reports that an increasing number of Iraqi army units are capable of leading counterinsurgency operations in specific geographic areas. DOD reports more detailed information on this transition in a classified format. However, when an Iraqi army unit assumes the lead, it does not mean that the unit is capable of conducting independent operations since it may need to develop additional capabilities and may require the support of coalition forces. 
According to DOD’s May 2006 report, it will take time before a substantial number of Iraqi units are assessed as fully independent and requiring no assistance considering the need for further development of Iraqi logistical elements, ministry capacity and capability, intelligence structures, and command and control. Table 2 also shows that one provincial government—Muthanna—had assumed responsibility for security operations, as of August 2006. According to a July 2006 State Department report, when a provincial government can assume security responsibilities depends on the (1) threat level in the province, (2) capabilities of the Iraqi security forces, (3) capabilities of the provincial government, and (4) posture of MNF-I forces, that is, MNF-I’s ability to respond to major threats, if needed. Once the provincial government assumes security responsibilities, the provincial governor and police are in charge of domestic security. According to an MNF-I official, MNF-I forces will then move out of all urban areas and assume a supporting role. In August 2006, DOD reported that security responsibility for as many as nine of Iraq’s provinces could transition to Iraqi government authority by the end of 2006. DOD has provided GAO with aggregate information on the overall TRA levels for Iraqi security forces and the number of Iraqi units in the lead for counterinsurgency operations. DOD’s aggregate data on the capabilities and readiness of Iraqi security forces do not provide information on shortfalls in personnel, command and control, equipment, and leadership. Unit-level TRA reports provide more insight into Iraqi army capabilities and development needs in personnel, leadership, and logistics than do the overall TRA levels that DOD reports in classified format. The TRA rating for individual Iraqi army units is a key factor in determining the ability of the unit to conduct and assume the lead for counterinsurgency operations. 
According to Multinational Corps-Iraq (MNC-I) guidance, the TRA is intended to provide commanders with a method to consistently evaluate Iraqi units, as well as to identify factors hindering progress, determine resource issues, make resource allocation decisions, and determine when Iraqi army units are prepared to assume the lead for security responsibilities. The TRA is prepared jointly on a monthly basis by the unit’s military transition team chief and Iraqi security forces commander. In completing TRA reports, commanders assess the unit’s capabilities in six subcategories—personnel, command and control, training, sustainment/logistics, equipment, and leadership (see app. 1). After considering the unit’s subcategory ratings, commanders then give each Iraqi army unit an overall TRA rating that describes the unit’s overall readiness to assume the lead for counterinsurgency operations. The overall ratings go from TRA level 1 through TRA level 4. To be able to assume the lead for counterinsurgency operations, Iraqi army units are required to obtain an overall rating of TRA level 2 as assessed by their commanders. Commanders also provide a narrative assessment that describes key shortfalls and impediments to the unit’s ability to assume the lead for counterinsurgency operations and estimate the number of months needed for the unit to assume the lead. The purpose of the narrative is to clarify and provide additional support for the overall TRA rating. The aggregate data on overall TRA ratings for Iraqi security forces are classified. DOD has provided us with classified data on the aggregate number of Iraqi units at each TRA level and more detailed information on which Iraqi army units have assumed the lead for counterinsurgency operations. We are currently working with DOD to obtain the unit-level TRA reports. These unit-level reports would provide GAO and Congress with more complete information on the status of developing effective Iraqi security forces. 
Specifically, unit-level TRA reports would allow us to (1) determine if the TRA reports are useful and if changes are needed; (2) verify if aggregate data on overall TRA ratings reflect unit-level TRA reports; and (3) determine if shortfalls exist in key areas, such as personnel, equipment, logistics, training, and leadership.

1. What are the key political, economic, and security conditions that must be achieved before U.S. forces can draw down and ultimately withdraw from Iraq? What target dates, if any, has the administration established for drawing down U.S. forces?

2. The continued deterioration of security conditions in Iraq has hindered U.S. political and economic efforts in Iraq. According to the State Department, a baseline of security is a prerequisite for moving forward on the political and economic tasks essential to achieving the right conditions for withdrawing U.S. forces. Why have security conditions continued to deteriorate in Iraq even as the country has met political milestones, increased the number of trained and equipped security forces, and increasingly assumed the lead for security? What is the baseline of security that is required for moving forward on political and economic tasks? What progress, if any, can be made in the political and economic areas without a significant improvement in current security conditions? If existing U.S. political, economic, and security measures are not reducing violence in Iraq, what additional measures, if any, will the administration propose for stemming the violence?

3. In February 2006, the Director of National Intelligence reported that Iraqi security forces were experiencing difficulty in managing ethnic and sectarian divisions among their units and personnel. The DIA Director reported that many elements of the Iraqi security forces are loyal to sectarian and party interests. How does the U.S. government assess the extent to which personnel in the Iraqi security forces are loyal to groups other than the Iraqi government or are operating along sectarian lines, rather than as unified national forces? What do these assessments show? How would DOD modify its program to train and equip Iraqi security forces if evidence emerges that Iraqi military and police are supporting sectarian rather than national interests?

4. MNF-I established the TRA system to assess the capabilities and readiness of Iraqi security forces. How does DOD assess the reliability of TRAs and ensure that they present an accurate picture of Iraq security forces’ capabilities and readiness? At what TRA rating level would Iraqi army units not require any U.S. military support? What U.S. military support would Iraqi units still require at TRA levels 1 and 2? How does DOD use unit-level TRAs to assess shortfalls in Iraqi capabilities? What do DOD assessments show about the developmental needs of Iraqi security forces?

5. In late August 2006, the MNF-I Commanding General said that the United States is helping Iraq build a force to deal with its current security threats of international terrorism and insurgency. However, he noted that the Iraqi government is developing a long-term security plan to shape the type of armed forces the country will need 5 to 10 years from now. What are the current resource requirements for developing Iraqi security forces capable of dealing with international terrorism and insurgency? What have been the U.S. and Iraqi financial contributions to this effort thus far? What U.S. and Iraqi contributions will be needed over the next several years? What are the projected resource requirements for the future Iraqi force? What are the projected U.S. and Iraqi financial contributions for this effort?

For further information, please contact Joseph A. Christoff on (202) 512-8979. Key contributors to this testimony were Nanette J. 
Barton, Lynn Cothern, Tracey Cross, Martin De Alteriis, Whitney Havens, Brent Helt, Rhonda Horried, Judith McCloskey, Mary Moutsos, Jason Pogacnik, and Jena Sinkfield. This appendix provides information on the TRA reports used to assess the capabilities of Iraqi army units. Commanders provide ratings in each of 6 subcategories (see fig. 4). For each subcategory, a green rating corresponds to TRA level 1, yellow to TRA level 2, orange to TRA level 3, and red to TRA level 4. The commanders consider the subcategory ratings in deciding the overall TRA rating for each unit.
From fiscal years 2003 through 2006, U.S. government agencies have reported significant costs for U.S. stabilization and reconstruction efforts in Iraq. In addition, the United States currently has about 138,000 military personnel committed to the U.S.-led Multinational Force in Iraq (MNF-I). Over the past 3 years, worsening security conditions have made it difficult for the United States to achieve its goals in Iraq. In this statement, we discuss (1) the trends in the security environment in Iraq, and (2) progress in developing Iraqi security forces, as reported by the Departments of Defense (DOD) and State. We also present key questions for congressional oversight: What political, economic, and security conditions must be achieved before the United States can draw down and withdraw its forces? Why have security conditions continued to deteriorate even as Iraq has met political milestones, increased the number of trained and equipped forces, and increasingly assumed the lead for security? If existing U.S. political, economic, and security measures are not reducing violence in Iraq, what additional measures, if any, will the administration propose for stemming the violence? Since June 2003, the overall security conditions in Iraq have deteriorated and grown more complex, as evidenced by increased numbers of attacks and Sunni/Shi'a sectarian strife, which has grown since the February 2006 bombing in Samarra. As shown in the figure below, attacks against the coalition and its Iraqi partners reached an all-time high during July 2006. The deteriorating conditions threaten the progress of U.S. and international efforts to assist Iraq in the political and economic areas. In July 2006, the State Department reported that the recent upturn in violence has hindered efforts to engage with Iraqi partners and noted that a certain level of security was a prerequisite to accomplishing the political and economic conditions necessary for U.S. withdrawal.
Moreover, the Sunni insurgency and Shi'a militias have contributed to growing sectarian strife that has resulted in increased numbers of Iraqi civilian deaths and displaced individuals. DOD uses three factors to measure progress in developing capable Iraqi security forces and transferring security responsibilities to the Iraqi government: (1) the number of trained and equipped forces, (2) the number of Iraqi army units and provincial governments that have assumed responsibility for security in specific geographic areas, and (3) the capabilities of operational units, as reported in unit-level and aggregate Transition Readiness Assessments (TRA). Although the State Department reported that the number of trained and equipped Iraqi security forces has increased, these numbers do not address their capabilities. As of August 2006, 115 Iraqi army units had assumed the lead for counterinsurgency operations in specific areas, and one province had assumed control for security. Unit-level TRA reports provide insight into the Iraqi army units' training, equipment, and logistical capabilities. GAO is working with DOD to obtain the unit-level TRA reports. Such information would inform the Congress on the capabilities and needs of Iraq's security forces.
AGOA provides eligible SSA countries duty-free access to U.S. markets for more than 6,000 dutiable items in the U.S. import tariff schedules. SSA countries are defined in Section 107 of AGOA as the 49 sub-Saharan African countries potentially eligible for AGOA benefits listed in that provision. As a trade preference program, AGOA supports economic development in sub-Saharan Africa through trade and investment and encourages increased trade and investment between the United States and SSA countries as well as intra-SSA trade. In addition, AGOA benefits may lead to improved access to U.S. credit and technical assistance, according to the Department of Commerce’s website and officials from the Departments of Commerce and Labor. AGOA authorizes the President each year to designate an SSA country as eligible for AGOA trade preferences if the President determines that the country has met or is making continual progress toward meeting AGOA’s eligibility criteria, among other requirements. For the purposes of this report, we have organized the act’s eligibility criteria into three broad reform objectives: economic, political, and development (see table 1). In addition, the act requires that an SSA country be eligible for the Generalized System of Preferences (GSP) in order to be considered for AGOA benefits. The U.S. government’s African Growth and Opportunity Act Implementation Guide states that an SSA country must also officially request to be considered for AGOA benefits. Over the lifetime of AGOA, 47 of the 49 SSA countries listed in the act have requested consideration for AGOA eligibility, according to USTR officials. Figure 1 shows a map of Africa that identifies the 39 SSA countries that were eligible for AGOA benefits and the 10 SSA countries that were ineligible for AGOA benefits as of January 1, 2015. The U.S. 
government uses the annual eligibility review process and forum mandated by AGOA to engage with sub-Saharan African countries on their progress toward economic, political, and development reform objectives reflected in AGOA’s eligibility criteria. USTR manages the annual consensus-based review process, which begins by collecting information from the public and other agencies of the AGOA Implementation Subcommittee. For SSA countries experiencing difficulty meeting one or more eligibility criteria, the U.S. government may decide on specific engagement actions to encourage reforms in specific areas. Over the lifetime of AGOA, 13 countries have lost their AGOA eligibility, although 7 countries eventually had their eligibility restored. The U.S. government uses the annual AGOA Forum to further engage with representatives from sub-Saharan Africa on challenges and encourage progress on AGOA’s economic, political, and development reform objectives. The AGOA Implementation Subcommittee of the TPSC conducts the AGOA eligibility review annually to discuss whether a country has established or is making continual progress toward AGOA’s reform objectives and makes consensus-based recommendations on each country’s eligibility. USTR’s Office of African Affairs oversees the implementation of AGOA and chairs the AGOA Implementation Subcommittee. The full TPSC must review and approve the subcommittee’s recommendations. The recommendations are then forwarded to the U.S. Trade Representative for review and approval. Once the recommendations are approved, the U.S. Trade Representative sends the recommendations to the President. The President makes the final decision on AGOA eligibility. The flow diagram in figure 2 provides an overview of the AGOA eligibility review process, organized into three phases: (1) initiation and data collection, (2) development of subcommittee recommendations, and (3) review and approval by the TPSC, USTR, and the President. 
Phase 1: initiation and data collection. Generally, USTR begins the annual eligibility review process in September or October by requesting that the agencies that form the AGOA Implementation Subcommittee— the Departments of Agriculture, Commerce, Labor, State, and the Treasury; USAID; Council of Economic Advisers; and National Security Council—provide information about each country’s progress on reform objectives related to the eligibility criteria. USTR also requests public comments at this time. These agencies generally prepare and submit their reports to USTR by mid-October. State also distributes information collected by overseas staff on progress made by SSA countries on AGOA’s reform objectives to the other members of the subcommittee to help inform the development of their reports. Subcommittee agencies frequently provide in-depth information related to the AGOA eligibility criteria that are most pertinent to their specific mission but may also provide input related to other eligibility criteria. For example, while the Department of Labor’s reports primarily focus on labor issues, its reports on each country may also include information related to other eligibility criteria, such as human rights and the rule of law. (Table 2 identifies the primary focus of each subcommittee agency and the related AGOA reform objectives and corresponding eligibility criteria.) In phase 1 of the eligibility review process, USTR also publishes a notice in the Federal Register requesting public comment on SSA countries eligible to receive AGOA benefits. In 2013, USTR received 11 comments from a range of sources, including SSA governments, SSA private companies, a U.S. industry organization, a private U.S. citizen, a federation of U.S. unions, and a coalition of trade associations. The Federal Register notice and a presidential proclamation that finalizes eligibility decisions are the only components of the eligibility review process that are public. 
Phase 2: development of subcommittee recommendations. USTR compiles the information provided by each subcommittee agency in phase 1 into a paper on each country. These papers also include broad-ranging information that USTR staff provide and any public comments that USTR receives in response to its notice in the Federal Register. USTR distributes the country papers to members of the AGOA Implementation Subcommittee for review and discussion at the subcommittee meeting. Typically, the AGOA Implementation Subcommittee convenes in November to review each country’s progress in establishing or making continual progress toward AGOA’s reform objectives. Usually, over a period that may range from a few days to a few weeks, the subcommittee works through each agency’s priorities and viewpoints on each country’s progress on the eligibility criteria, according to agency officials. The duration of this phase varies depending on how quickly the agencies can reach consensus. Any differences in perspective regarding countries’ progress are discussed and consensus-based recommendations are reached. For example, the U.S. Department of Agriculture has regularly raised concerns about progress on economic reform objectives in certain SSA countries, such as import bans and procedures to control pests and diseases in agricultural products. Countries have received démarches or letters for such issues; however, the subcommittee has not recommended that an SSA country lose its AGOA eligibility because of market access issues, according to USTR officials. Phase 3: review and approval by the TPSC, the U.S. Trade Representative, and the President. The subcommittee’s recommendations are presented to the full TPSC for review and approval. After the TPSC reaches consensus, USTR staff prepare a decision memorandum for the U.S. Trade Representative’s approval. The TPSC, the U.S.
Trade Representative, and the President have the authority to modify the subcommittee’s recommendations, according to agency officials. The U.S. Trade Representative prepares a decision memo with recommendations to the President for approval. Then, generally in December, the President issues a proclamation that implements any changes to SSA countries’ AGOA eligibility status. The proclamation is published in the Federal Register. Regardless of a country’s eligibility status, the U.S. government uses the eligibility review as one of many tools to initiate conversations with SSA countries about economic, political, and development reforms, according to agency officials. The subcommittee reviews each country individually, considering each country’s particular situation, to determine how best to encourage progress toward specific eligibility criteria. The TPSC reviews the subcommittee’s recommendations and makes the ultimate decision on specific actions the U.S. government can take to encourage countries to address particular concerns related to the eligibility criteria. For example, the TPSC may determine that the relevant U.S. ambassador, or other U.S. government official, should meet with appropriate country representatives. Other possible actions include issuing démarches or letters that describe the eligibility criteria concerns and outline actions the country may take to address those concerns. In some cases, the TPSC may recommend specific steps a country should take to maintain or restore its AGOA eligibility. After the TPSC’s concerns are communicated to the country, relevant U.S. government officials manage engagement with the country and report back to the subcommittee on the country’s progress. Although the eligibility review is annual, interim eligibility reviews may be held to gauge the progress countries are making on specific eligibility criteria. 
For example, in October 2011, an interim review reinstated AGOA eligibility for Côte d’Ivoire, Guinea, and Niger. All three countries had lost AGOA eligibility because of undemocratic changes in government and then regained eligibility following free and fair elections. The following example illustrates how the U.S. government uses the eligibility review process to engage with SSA countries on issues related to specific reform objectives: Swaziland was deemed eligible for AGOA in January 2001. However, several years ago, the U.S. government began engaging with Swaziland on concerns related to internationally recognized labor rights through a series of letters and démarches issued by USTR and State. Over the course of several years, Swaziland made some progress on labor issues, but conditions related to labor rights later deteriorated. U.S. government officials met several times with Swaziland officials to discuss steps to improve labor rights, including a USTR-led interagency trip in April 2014. In particular, the officials were concerned that Swaziland had failed to make continual progress in protecting freedom of association and the right to organize. The U.S. officials were also concerned by Swaziland’s use of security forces and arbitrary arrests to stifle peaceful demonstrations, and the lack of legal recognition for labor and employer federations. Despite U.S. efforts to engage with the country’s government, Swaziland failed to make the necessary reforms. In June 2014, an interim review resulted in the President declaring Swaziland ineligible, effective as of January 1, 2015. Over the lifetime of AGOA, 13 SSA countries have lost their AGOA eligibility for not meeting certain eligibility criteria, although 7 of these countries eventually had their AGOA eligibility restored. As of January 1, 2015, the 49 SSA countries fell into four categories based on their history of AGOA eligibility. (App. II provides a list of the SSA countries by eligibility status.) 
Eligibility lost and regained. Seven countries had lost AGOA eligibility at some time in the past but later regained it. Five of the countries experienced coups, one country lost eligibility after its President extended his term in violation of the country’s constitution, and one country lost eligibility because of political unrest and armed conflict. All seven countries had their AGOA beneficiary status restored following a return to democratic rule. (Fig. 3 provides additional information regarding SSA countries that have lost and regained AGOA eligibility.) Eligibility lost and not regained. Six SSA countries have lost and not regained AGOA eligibility. One lost eligibility following a coup; three were deemed ineligible because of concerns about human rights abuses; one lost eligibility because of issues with labor rights; and one country lost eligibility following political violence and armed conflict. (Fig. 4 provides additional information regarding SSA countries that have lost and not regained AGOA eligibility.) Eligibility never lost. About two-thirds of SSA countries (32 of 49) have maintained their AGOA eligibility status since it was first granted. Six of the 32 were not deemed eligible when AGOA was originally enacted in 2000. Although these countries had expressed interest in the AGOA trade preference program, they did not initially satisfy the eligibility criteria but later obtained eligibility for benefits under AGOA at different times. Never eligible. Four SSA countries have not been eligible for AGOA. Somalia and Sudan have not expressed official interest in the AGOA trade preference program, according to agency officials. Zimbabwe and Equatorial Guinea have not been deemed eligible because of concerns related to AGOA’s eligibility criteria. The AGOA Forum is required under AGOA.
Its purpose is to foster close economic ties between the United States and SSA countries; however, the forum also supports AGOA reform objectives by holding sessions that specifically address AGOA eligibility criteria. The AGOA Forum is generally held in alternate years in the United States and sub-Saharan Africa and supports AGOA’s reform objectives by facilitating high-level dialogue between the U.S. and SSA governments. The forum also engages the business community and civil society organizations. Generally, the forum takes place over 2 to 3 days and includes three to eight plenary sessions and several breakout sessions as well as workshops. Speakers are typically high-level U.S. and SSA government officials; however, speakers also include officials representing organizations such as the African Union and the United Nations Economic Commission for Africa. A number of U.S. congressional delegations have also participated in the forum. Civil society and private sector groups such as the Economic Justice Network and the African Cotton and Textile Industries Federation also actively participate in the forums. The theme of the AGOA Forum changes from year to year, but the discussions are centered on strengthening the economic connection between the United States and sub-Saharan Africa. For example, the theme of the December 2003 forum, hosted by the United States, was “Building Trade, Expanding Investment,” and the theme of the August 2013 forum, hosted by the Ethiopian government, was “Sustainable Transformation through Trade and Technology.” The 2014 AGOA Forum consisted of a 1-day ministerial meeting that took place during the first U.S.-Africa Leaders Summit in Washington, D.C. This summit included leaders from SSA countries and other parts of Africa. (Table 3 provides the location and theme of each AGOA Forum from 2001 through 2014.)
Although the annual AGOA Forums are trade-oriented, they also facilitate further engagement between the United States and SSA countries through dialogue about the reform objectives reflected in AGOA eligibility criteria. Throughout the years, AGOA Forum workshops have focused on a number of the eligibility criteria, including good governance, intellectual property rights, health care, and labor rights. For example, at the 2013 AGOA Forum in Addis Ababa, a session co-chaired by Liberian and U.S. senior government officials highlighted the importance of labor rights in achieving economic growth. As another example, breakout sessions at the 2009 and 2011 AGOA Forums focused on the relationship between good governance and the investment environment. During the forums, U.S. and SSA government officials also hold bilateral meetings to discuss specific issues related to AGOA’s reform objectives and eligibility criteria, according to agency officials. AGOA-eligible countries have fared better than ineligible countries on some economic development indicators since AGOA was enacted, according to our analysis of economic data for SSA countries that were eligible and ineligible for AGOA in 2012; however, AGOA’s impact on economic development is difficult to isolate when additional factors are taken into consideration. Other factors—such as the small share of AGOA exports in the overall exports of many AGOA-eligible countries, the role of petroleum exports in recent income growth, the quality of government institutions, and different levels of foreign aid and investment—make it difficult to isolate how much economic development can be attributed to AGOA. For example, AGOA exports are a small share of overall exports for the majority of AGOA-eligible countries, a fact that could limit AGOA’s impact on economic development in these countries. 
We found evidence that increasing energy prices may also have contributed to income growth within AGOA-eligible countries: from 2000 through 2012, the top three AGOA-eligible petroleum-exporting countries had a much higher growth rate for income per person than other AGOA-eligible countries. We also found that AGOA-eligible countries on average had higher governance scores and received more foreign aid and investment compared with ineligible countries. While these differences may have been facilitated by AGOA eligibility, they may also have contributed to economic development in AGOA-eligible countries, a possibility that makes it difficult to isolate AGOA’s impact on economic development. Both before and since AGOA was enacted in 2000, income per person has been higher in AGOA-eligible countries, on average, compared with ineligible countries. The average annual income per person for 37 AGOA-eligible countries was $876 in 2000, prior to AGOA’s implementation, and $1,132 in 2012. The variation in income per person among the eligible countries was large; for example, in 2012, Seychelles had the highest income per person at $14,303 and Burundi had the lowest at $153. For 8 AGOA ineligible countries, average annual income per person was $353 in 2000 and $450 in 2012. Among the ineligible countries, income per person also varied widely. In 2012, Equatorial Guinea had the highest income per person at $14,199, whereas the Democratic Republic of Congo had the lowest at $165. The average annual growth in income per person was slightly higher in AGOA-eligible countries: eligible countries’ income per person on average grew 2.2 percent per year from 2000 to 2012, compared with 2.1 percent per year in ineligible countries. Figure 5 shows trends in annual income per person from the enactment of AGOA through 2012, for eligible and ineligible countries. (For additional details on each country’s annual income per person before and after AGOA, see app. III.) 
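The per-person income comparison above can be sanity-checked with a simple compound-annual-growth-rate calculation on the reported endpoint figures. The following is an illustrative sketch only, not GAO's method: the report's averages are computed across countries (population-weighted), so the endpoint CAGR only approximates the reported 2.2 and 2.1 percent figures.

```python
# Illustrative check of the growth figures reported above: the compound
# annual growth rate (CAGR) implied by the endpoint incomes per person.
# Figures ($876 -> $1,132 and $353 -> $450 over 12 years) are taken from
# the report; the calculation itself is an approximation, not GAO's
# population-weighted methodology.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

eligible = cagr(876, 1132, 12)    # AGOA-eligible countries, 2000-2012
ineligible = cagr(353, 450, 12)   # ineligible countries, 2000-2012

print(f"eligible:   {eligible:.1%} per year")    # ~2.2% per year
print(f"ineligible: {ineligible:.1%} per year")  # ~2.0% per year
```

The endpoint calculation reproduces the reported 2.2 percent for eligible countries and comes close to the reported 2.1 percent for ineligible countries; the small gap reflects the difference between an endpoint CAGR and a weighted average of yearly growth rates.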
Exports under AGOA have accounted for a small proportion of exports for most AGOA-eligible countries. Our analysis shows that in 2013 AGOA exports accounted for less than 0.5 percent of overall exports for the majority of countries—for these countries, the small proportion of AGOA exports in their overall exports could limit AGOA’s impact on economic development. Figure 6 shows the number of AGOA-eligible countries in 2013 separated into categories based on the level of their exports under AGOA, as a share of their overall exports. For example, 4 of the AGOA-eligible countries had no AGOA exports at all in 2013, and in the same year, AGOA accounted for less than 5 percent of overall exports for 26 other AGOA-eligible countries. In 2013, AGOA accounted for more than half of overall exports for only 1 country, Chad, a top petroleum exporter among AGOA-eligible countries. While AGOA-eligible countries have had higher income per person than ineligible countries, the fastest growth in income per person has been concentrated in a few petroleum-exporting AGOA-eligible countries. From 2001 to 2013, petroleum products accounted for over 80 percent of U.S. imports under AGOA. Among AGOA-eligible countries, we identified Nigeria, Angola, and Chad as the top three petroleum exporting countries based on trade data in 2013. These countries collectively accounted for 90 percent of all petroleum exports to the United States under AGOA in 2013. When we separated out these countries in our analysis, we found that from 2001 through 2012 the top three AGOA-eligible petroleum exporting countries as a group had, on average, slightly lower levels of annual income per person compared with all other AGOA-eligible countries considered as a group: $960 versus $1,026. 
However, figure 7 shows that from 2000 through 2012, these top three petroleum exporters had a much higher average annual growth rate as measured in income per person compared with the other AGOA-eligible countries: 4.5 percent per year versus 1.4 percent per year. The difference in income-per-person growth between the top three petroleum exporters and the other AGOA-eligible countries can be explained partly by rising energy prices. From 2000 through 2012, global prices for petroleum increased by 272 percent. Prior to AGOA’s implementation in 2000, the group of SSA countries eligible for AGOA benefits in 2012 had higher governance scores than ineligible countries. Academic studies have found a positive relationship between the quality of governance institutions and economic growth. Therefore, gains in economic growth since 2000 among AGOA-eligible countries may have been driven to some degree by governance that was more conducive to economic development. We analyzed two measures of institutional quality from the Worldwide Governance Indicators that capture some aspects of the security of private property, namely scores for the rule of law and political stability. We found that AGOA-eligible countries had substantially higher scores on both rule of law and political stability in 2000 than countries that were not eligible for AGOA (see fig. 8). Pre-existing differences in institutional quality scores could explain in part why AGOA-eligible countries on average had higher annual income per person and slightly higher growth in annual income per person after the implementation of AGOA. According to our analysis of the AGOA eligibility review process, given that governance is considered in the annual AGOA eligibility review, AGOA-eligible countries may also have benefited from an ongoing incentive to sustain or improve the quality of their governance institutions.
Figure 8 shows that the differences in governance scores between eligible and ineligible countries in 2012 were similar to those in 2000. These persistent differences in the quality of governance institutions could also have contributed to the differences in economic growth between AGOA-eligible countries and ineligible countries after the implementation of AGOA. AGOA-eligible countries on average have received more foreign aid per person and higher foreign direct investment (FDI) than ineligible countries since the implementation of AGOA. The different levels of foreign aid and FDI, which could play a role in economic development and poverty reduction, also may have contributed to the differences in income per person between AGOA-eligible countries and ineligible countries that we observed. Moreover, according to our analysis of aid and investment flows to SSA countries (below), being eligible for AGOA may have improved the ability of countries to attract aid and investment. Our analysis shows that on average AGOA-eligible countries received more foreign aid per person than ineligible countries. We analyzed data on country programmable aid (CPA) from the Organisation for Economic Co-operation and Development (OECD). According to the OECD, CPA captures the main cross-border aid flows to recipient countries and excludes some forms of official development assistance that are neither fully transparent to, nor manageable by, recipient countries, including humanitarian aid in response to crises and natural disasters, and debt relief provided by donor nations. The United States allocated an estimated $7.04 billion in U.S. bilateral aid to Africa in fiscal year 2014. The aid was intended to help SSA countries in areas including health; climate change; food security; and, more recently, power. From 2000 to 2012, AGOA-eligible countries received more than twice as much aid per person on average than ineligible countries (see fig. 9). 
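The aid-per-person comparison described above reduces to aggregating aid and population within each eligibility group. The sketch below uses hypothetical country figures, not the OECD CPA data GAO analyzed, to show the shape of the calculation.

```python
# Minimal sketch of a group aid-per-person comparison. Country names and
# figures are hypothetical placeholders, not OECD CPA data.

countries = {
    # name: (aid in millions of USD, population in millions, AGOA-eligible?)
    "Country A": (500.0, 10.0, True),
    "Country B": (300.0, 25.0, True),
    "Country C": (100.0, 20.0, False),
    "Country D": (150.0, 60.0, False),
}

def avg_aid_per_person(eligible_flag):
    """Aggregate aid divided by aggregate population for one group
    (equivalent to a population-weighted mean of per-person aid)."""
    group = [(aid, pop) for aid, pop, e in countries.values() if e == eligible_flag]
    total_aid = sum(aid for aid, _ in group)
    total_pop = sum(pop for _, pop in group)
    return total_aid / total_pop

print(f"eligible:   ${avg_aid_per_person(True):.2f} per person")
print(f"ineligible: ${avg_aid_per_person(False):.2f} per person")
```

Dividing aggregate aid by aggregate population, rather than averaging each country's per-person figure, keeps small countries from dominating the group comparison.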
AGOA-eligible countries on average also received more FDI than ineligible countries. According to a 2014 U.S. International Trade Commission report, global inflows from FDI into SSA countries increased almost sixfold between 2000 and 2012. We analyzed FDI as a share of a country’s gross domestic product (GDP) to take into consideration the size of the country’s economy. From 2001 to 2013, the amount of FDI each SSA country received relative to the size of its overall economy varied considerably. For example, among SSA countries that were net recipients of FDI in 2013, Burundi received FDI amounting to less than half a percent of its GDP (the lowest in sub-Saharan Africa), while Liberia received FDI amounting to about 57 percent of its GDP (the highest in sub-Saharan Africa). From 2001 through 2013, AGOA-eligible countries received FDI that on average amounted to about 5.6 percent of GDP, while ineligible countries averaged about 2.7 percent. (See fig. 10.) Being eligible for AGOA may help a country attract aid and investment. For example, AGOA eligibility can be seen as a signal of a relatively stable political environment as well as advantages in tariff treatment for certain products. According to a recent report by the U.S. International Trade Commission, AGOA has signaled improvements in the business and investment climate in SSA countries, and has contributed to increasing FDI flows to these countries. Additionally, the International Monetary Fund reported in June 2014 that in Swaziland uncertain prospects for AGOA eligibility could affect investment and employment in the textile sector. Similarly, Ethiopian government officials in the Ministry of Trade said that AGOA has helped to attract foreign direct investment to Ethiopia. Our analysis of factors contributing to economic development in SSA countries and review of academic literature suggest that isolating AGOA’s impact on overall economic development is difficult. 
We found that on average, AGOA-eligible countries have had higher annual income per person and slightly higher growth rates in annual income per person than ineligible countries; we also found evidence suggesting that AGOA eligibility might be associated with other factors that also can positively affect development. For example, our review of academic literature indicated that increased FDI could enhance countries’ economic growth, and our analysis demonstrated that on average AGOA-eligible countries receive more FDI inflows relative to the size of their economies. We are not making any recommendations in this report. We provided a draft of this report for comment to the Departments of Agriculture, Commerce, Labor, State, and the Treasury; USAID; and the Office of the U.S. Trade Representative (USTR). The Departments of Labor, State, the Treasury, and USTR provided technical comments, which we have incorporated in the report, as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretaries of Agriculture, Commerce, Homeland Security, Labor, State, and the Treasury; the Administrator of USAID; and the U.S. Trade Representative. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to examine (1) how the African Growth and Opportunity Act (AGOA) eligibility review process has considered and the AGOA Forums have supported economic, political, and development reform objectives described in the act and (2) how sub-Saharan African (SSA) countries have fared in certain economic development outcomes since the enactment of AGOA. To examine how the U.S. 
government’s process for determining AGOA eligibility and the AGOA Forums have supported reform objectives established in sections 104 and 105 of the act, we reviewed the AGOA legislation and documents from the seven U.S. agencies relevant to the AGOA eligibility criteria. We analyzed the AGOA eligibility status of SSA countries and the implementation of AGOA Forum activities since AGOA’s original enactment to identify changes in eligibility from 2000 through January 2015. We also attended and observed 2014 AGOA Forum events. To address both objectives, we interviewed officials from the Departments of Agriculture, Commerce, Labor, State, and the Treasury; the U.S. Agency for International Development; and the Office of the U.S. Trade Representative (USTR), all of which are the members of the Trade Policy Staff Committee’s AGOA Implementation Subcommittee that generally prepare the sub-Saharan Africa country reports for the annual eligibility review. To examine the relationship between AGOA eligibility and economic development in sub-Saharan Africa, we analyzed data on gross domestic product (GDP) per capita and total population from the World Bank World Development Indicators. We used data from the April 2014 version of the World Development Indicators. We compared population-weighted average GDP per capita at the end of 2012 for AGOA-eligible countries versus ineligible countries, as well as for the top three AGOA-eligible petroleum exporting countries versus other AGOA-eligible countries. We also compared average annual growth rates in annual income per person from 2000 to 2012 for AGOA-eligible versus ineligible countries, as well as for the top three AGOA-eligible petroleum exporting countries versus other AGOA-eligible countries. To study sub-Saharan African countries’ exports under AGOA as a share of total exports as well as the value of petroleum exports to the United States under AGOA, we used U.S. 
Census trade data on imports by trading partners and imports by product from 2013. We used data on countries' total exports from the International Monetary Fund's Direction of Trade Statistics and International Financial Statistics databases. We calculated AGOA-eligible countries' shares of AGOA and Generalized System of Preferences (GSP) exports in their overall exports to study how the value of exports under these trade preference systems compared with the value of overall exports for AGOA-eligible countries in 2013. To study differences in the quality of governance institutions between AGOA-eligible and ineligible countries, we analyzed data on governance from the World Bank Worldwide Governance Indicators, comparing average scores for Political Stability and Rule of Law in 2000 and 2012 between AGOA-eligible and ineligible countries. To describe the differences in the amount of foreign development assistance and foreign direct investment received by AGOA-eligible and ineligible countries, we used data on country programmable aid from the Organisation for Economic Co-operation and Development (OECD) and foreign direct investment (FDI) as a percentage of GDP from the World Development Indicators. We compared yearly averages of aid per capita (from 2000 to 2012) and net FDI inflows as a percentage of GDP (from 2001 to 2013) between AGOA-eligible and ineligible countries. To assess the reliability of these data, we reviewed publicly available documents on these databases and conducted electronic testing for missing values and outliers. We determined that the data were sufficiently reliable for our purposes. We also reviewed a judgmental sample of peer-reviewed academic literature related to economic development, foreign direct investment, foreign aid, and the impact of trade preference programs. Country classifications.
For most of the analysis, we defined AGOA-eligible countries as the 40 SSA countries that were deemed eligible for AGOA benefits as of the end of 2012. Nine SSA countries were ineligible for AGOA benefits as of the end of 2012. We chose 2012 as the base year for this classification because it was the latest year for which data on GDP per capita were available for the SSA countries in the April 2014 version of the World Bank World Development Indicators. The only exception is that for the analysis of the exports under AGOA as a share of total exports from AGOA-eligible countries, we defined AGOA-eligible countries as the 39 countries that were deemed eligible for AGOA benefits as of 2013 because we analyzed 2013 trade statistics. GDP per capita. To study differences and depict trends in income per person between selected groupings of countries, we used the World Development Indicators annual GDP per capita series, expressed in year 2005 U.S. dollars. Thirty-seven out of 40 countries eligible for AGOA benefits in 2012, and 8 out of 9 ineligible countries, reported complete GDP per capita data from 2000 through 2012. Djibouti, São Tomé and Principe, and South Sudan were excluded from the AGOA-eligible group because of missing data. Somalia was excluded from the ineligible group due to missing data. Within each country grouping, we took the weighted average of countries' GDP per capita, where the weights are given by the share of a country's population in the overall group's population. The weighted average GDP per capita is a measure of the yearly income of the average individual in the country group. Equation (1) shows that the weighted average GDP per capita is equivalent to summing up the GDP of every country in the group and dividing by the total group population:

\bar{y} = \sum_{i=1}^{n} \left( \frac{L_i}{\sum_{j=1}^{n} L_j} \right) \frac{y_i}{L_i} = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} L_i}   (1)

where n denotes the number of countries in the group, y_i refers to the gross domestic product of country i, and L_i refers to the population of country i. AGOA export share.
To examine the magnitude of AGOA exports relative to the total exports of each AGOA-eligible country, we used U.S. Census data on imports by trading partners. We calculated the value of imports under AGOA (i.e., imports that received duty-free access claiming AGOA preference benefits) and imports that received duty-free access under GSP. Since AGOA was established as a program for SSA countries that builds on GSP, we analyzed exports from AGOA-eligible countries to the United States under both programs together. AGOA countries continue to have duty-free access to the commodities covered under the GSP, although that program expired in 2013. We computed the AGOA (including GSP) share of exports relative to total exports for each AGOA-eligible country in 2013, and graphically tabulated countries according to their AGOA export share. In this analysis, both the exports data and the definition of AGOA eligibility are from 2013. We used data from two International Monetary Fund databases, Direction of Trade and International Financial Statistics, to determine total exports for each country. Top three AGOA-eligible petroleum exporters. The top three AGOA-eligible petroleum exporters were Nigeria, Angola, and Chad, which collectively accounted for 90 percent of all petroleum exports to the United States under AGOA in 2013, based on U.S. Census data on AGOA imports by product. Since AGOA was established as a program for SSA countries that builds on GSP, the 90 percent statistic refers to exports of petroleum from AGOA-eligible countries to the United States under both programs together. AGOA countries continue to have duty-free access to the commodities covered under the GSP, although that program expired in 2013. AGOA-eligible countries minus the top petroleum exporters refer to the remaining 34 AGOA-eligible countries. Governance.
To examine differences in the quality of governance (also known as "institutions") between AGOA-eligible and ineligible countries, we reviewed a judgmental sample of empirical academic literature that provided evidence that property rights and political stability can promote economic growth. We judgmentally identified two measures of institutional quality from the Worldwide Governance Indicators that may capture aspects of the security of private property, namely scores for the rule of law and political stability. We compared the simple average of scores in 2000 and 2012 for AGOA-eligible countries versus ineligible countries. We rescaled the indicators to range from 0 to 5, with higher scores indicating better perceptions of governance. Aid and foreign direct investment. To examine differences in the amount of development assistance received by AGOA-eligible versus ineligible countries, we used annual data from the OECD on country programmable aid. According to the OECD, country programmable aid (CPA) is the proportion of aid that is subjected to multiyear programming at the country level, and hence represents a subset of official development assistance (ODA) flows. CPA is equivalent to gross ODA disbursements by recipient but excludes spending that (1) is inherently unpredictable (humanitarian aid and debt relief); (2) entails no flows to the recipient country (administration costs, student costs, development awareness and research, and refugee spending in donor countries); or (3) is usually not discussed between the main donor agency and recipient governments (food aid, aid from local governments, core funding to nongovernmental organizations, aid through secondary agencies, ODA equity investments, and aid that is not allocable by country). CPA counts loan repayments among the aid transferred from donor countries to developing countries.
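The averaging and rescaling steps described above can be sketched in code. All country names and figures below are hypothetical (not actual World Bank data), and the rescaling assumes a simple linear shift from the Worldwide Governance Indicators' approximate original range of -2.5 to 2.5:

```python
# Illustrative sketch of the averaging and rescaling methods described
# above; all country names and figures are hypothetical.

# GDP (y_i, in 2005 U.S. dollars) and population (L_i) for a small group
countries = {
    "A": (50e9, 10e6),
    "B": (20e9, 40e6),
    "C": (5e9, 5e6),
}

# Equation (1): the population-weighted average GDP per capita equals
# the group's total GDP divided by the group's total population.
total_gdp = sum(y for y, _ in countries.values())
total_pop = sum(l for _, l in countries.values())
weighted = total_gdp / total_pop

# Equivalent weighted-sum form: sum_i (L_i / sum_j L_j) * (y_i / L_i)
weighted_alt = sum((l / total_pop) * (y / l) for y, l in countries.values())
assert abs(weighted - weighted_alt) < 1e-9

# Simple (unweighted) average of per-country GDP per capita, analogous
# to the simple averages used for governance scores and aid per capita.
simple = sum(y / l for y, l in countries.values()) / len(countries)

# Rescaling a governance score to the 0-to-5 range, assuming the
# indicator's original range is approximately -2.5 to 2.5.
def rescale(score):
    return score + 2.5

print(round(weighted, 2))  # 1363.64
print(round(simple, 2))    # 2166.67
print(rescale(-0.5))       # 2.0
```

The assertion checks that the two forms of equation (1) are algebraically identical; the gap between the weighted and simple averages illustrates how population weighting pulls the group figure toward large-population countries.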
We represented country programmable aid in per person units by dividing the program aid total by the total population of the country. Data on population were from the World Development Indicators. We computed the simple average of aid per person in each year from 2005 to 2012 for AGOA-eligible countries and AGOA-ineligible countries. To examine differences in the amount of foreign direct investment received by AGOA-eligible versus ineligible countries, we used annual data on net inflows of foreign direct investment as a percentage of GDP from the World Bank World Development Indicators. We computed the simple average of these series in each year from 2001 to 2013 for AGOA-eligible countries and AGOA-ineligible countries. In using the FDI data, we checked for outliers and missing values and identified Equatorial Guinea as an outlier based on comparisons with data from other sources; values for Equatorial Guinea's net FDI inflows as a percentage of GDP were omitted from the calculation of the average. We conducted this performance audit from April 2014 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Central African Republic (eligible Oct. 2, 2000; ineligible Jan. 1, 2004)
Congo, Democratic Republic of (eligible Oct. 31, 2003; ineligible Jan. 1, 2011)
Congo, Republic of (eligible Oct. 2, 2000)
Côte d'Ivoire (eligible May 16, 2002; ineligible Jan. 1, 2005; eligibility regained Oct. 25, 2011)
Gambia, The (eligible Mar. 28, 2003; ineligible Jan. 1, 2015)
Guinea (eligible Oct. 2, 2000; ineligible Jan. 1, 2010; eligibility regained Oct. 25, 2011)
Guinea-Bissau (eligible Oct. 2, 2000; ineligible Jan.
1, 2013; eligible Dec. 23, 2014)
Madagascar (eligible Oct. 2, 2000; ineligible Jan. 1, 2010; eligibility regained June 26, 2014)
Mali (eligible Oct. 2, 2000; ineligible Jan. 1, 2013; eligibility regained Dec. 23, 2013)

[Figure legend: Inflation-adjusted U.S. dollars (2005 base year)]

In addition to the person named above, Christine Broderick (Assistant Director), Ming Chen (Assistant Director), Rhonda M. Horried (Analyst-in-Charge), Michael Hoffman, John O'Trakoun, Qahira El'Amin, Giselle Cubillos-Moraga, Thomas Hitz, David Dayton, Oziel A. Trevino, Jill Lacey, and Ernie Jackson made significant contributions to this report.

African Growth and Opportunity Act: USAID Could Enhance Utilization by Working with More Countries to Develop Export Strategies. GAO-15-218. Washington, D.C.: January 22, 2015.
Foreign Assistance: USAID Should Update Its Trade Capacity Building Strategy. GAO-14-602. Washington, D.C.: September 10, 2014.
African Growth and Opportunity Act: Observations on Competitiveness and Diversification of U.S. Imports from Beneficiary Countries. GAO-14-722R. Washington, D.C.: July 21, 2014.
Sub-Saharan Africa: Trends in U.S. and Chinese Economic Engagement. GAO-13-199. Washington, D.C.: February 7, 2013.
Foreign Assistance: The United States Provides Wide-ranging Trade Capacity Building Assistance, but Better Reporting and Evaluation Are Needed. GAO-11-727. Washington, D.C.: July 29, 2011.
U.S.-Africa Trade: Options for Congressional Consideration to Improve Textile and Apparel Sector Competitiveness under the African Growth and Opportunity Act. GAO-09-916. Washington, D.C.: August 12, 2009.
International Trade: U.S. Trade Preference Programs: An Overview of Use by Beneficiaries and U.S. Administrative Reviews. GAO-07-1209. Washington, D.C.: September 27, 2007.
Foreign Assistance: U.S. Trade Capacity Building Extensive, but Its Effectiveness Has Yet to Be Evaluated. GAO-05-150. Washington, D.C.: February 11, 2005.
Enacted in 2000 and set to expire in September 2015, AGOA is a trade preference program that seeks to promote economic development in 49 sub-Saharan African countries by allowing eligible countries to export qualifying goods to the United States without import duties. The act requires the U.S. government to conduct an annual eligibility review to assess each country's progress on the economic, political, and development reform objectives that countries must address to remain eligible for AGOA benefits. AGOA also requires an annual forum to foster closer economic ties between the United States and sub-Saharan African countries. GAO was asked to review various issues related to AGOA's economic development benefits. In this report, GAO examines (1) how the AGOA eligibility review process has considered economic, political, and development reform objectives described in the act and (2) how sub-Saharan African countries have fared in certain economic development outcomes since the enactment of AGOA. GAO reviewed documents and interviewed officials from U.S. agencies to examine the relationship between the U.S. government's review process and AGOA reform criteria. GAO analyzed trends in economic development indicators for AGOA-eligible and ineligible countries from 2001 to 2012, the latest year for which data were available for most countries. The U.S. government uses the annual eligibility review process required by the African Growth and Opportunity Act (AGOA) to engage with sub-Saharan African countries on their progress toward economic, political, and development reform objectives reflected in AGOA's eligibility criteria. Managed by the Office of the United States Trade Representative, the review process brings together officials from U.S. agencies each year to discuss the progress each country is making with regard to AGOA's eligibility criteria and to reach consensus as to which countries should be deemed eligible to receive AGOA benefits.
Over the lifetime of AGOA, 13 countries have lost AGOA eligibility, although 7 eventually had it restored (see figure). To encourage reforms, the U.S. government will engage with countries experiencing difficulty meeting eligibility criteria and may specify measures a country can take. For example, U.S. officials met with Swaziland officials over several years to discuss steps to improve labor rights. However, Swaziland did not make the necessary reforms and lost eligibility effective in January 2015. GAO analyzed data on economic development indicators for sub-Saharan African countries that were eligible and ineligible for AGOA in 2012; the results showed that eligible countries fared better than ineligible countries on some economic measures since the enactment of AGOA. The extent to which this outcome is attributable to AGOA, however, is difficult to isolate after additional factors are taken into consideration. Other factors, such as the small share of AGOA exports in the overall exports of many AGOA-eligible countries, the role of petroleum exports in recent income growth, the quality of government institutions, and differences in levels of foreign aid and investment, make it difficult to isolate AGOA's contribution to overall economic development. For example, AGOA exports are a small share of overall exports for the majority of AGOA-eligible countries. GAO found evidence that increasing energy prices may also have contributed to income growth within AGOA-eligible petroleum-exporting countries. GAO also found that AGOA-eligible countries on average had higher governance scores and received more foreign aid and investment compared with ineligible countries. These differences may have contributed to economic development in AGOA-eligible countries, but they may also have been facilitated by AGOA, a possibility that makes it difficult to isolate AGOA's impact on economic development. GAO is not making any recommendations.
Over the last decade, nursing home ownership and operating structures have continued to evolve, including an increase in private investment ownership of nursing homes and the development of more complex structures. Nursing home ownership varies in terms of profit status, level of management involvement, number of homes owned, and whether the real estate of homes is owned or leased.

Profit status. Owners may be for-profit, nonprofit, or government entities; about two-thirds of nursing homes are for-profit businesses. In general, for-profit businesses, which may be publicly traded or privately owned, have a goal of making profits that are distributed among the owners and stockholders. In contrast, a nonprofit entity receives favorable tax status because it may not operate for the benefit of, nor distribute revenues to, private interests.

Management involvement. Nursing home owners vary in terms of their involvement in management of the business: they may be the operators, and hold the state license, or they may contract with separate licensed entities to manage the day-to-day operations.

Number of homes owned. Owners or operators may have only one facility or they may have multiple facilities across one or more states that are part of a chain. Owners or operators may also have multiple chains. According to a study conducted for the Department of Health and Human Services, about half of nursing homes are part of a chain.

Real estate. Owners or operators do not necessarily own the real estate where care is delivered, but instead may lease it. The separation of real estate assets from the operations may be done to obtain financing or in an attempt to protect real estate assets from malpractice claims. Furthermore, the owners, leaseholders, and operators may or may not be owned by the same or related entities.

PI firm nursing home ownership.
In general, PI firms use a combination of investment capital and debt financing to acquire companies, including nursing home companies, with a goal of making a profit and eventually returning that profit to investors and the firm. As we noted in our prior report, some of the 10 PI firms we studied acquired both the operations and the real estate of nursing home chains while others only acquired the real estate. The former firms sit on the chains' boards of directors and told us that their role is to provide strategic direction rather than directing day-to-day operations. In contrast, PI firms we studied that only purchased real estate do not sit on the nursing home chains' boards of directors. Among the PI firms that shared their reasons for investing in the nursing home industry, most cited the increased demand for long-term care due to an aging population. We also reported that the investment time horizons and objectives of PI firms vary. Some PI firms purchased the homes with a planned short-term "exit strategy" and others intended to hold the investment over the long term. PI firm managers said they are able to make business improvements that their publicly traded competitors may be less willing to make because they generally are not subject to periodic disclosure requirements about their financial performance and therefore are not tied to producing profits on a quarterly basis. In addition, PI firms have said that they increase the operator's access to funding that can be used to increase staff wages, enhance operations, or modernize facilities, which ultimately may result in improved quality of care. PI firm business strategies. PI firms may pursue different business strategies with respect to the types of residents they want to attract and the efficiency of their operations. Researchers have found that some nursing homes may specialize in caring for residents with certain care needs or Medicare residents.
Care for such residents may result in higher levels of reimbursement. Indeed, prior to and after acquisition, PI homes we studied had a higher average percentage of residents whose care was reimbursed by Medicare compared to other for-profit and nonprofit homes. After acquisition, the percentage of residents in PI homes whose care was paid for by a source other than Medicare or Medicaid was higher on average than in other for-profit homes, but lower than in nonprofit nursing homes. Prior to acquisition, the average occupancy rates in PI homes were not significantly different from other homes. However, after acquisition in 2009, the average occupancy rates in PI homes were higher than other for-profit homes, although they did not differ significantly from nonprofit homes’ occupancy rates. The Social Security Act requires all nursing homes that participate in Medicare and Medicaid to undergo periodic assessments of compliance with federal quality standards. It also includes certain ownership reporting requirements. Under contract with CMS, state survey agencies conduct standard surveys, which occur once a year, on average, and complaint investigations as needed. A standard survey involves a comprehensive assessment of about 200 federal quality standards. In contrast, complaint investigations generally focus on a specific allegation regarding resident care or safety made by a resident, family member, or nursing home staff member. Deficiencies identified during either standard surveys or complaint investigations are classified in 1 of 12 categories according to their scope (i.e., the number of residents potentially or actually affected) and severity (i.e., the potential for or occurrence of harm to residents). Serious deficiencies indicate care problems that have resulted in actual harm or immediate jeopardy (actual or potential for death or serious injury) for one or more residents. 
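As one illustration of how the 12 categories arise, CMS conventionally crosses four severity levels with three scope levels and letters the resulting grid A through L. The sketch below reflects that common convention; it is an illustrative assumption about the grid's layout, not drawn from the report itself:

```python
# Illustrative sketch of the CMS scope/severity grid: four severity
# levels crossed with three scope levels yield 12 deficiency categories,
# conventionally lettered A (least serious) through L (most serious).
severities = [
    "potential for minimal harm",            # A, B, C
    "potential for more than minimal harm",  # D, E, F
    "actual harm",                           # G, H, I
    "immediate jeopardy",                    # J, K, L
]
scopes = ["isolated", "pattern", "widespread"]

letters = "ABCDEFGHIJKL"
grid = {}
for i, sev in enumerate(severities):
    for j, scope in enumerate(scopes):
        grid[(sev, scope)] = letters[i * 3 + j]

# "Serious" in the report's sense: actual harm or immediate jeopardy
# for one or more residents, i.e., letters G and above in this grid.
def is_serious(letter):
    return letter >= "G"

print(grid[("actual harm", "isolated")])                       # G
print(is_serious(grid[("immediate jeopardy", "widespread")]))  # True
```

Under this convention, a deficiency's letter encodes both dimensions at once, which is why analyses can collapse the 12 categories into a binary serious/not-serious indicator.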
We, CMS, and other researchers have examined the rates of deficiency citations, by state and among groups of nursing homes, to track trends in the proportion of homes with serious deficiencies and better understand recurring care problems. Our prior reports identified considerable interstate variation in citations for serious deficiencies on standard surveys and the understatement of serious deficiencies on those surveys. Although several studies have shown that for-profit nursing homes generally have a greater number of total deficiency citations than nonprofit homes, others have found no statistical difference in total deficiency citations between for-profit and nonprofit homes. Similarly, research that examined differences in the citations for serious deficiencies has not consistently found a difference between for-profit and nonprofit homes. One study examined the effect of PI acquisition on total and serious deficiencies; it did not find a significant difference from before to after PI acquisition. A different study that examined the impact of ownership of nursing home operations and real estate found that deficiency rates were similar across homes regardless of whether or not ownership was split between different entities. Nursing homes employ three types of nursing staff—RNs, LPNs, and CNAs. The responsibilities and salaries of these three types of staff are related to their level of education. The staffing mix—that is, the balance a nursing home maintains among RNs, LPNs, and CNAs—is generally related to the needs of the residents served. For example, a higher proportion of RNs may be employed to meet residents’ needs in homes that serve greater numbers of residents with acute care needs or those with specialty care units (such as units for residents who require ventilators). However, homes may not be able to pursue their ideal staffing mix because of RN shortages in certain geographic areas. 
High turnover among licensed nurses and CNAs may also affect staffing mix.

Licensed Nurses and Nurse Aides

RNs have at least a 2-year degree and are licensed in a state. Due to their advanced training and ability to provide skilled nursing care, RNs are paid more than other nursing staff. Generally, RNs are responsible for managing residents' nursing care and performing complex procedures, such as starting intravenous feeding or fluids.

LPNs have a 1-year degree, are also licensed by the state, and typically provide routine bedside care, such as taking vital signs.

CNAs are nurse aides or orderlies who work under the direction of licensed nurses, have at least 75 hours of training, and have passed a competency exam. CNAs' responsibilities usually include assisting residents with eating, dressing, bathing, and toileting. In a typical nursing home, CNAs have more contact with residents than other nursing staff and provide the greatest number of hours of care per resident per day. CNAs generally are paid less than RNs and LPNs.

Researchers have found that higher total and RN staffing levels are typically associated with higher quality of care as shown by a wide range of indicators, including deficiencies and health outcomes. Lower total nurse staffing levels and lower levels of RN staffing have been linked to higher rates of deficiency citations. In addition, higher total nurse staffing ratios (hours per resident per day), and higher levels of RN staffing in particular, have been associated with better health outcomes (such as fewer cases of pressure ulcers, urinary tract infections, malnutrition, and dehydration) as well as improved residents' functional status. A home's management of its nurse staffing has the potential to affect the quality of resident care, as well. For example, nursing staff turnover complicates nursing homes' efforts to train their staff and can contribute to quality problems.
There are no federal minimum standards linking nurse staffing to the number of residents but a number of states have such standards. By statute, nursing homes that participate in Medicare and Medicaid are required to have sufficient nursing staff to provide nursing and related services to allow each resident to attain or maintain the highest practicable physical, mental, and psychosocial well-being. In addition to this general requirement, every nursing home must have 24 hours of licensed nurse (RN or LPN) coverage per day, including one RN on duty for at least 8 consecutive hours per day, 7 days per week. In contrast, one researcher reported that, as of 2010, 34 states had established minimum requirements for the number of nurse aide or direct care hours, which ranged from about 0.4 to 3.5 hours per resident per day. In 2000, CMS examined the impact of nurse staffing on quality of care in nursing homes. CMS concluded that a minimum nurse staffing ratio of 2.75 hours per resident day was needed to maintain quality of care, while also noting a preferred ratio of 3 hours and an optimal ratio of 3.9 hours. For RNs, CMS concluded that the minimum ratio should be 0.2 hours, with a preferred ratio of 0.45 hours. The average acuity of nursing home residents has increased since that report was issued. CMS did not recommend establishing minimum federal nurse-staffing standards, in part because staffing needs vary with residents’ care needs and management or nursing practices (such as training or policies affecting the retention of nursing staff) can influence the quality of care. Studies of trends in nurse staffing in the last few years have noted an increase in total nurse staffing and in licensed nurse staffing. In addition, several studies have shown that for-profit nursing homes generally have lower nurse staffing ratios, and lower RN ratios, than nonprofit homes. 
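A staffing ratio of this kind is simply nursing hours divided by resident-days. A minimal sketch (the staff hours and census below are hypothetical) of computing total and RN hours per resident per day and comparing them with the ratios from CMS's 2000 study:

```python
# Hypothetical one-day staffing snapshot for a single nursing home.
rn_hours, lpn_hours, cna_hours = 24.0, 40.0, 180.0  # hours worked that day
residents = 80                                       # resident census

# Hours per resident per day (HPRD) = nursing hours / resident-days.
total_hprd = (rn_hours + lpn_hours + cna_hours) / residents
rn_hprd = rn_hours / residents

# Ratios from CMS's 2000 study (hours per resident per day).
CMS_TOTAL_MIN, CMS_TOTAL_PREFERRED, CMS_TOTAL_OPTIMAL = 2.75, 3.0, 3.9
CMS_RN_MIN, CMS_RN_PREFERRED = 0.2, 0.45

print(round(total_hprd, 2))         # 3.05
print(round(rn_hprd, 2))            # 0.3
print(total_hprd >= CMS_TOTAL_MIN)  # True
print(rn_hprd >= CMS_RN_PREFERRED)  # False
```

In this example the home clears CMS's minimum total ratio and the RN minimum, but falls short of the preferred RN ratio, illustrating how total and RN ratios can lead to different conclusions about the same home.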
One study examined the effect of PI ownership on nurse staffing; it found that RN staffing declined after PI acquisition, but this decline had begun prior to acquisition. This study also found an increase in CNA staffing after PI acquisition. A different study that examined the impact of ownership of nursing home operations and real estate on nurse staffing found that RN staffing was higher when real estate was owned than when it was leased or when ownership arrangements were mixed. Nursing home costs are determined by the mix of residents and the management of a home’s resources to meet its residents’ needs. The costs of caring for any particular nursing home resident vary with the type of services and amount of care needed. Residents who require low- intensity nursing and therapy or custodial care, like the typical Medicaid resident, are less costly, in part because their care needs are not as heavily dependent on the services of licensed nurses. Medicare beneficiaries are typically more costly than Medicaid residents, have shorter stays, and are admitted with the expectation that they will rehabilitate, recover, and return to their residences. A growing share of nursing home residents requires rehabilitation therapies and intensive skilled nursing care, such as parenteral feeding and ventilator care that previously were provided primarily in hospital settings; these residents are more costly because they require more skilled nursing and therapy staff and specialized equipment. Salaries and labor-related costs for nursing and other staff account for more than half of a nursing home’s operating costs. Therefore a home’s decisions about its staffing mix are a key determinant of the home’s costs. To a lesser extent, the nursing home’s management of its capital assets—buildings, land, and equipment—also influences the home’s costs. 
New nursing homes and those that have been recently renovated may have additional expenses associated with facility construction and renovation that older buildings do not. In addition to a home’s occupancy rate, profitability is influenced by several other factors, including payment rates, the mix of residents, and the nursing homes’ management of resources. Medicare’s and 21 states’ Medicaid payment rates are prospectively set per diem amounts that take into account the relative care needs of the resident. Under such payment systems, nursing homes have an incentive to provide care at a cost below the payment amount because they can retain any excess revenue not spent providing care. Although Medicare generally pays for the care of the nursing home residents with the most complex care needs, Medicare and private insurance have the highest payment rates for nursing home care and, on average, reimburse homes more than the costs of care. On the other hand, industry representatives perennially express concerns that Medicaid payment rates in many states are so low that they do not cover the costs of providing care. Some nursing homes trying to increase their profitability may focus on reducing their costs, by providing fewer or less expensive services. Other homes trying to increase their profitability may staff their homes and renovate their buildings to attract the better-paying Medicare and private insurance residents that will enhance their revenues or profits. We and the Medicare Payment Advisory Commission have reported that for-profit nursing homes have a greater profit on their Medicare line of business than nonprofit homes, on average. The relationship between costs, profitability, and quality of care in nursing homes differs depending on how the home’s resources are deployed. 
A home that increases its nurse staffing or adopts a new technology to improve the quality of care may also reduce its profitability because it increases costs without increasing revenues. However, some expenditures may prevent additional costs or increase revenues and therefore lead to improved profitability. For example, an expense can prevent subsequent, costly care needs, such as when higher levels of RN staffing result in reduced levels of infections. As another example, expenses that boost the attractiveness of the home to better paying residents may also improve the home’s profitability, whether or not such expenses improve the quality of care.

PI homes, like other for-profit homes, had more total deficiencies than nonprofit homes in both 2003 and 2009. In 2009, PI homes did not differ significantly from nonprofit homes in the likelihood of a serious deficiency, but in 2003 the likelihood was higher in homes that were subsequently acquired by PI than in nonprofit homes. From 2003 to 2009, total deficiencies increased and the likelihood of a serious deficiency decreased in PI homes; the changes in these deficiency measures from 2003 to 2009 in other for-profit and nonprofit homes did not differ significantly from the changes in PI homes. On average, PI homes had more total deficiencies than nonprofit homes in both 2003 and 2009. (See fig. 1.) PI homes did not differ significantly from other for-profit homes in total deficiencies in either year. Total deficiencies in PI homes increased from 2003 to 2009; this change was not significantly different from the change in other homes. Among PI homes, total deficiencies did not differ significantly as a function of whether the same firm acquired the operations and real estate or not. Our examination of total deficiencies in each of five PI firms’ homes indicated some differences between PI firms, but the differences we observed generally existed prior to acquisition and persisted after acquisition.
For example, in comparison to other homes acquired by PI firms, total deficiencies were lower in both 2003 and 2009 in homes of one firm and were greater in both years in homes of a second firm. In 2009, PI homes did not differ significantly from nonprofit homes in the likelihood of a serious deficiency when we controlled for other explanatory factors, even though PI homes were more likely than nonprofit homes to have had a serious deficiency in 2003. (See fig. 2.) The likelihood of a serious deficiency in other for-profit homes was not significantly different from PI homes in either year. The likelihood of a serious deficiency decreased from 2003 to 2009 in PI homes, and this change was not significantly different from the change in other for-profit and nonprofit homes. In addition, the likelihood that a PI home would have had a serious deficiency in 2009 did not differ significantly as a function of whether the same firm owned both the operations and real estate or not, although in 2003, the likelihood was significantly lower in homes for which the same PI firm acquired both operations and real estate. Our examination of serious deficiencies in each of five PI firms’ homes indicated some differences between PI firms, but these differences existed prior to acquisition and persisted after acquisition. In comparison to other homes acquired by PI firms, the likelihood was lower in both 2003 and 2009 in homes of one firm and was greater in both years in homes of a second firm.

On average, total reported nurse staffing ratios (hours per resident per day) were lower for PI homes than for other types of homes in both 2003 and 2009, but PI homes’ reported RN ratios—the most skilled component of total nurse staffing—increased more from 2003 to 2009.
On average, reported ratios for LPNs—the other type of licensed nurse—also increased from 2003 to 2009 in PI homes; this change was not significantly different from the change from 2003 to 2009 in other for-profit and nonprofit homes. In contrast, reported CNA ratios for PI homes did not change significantly from 2003 to 2009, but increased for other types of homes. In both 2003 and 2009, PI homes reported lower average total nurse staffing ratios than other types of homes. (See fig. 3.) Average reported total nurse staffing ratios for PI homes increased from 2003 to 2009; this change was not significantly different from the change in either other for-profit or nonprofit homes. The unadjusted average total nurse staffing ratios reported in 2009 for each ownership type exceeded the ratio identified as “preferred” by CMS in its 2000 report, but fell short of the level CMS identified as “optimal.” Our examination of reported average total nurse staffing ratios for each of five PI firms indicated some differences between firms. We found that the change in these ratios from 2003 to 2009 in one PI firm’s homes was not as great as the increase for other PI-acquired homes; in 2009, total nurse staffing ratios for that firm’s homes were lower than for other PI-acquired homes. Representatives of the nursing home operator for homes of this PI firm told us that they had focused on and reduced staff turnover since 2003.

The staffing mix in PI homes—the balance of RNs, LPNs, and CNAs—changed from 2003 to 2009, and the changes in staffing were different in PI homes than in other types of homes. Average reported ratios for RNs (one type of licensed nursing staff) increased more from 2003 to 2009 in PI homes than in other types of homes. Average ratios for LPNs (the other type of licensed nursing staff) also increased in PI homes from 2003 to 2009, but the change in PI homes did not differ significantly from the change in other for-profit and nonprofit homes.
In contrast, average reported ratios for CNAs (who are not licensed) did not change significantly from 2003 to 2009 for PI homes, but increased for both other types of homes. RN ratios. In 2009, average reported RN ratios for PI homes were greater than other for-profit homes and were also greater than nonprofit homes, when we controlled for other explanatory factors. (See fig. 4.) Average reported RN ratios for PI homes increased from 2003 to 2009, and this increase was greater than the change for both other types of homes. In 2003, average reported RN ratios for PI homes did not differ significantly from other for-profit homes when we controlled for other explanatory factors and were lower than for nonprofit homes. These ratios were greater for nonprofit homes than for other for-profit homes in both 2003 and 2009. The unadjusted average RN ratios reported in 2009 for each ownership type—PI, other for-profit, and nonprofit homes—fell short of the ratios identified as “preferred” by CMS in its 2000 report. In 2009, average reported RN ratios were higher if the same PI firm acquired both operations and real estate than if not. The increase in these ratios from 2003 to 2009 for PI homes was greater if the same PI firm acquired both operations and real estate than if not. (See fig. 5.) In 2003, average reported RN ratios did not differ significantly as a function of whether the same PI firm acquired both operations and real estate or not when we controlled for other explanatory factors. Our examination of RN ratios for five PI firms’ homes indicated some differences between firms. We found that the increase from 2003 to 2009 was greater for homes of two firms than for other homes acquired by PI. 
Representatives of the owners and operators of these homes told us that these homes generally had high levels of RN staff before acquisition either because they served a large proportion of short-term residents with high acuity or rehabilitation needs in one case, or because they treated residents in specialized care units (such as ventilator units). Representatives of each firm also said that increasing RN staff was part of an ongoing strategy to expand their capacity to care for such residents. For homes of the third PI firm, the change from 2003 to 2009 in RN ratios was not as great as the increase for other PI homes. This firm’s representatives told us that training can be more important than the number of staff and so they have focused their efforts on training and reducing staff turnover. The change in average reported RN ratios from 2003 to 2009 for two sets of homes for which different PI firms acquired the operations and real estate was less than the increase for other PI homes. The operator of one of these sets of homes told us that they had focused on promoting stable nursing leadership. LPN ratios. Average reported LPN ratios were lower for PI homes than other homes in both 2003 and 2009 when we controlled for other explanatory factors. For PI homes, these ratios increased from 2003 to 2009; this increase was not significantly different than the change for either other type of homes. Among PI homes, LPN ratios did not differ significantly as a function of whether the same firm acquired the operations and real estate or not. CNA ratios. Average reported CNA ratios were lower for PI homes than other homes in both 2003 and 2009. (See fig. 6.) Average reported CNA ratios for PI homes did not change significantly from 2003 to 2009, but increased for both other types of homes. 
Among PI homes, CNA ratios did not differ significantly as a function of whether the same firm acquired the operations and real estate or not when we controlled for other explanatory factors. Our examination of the CNA ratios for five PI firms’ homes indicated some differences between firms. In comparison to other homes acquired by PI firms, we found that for one set of homes where different PI firms acquired the operations and real estate these ratios were lower in 2009, but did not differ significantly in 2003. For another set of homes where different PI firms acquired the operations and real estate, these ratios were higher in 2009, but did not differ significantly in 2003. Representatives of the operator for the nursing homes with lower CNA ratios in 2009 told us that they had acquired labor-saving technology and focused on reducing turnover. They reported that turnover of nursing staff that provide direct care to residents in their homes had been 90 percent in 2003, but was 59 percent in 2009.

The financial performance of PI homes showed both cost increases and higher margins when compared to other for-profit or nonprofit homes. Specifically, facility costs per resident day for PI homes increased more, on average, from before acquisition (2003) to after acquisition (2008) than other for-profit and nonprofit homes. Among PI-acquired homes, we observed less of an increase if the same PI firm owned the operations and real estate than if not. The results were similar when we examined capital-related costs, a component of facility costs. Despite increased costs, PI homes also showed increased facility margins, but the increase was not significantly different from the change in other for-profit homes. In contrast to PI and other for-profit homes, the margins of nonprofit homes decreased.
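The financial measures compared here reduce to simple per-resident-day ratios (facility costs divided by total resident days, as defined in the methodology appendix). The sketch below illustrates the arithmetic with hypothetical figures; the margin formula and every number used are assumptions for illustration, not values or definitions taken from the report:

```python
# Illustrative only: hypothetical figures, not data from the report.
# Facility cost per resident day = total facility costs / total resident days
# (per the methodology appendix); the margin formula below is an assumption:
# margin = (net revenue - facility costs) / net revenue.

def cost_per_resident_day(total_facility_costs, total_resident_days):
    return total_facility_costs / total_resident_days

def facility_margin(net_revenue, total_facility_costs):
    return (net_revenue - total_facility_costs) / net_revenue

# A hypothetical 100-bed home at 85 percent occupancy over one year:
resident_days = 100 * 0.85 * 365           # about 31,025 resident days
costs = resident_days * 210.0              # assumed $210 in costs per resident day
revenue = resident_days * 225.0            # assumed $225 in revenue per resident day

print(round(cost_per_resident_day(costs, resident_days), 2))   # 210.0
print(round(facility_margin(revenue, costs), 4))               # ≈ 0.0667
```

Under these assumptions, a home can raise both its costs and its margin at once, as the report notes, whenever added spending is matched by a larger gain in revenue from better-paying residents.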
Both facility costs per resident day and a component of those costs—capital-related costs per resident day—increased in PI homes from 2003 to 2008, and this increase was greater than for other for-profit and nonprofit homes. Facility costs. In both 2003 and 2008, PI homes reported lower facility costs per resident day, on average, than nonprofit homes, even though these costs increased more in PI homes from 2003 to 2008 than in both nonprofit homes and other for-profit homes. (See fig. 7.) Facility costs include all costs associated with maintaining and operating a nursing home, such as staff salaries, administrative costs, and capital-related costs. While PI homes did not differ significantly from other for-profit homes in 2003 when we controlled for other explanatory factors, they reported higher costs in 2008. The increase in facility costs per resident day from 2003 to 2008 was less, on average, if the same PI firm acquired both the operations and real estate than if it did not. (See fig. 8.) While the latter group of homes reported lower costs in 2003, these two groups reported costs in 2008 that did not differ significantly after we controlled for other explanatory factors. Our examination of facility costs for each of five PI firms indicated some differences among firms. In comparison to other homes acquired by PI, the increase in facility costs from 2003 to 2008 was greater in one set of homes where different PI firms owned the operations and real estate, but the change was not as great in another PI firm’s homes. Capital-related costs. Average capital-related costs per resident day in PI homes increased from 2003 to 2008, and this change was greater for PI homes than for other types of homes. (See fig. 9.) Capital-related costs are a component of total facility costs that capture mortgage payments, rents, depreciation, taxes and insurance, as well as land and building improvements, including upgrades to equipment.
Although capital-related costs were lower in PI homes than in other for-profit and nonprofit homes in 2003 when we controlled for other explanatory factors, they were higher than both other types of homes in 2008. The average increase in capital-related costs from 2003 to 2008 was less if the same PI firm acquired both operations and real estate than if not. (See fig. 10.) Additionally, capital-related costs were lower in both years if the same PI firm acquired both the operations and real estate than if not, when we controlled for other explanatory factors. Our examination of capital-related costs for each of five PI firms’ homes indicated some differences between firms. Two PI firms’ homes showed increases that were greater than other homes acquired by PI firms: (1) one of these sets of homes, for which different PI firms acquired the operations and real estate, reported lower capital-related costs in 2003 than other PI homes, but higher costs in 2008 and (2) the other firm’s homes reported higher capital-related costs than other PI homes in both 2003 and 2008. A representative of the latter PI firm told us that they had secured a $100 million line of credit for the modernization of the firm’s nursing homes. Investment in the homes had been ongoing prior to acquisition, this representative said, but the homes’ access to capital had increased after acquisition. In contrast, the change in capital-related costs for the remaining three firms’ homes was not as great as the increase in other PI homes. Two of these three firms’ homes reported lower capital-related costs in both 2003 and 2008. Representatives from a nursing home chain owned by one of these firms commented that the majority of investments were in staffing. They noted that, in contrast, their peers had invested in their own facilities to attract the highest paying residents.
Representatives from another firm that owned nursing home real estate, but not operations, commented that, depending on the resident population served and the location of the home, renovations aimed at attracting more acute (and higher paying) residents may not pay off. For example, homes in a rural area might not be able to attract the appropriate staff and mix of residents to make renovations aimed at treating more acute-care residents worth the costs. However, they told us that these older, rural homes still effectively serve a segment of the market despite the lower level of capital investment.

Facility margins for PI homes were, on average, higher in 2003 and 2008 than for other for-profit and nonprofit homes. (See fig. 11.) Facility margins in PI homes increased from 2003 to 2008; this increase was not significantly different from the average change for other for-profit homes, but was greater than the change in margins for nonprofit homes. In fact, facility margins for nonprofit homes decreased from 2003 to 2008. The increase in facility margins among PI homes from 2003 to 2008 was not significantly different, on average, if the same PI firm acquired both the homes’ operations and the real estate than if it did not. However, facility margins for the former were, on average, higher both in 2003 and 2008. Our examination of facility margins for each of five PI firms’ homes indicated some differences between firms. We found that two firms’ homes showed an increase in facility margins that was greater than that of the other PI-acquired homes we studied. Representatives of one of these firms told us that increased margins were the result of increased spending in the homes with a focus on investments in technology, staffing, and treating higher acuity residents. They told us that the strategy of the nursing home chain they acquired had not changed and that both increased spending and margins were present before the acquisition.
Two firms’ homes showed a change in facility margins that was less than other PI homes. Representatives for the nursing home chain operating one of these two sets of homes commented that they had not been focused on the margins; the chain’s chief executive officer noted that he was evaluated by its PI owner based on the quality of care provided, not margins.

The acquisition of nursing homes by private investment firms has raised questions about the potential effects on the quality of care. Our analyses did not find an increase in the likelihood of serious deficiencies or a decrease in average reported total nurse staffing for the PI-acquired homes we studied. In fact, reported RN staffing increased more in PI-acquired homes than in other homes. However, the performance of these PI homes was mixed with respect to the other quality variables we examined. For example, PI-acquired homes had more total deficiencies and lower total nurse staffing ratios than nonprofit homes, both before and after acquisition. Also, despite concerns that PI firms might cut costs to improve profitability, we found that reported facility costs increased in the PI-acquired homes we studied. Margins also increased in the PI-acquired homes we studied from before to after acquisition, while they decreased in nonprofit homes. It is possible to increase both costs and margins because certain expenditures may prevent subsequent, costly care, or increase a home’s attractiveness to better paying residents. PI-acquired homes were more similar to for-profit than to nonprofit homes with respect to the change in margins and total deficiencies, but were like neither for-profit nor nonprofit homes with respect to the change in staffing mix and capital-related costs.
In addition, compared to homes for which the same PI firm acquired both operations and real estate, PI-acquired homes for which ownership was split had lower reported RN ratios, higher reported capital-related costs, and lower reported facility margins in the period after acquisition. Our findings were consistent with the fact that the PI firms we studied are, to varying degrees, attempting to increase the attractiveness of their homes to higher paying residents, including those whose care is reimbursed by Medicare. The homes acquired by the PI firms we studied had a higher average proportion of Medicare residents both before and after acquisition. Our analyses and interviews with PI firm officials revealed differences in their management approaches. For example:

• Officials at two PI firms noted that they were continuing the existing strategy of the homes they acquired by expanding the capacity to care for residents with high acuity or specialized needs. Consistent with their strategies, both firms’ homes reported a greater increase in RN staffing from 2003 to 2009 than other PI-acquired homes. One of these firms indicated that facility modernization, which was associated with its strategy, had continued since acquisition and in fact access to capital for such improvements had increased after acquisition. Both firms’ homes showed an increase in facility margins that was greater than the other PI homes we studied.

• Officials at a third PI firm stated that training can be more important than the number of staff and so focused on training and reducing staff turnover. They also stated that they did not focus on facility improvements to the same degree as other PI firms. The increase in facility margins for this firm’s homes was less than for other PI firms. We also found that the likelihood of a serious deficiency for this firm’s homes was lower than for other PI firms’ homes in both 2003 and 2009.
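The before-and-after comparisons that run through these findings contrast one ownership group's change from 2003 to 2009 against another group's change over the same period. A minimal sketch of that difference-in-changes arithmetic follows, using hypothetical staffing values; the report's actual analyses also controlled for other explanatory factors and tested statistical significance, both of which this sketch omits:

```python
# Illustrative only: hypothetical group means, not the report's estimates,
# with no statistical controls or significance testing.

def mean(values):
    return sum(values) / len(values)

def change(before, after):
    """Average change for one ownership group between the two time points."""
    return mean(after) - mean(before)

# Hypothetical RN hours per resident per day (before = 2003, after = 2009):
pi_2003, pi_2009 = [0.50, 0.55, 0.60], [0.70, 0.75, 0.80]
np_2003, np_2009 = [0.65, 0.70, 0.75], [0.72, 0.77, 0.82]

pi_change = change(pi_2003, pi_2009)          # ≈ 0.20
nonprofit_change = change(np_2003, np_2009)   # ≈ 0.07
# The quantity of interest: did PI homes change more than nonprofit homes?
difference_in_changes = pi_change - nonprofit_change
print(round(difference_in_changes, 2))        # 0.13
```

A positive difference under these assumed numbers would correspond to the kind of finding reported above, that RN staffing increased more in PI-acquired homes than in other homes.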
We provided a draft of this report to the Department of Health and Human Services (HHS) for comment and also invited the PI firms from which we obtained information for this report to review the draft. In its written comments, HHS provided CMS’s observations on our methodology. HHS’s comments are reproduced in appendix II. CMS suggested an alternative to our “before and after” acquisition methodology to take into account the fact that PI firms acquired nursing homes at different points in time during 2004 through 2007. In addition, CMS identified a number of alternative analyses that it believed could help to explore the relationship between PI ownership and quality. CMS also acknowledged that the report is an important step toward better understanding the effect of nursing home ownership on the quality of care provided to residents. In general, representatives of the PI firms commented that the report handled a complex topic well and that its conclusions were fair and balanced. Several also commented that our acknowledgement of limitations to our analyses was important.

The alternative methodology presented in CMS’s comments would tailor a pre- and post-acquisition analysis to the year prior to each PI firm’s acquisition of a nursing home chain and to a time point after the acquisition. One of the studies we cited used such a methodology. We chose to use a different methodology and believe that the use of different methodologies enhances the understanding of an issue. Our methodology used 2003 (pre) and 2008/2009 (post) for nursing homes acquired by PI firms from 2004 to 2007, irrespective of the specific year in which the acquisition occurred. We selected the 2004 through 2007 timeframe because it was the period of heaviest PI acquisition of nursing home chains. Finally, CMS said that the exclusion of homes acquired from 2004 through 2007 but sold by PI firms by 2009 could have biased our results.
However, only 6 homes were excluded because they were sold and another 55 were excluded because we could not verify they were still owned by the acquiring PI firm in 2009. These exclusions represented less than 5 percent of the PI homes we studied. We believe these exclusions were appropriate and that it is unlikely that such a small share of homes would have notably affected our findings. CMS also suggested a number of alternative approaches for exploring the relationship between private investment and quality of care, such as (1) using measures derived from its Five-Star Quality Rating System, (2) examining the citation of serious deficiencies on successive surveys, and (3) studying the association between aggregate staffing payroll and quality of care. We agree that there are other approaches that can be used to study the relationship between ownership and nursing home quality of care. We chose well-defined measures of deficiencies and nurse staffing that we and others have used to study nursing home quality.

In a few instances, CMS’s comments did not accurately describe our findings. For example, CMS stated that the increase in capital-related costs at PI-acquired homes from 2003 to 2008 was related largely to improving the attractiveness of facilities—facility modernization—to higher paying residents. However, we concluded that the increase in RN staffing from 2003 to 2009 was a key aspect of PI firms’ strategies to attract higher acuity, higher paying residents. In addition, CMS stated that our study showed that CNA and total nurse staffing ratios decreased in PI homes. Rather, we reported that average reported CNA ratios for PI homes did not change significantly from 2003 to 2009 and that average reported total nurse staffing ratios for PI homes increased from 2003 to 2009. Finally, we did not find that average total staffing ratios for any PI firms’ homes decreased or were unchanged from 2003 to 2009.
Instead, we reported that average total staffing increased in PI homes, although the increase in one firm’s homes was not as great as in other PI homes.

Representatives of most of the PI firms who provided oral comments generally told us that the report handled a complex topic well and they appreciated our statement of limitations of our methodology. However, several were concerned that the presentation of the report overemphasized results that reflected poorly on PI firms. Representatives of two firms specifically mentioned that the report presented negative findings first, saving the more positive results for later, and suggested that not everyone would read far enough to learn about the positive findings relative to the PI firms we studied or to read GAO’s conclusions. For example, we discuss total deficiencies and staffing before turning our attention to subsets of these measures—serious deficiencies and RN staffing. In serious deficiencies, PI firms’ homes were comparable to nonprofit homes, and in RN staffing they compared favorably to nonprofit homes. However, we believe we present the findings fairly and in a logical order. In addition, representatives of several PI firms provided specific comments on our findings about deficiencies and staffing. Regarding deficiencies cited on standard surveys and complaint investigations, one PI firm representative stated that the survey process resulted in more scrutiny of for-profit homes than nonprofit nursing homes. We consider cited deficiencies, particularly serious deficiencies, important measures of quality of nursing home care and our research has found that they represent real lapses in the care provided. Regarding our analysis of staffing ratios, the representatives of one firm stated that our analysis did not take into account staff efficiency. These representatives said that they had invested in labor-saving technology.
While staff efficiency may offset the need for more staff, in our analyses we could not measure or control for differences in staff efficiency using our datasets. The representatives of a different firm commented that we did not address changes in therapy staffing, noting that therapy staff had increased in its homes and that this increase offset some of the need for CNA staff. In our analysis of staffing, we chose to focus on nurse staffing because other research has associated it with quality of care. In general, representatives of the PI firms said that our findings on facility costs and margins were consistent with their own analyses. However, representatives of one firm explained that what we called “costs” they considered “investments.” They said that money spent to train staff, modernize facilities, and adopt electronic medical records reduced errors, prevented subsequent costs, and also improved care. On Medicare cost reports, such expenditures are generally known as costs. A different PI firm commented that our finding that capital-related costs were higher when ownership was split was logical because rents for an operator are generally higher than mortgage payments and may result in lower margins and discourage investments in RN staffing. A few PI firms also stated that the Medicare cost reports were not necessarily accurate with respect to capital-related costs. We acknowledged that the data in the Medicare cost reports are self-reported and have limitations, but all nursing homes are subject to the same reporting requirements and limitations and thus these data are comparable across the groups we analyzed. We incorporated technical comments provided by CMS and the representatives of PI firms as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To determine whether nursing homes that are owned by private investment (PI) firms differ from other nursing homes in deficiencies cited on state surveys, nurse staffing levels, or financial performance, we (1) identified nursing homes for which PI firms had acquired the operations or the real estate or both from 2004 through 2007 and (2) compared data from before and after acquisition of these homes to data from other nursing homes, including other for-profit homes and nonprofit homes. In addition, we reviewed published research on the quality and costs of nursing home care, our prior work on nursing homes, and other relevant documentation. We interviewed officials from the Centers for Medicare & Medicaid Services (CMS); representatives of PI firms that acquired nursing home operations, real estate, or both; representatives of companies that operate PI-owned nursing homes; and experts on nursing home quality and costs. This appendix provides information about (1) our data sources and the development of our analytic datasets, (2) our analytic approach, and (3) data reliability and limitations.

Based on our earlier work identifying the top 10 PI acquirers of nursing homes, we developed a list of homes acquired by PI firms from 2004 through 2007.
We chose 2004 through 2007 as our target acquisition interval because these were the years during which PI firms acquired the greatest number of nursing homes. We obtained data for our outcome variables from CMS. We used CMS’s Online Survey, Certification, and Reporting system (OSCAR) as our source of data regarding deficiencies, nurse staffing, and characteristics of all the nursing homes we analyzed, including PI, other for-profit, and nonprofit homes. OSCAR is the only national, uniform data source that contains this information. We used Medicare Skilled Nursing Facility (SNF) Cost Reports as our source of data regarding the financial performance of nursing homes. These reports are the only publicly available source of financial data on most Medicare providers and are a primary source of data used by CMS and others to examine nursing homes’ financial performance. We identified nursing homes with three types of ownership: PI-owned, other for-profit, and nonprofit. PI-owned nursing homes. We developed a list of nursing homes owned by the top 10 PI acquirers of nursing homes identified in our September 2010 report using information that these firms provided and other sources, such as nursing home chain Web sites. These 10 PI firms accounted for almost 90 percent of the nursing homes that were acquired by PI firms from 1998 through 2008. We included homes for which a PI firm acquired operations, real estate, or both, and were still owned by the acquiring PI firm in 2009. To compare data from before and after acquisition, we excluded homes acquired before 2004 or after 2007. We also reviewed information from the PI firms and other sources to determine whether the same PI firm acquired both the operations and real estate of these homes. 
When we could not determine whether the same PI firm owned both the operations and the real estate for a particular home—for example, when we knew that a PI firm owned the real estate for most, but not all, of the homes for which it owned operations, but we did not know which specific homes those were—we assigned it to the group with that firm’s usual ownership pattern. Other for-profit and nonprofit homes. We used OSCAR to identify the for-profit and nonprofit nursing homes that we compared to PI homes. To ensure that our comparison groups were appropriate, we excluded homes that were hospital-based or government-owned in 2009 (because they differ from other nursing homes in important ways, including resident needs and financial performance) and homes that were not certified by Medicare in 2009 (because almost all homes owned by the PI firms in our review were Medicare-certified). We also excluded homes for which we could not identify data from both before and after our target acquisition interval. OSCAR also includes data on nursing home characteristics, including profit status; chain affiliation; facility size as indicated by the number of beds certified by Medicare, Medicaid, or both; and state. OSCAR also includes information about the number of residents and their payers, which we used to calculate the percentage of residents whose care was paid by Medicare, Medicaid, or a source other than Medicare or Medicaid, and occupancy rate. We identified separate datasets for our analyses of deficiencies, nurse staffing, and financial performance. Deficiencies. To examine deficiencies, we used OSCAR data. OSCAR includes data about deficiencies that were cited during standard surveys of nursing homes (which are to be conducted, on average, every 12 months) and during complaint investigations, along with the dates of those surveys and investigations, allowing comparison of data from different points in time. 
Deficiencies identified during either type of survey are placed into 1 of 12 categories, identified by letter, according to the number of residents potentially or actually affected and the degree of relative harm involved. (See table 1.) Throughout this report, we refer to deficiencies at the actual harm and immediate jeopardy levels as serious deficiencies. To examine deficiencies, we sought OSCAR data from a single standard survey of each home from both 2003 and 2009, but used data from alternate years in a small proportion of the nursing homes in our analyses. Specifically, if no state standard survey was available from 2003 or 2009, we substituted data from 1 year later, if available; otherwise, we used data from 1 year before—with the constraint that the data for PI-acquired homes had to be from before the acquisition and at least 1 year after acquisition. For example, if 2009 data were not available for a particular home, we sought 2010 data, if available; otherwise, we used 2008 data with the constraint that the data must be from 1 year after acquisition for PI-acquired homes. We also collected OSCAR data on deficiencies cited during complaint investigations in calendar years 2003 and 2009. To avoid double counting, we excluded any complaint deficiencies that matched a deficiency cited in a standard survey that was conducted within 15 days of the complaint investigation. We refer to all data used in our analyses of deficiencies as having been from 2003 or 2009. We included data from 12,956 nursing homes in our analyses of deficiencies, of which 1,270 were PI-owned in 2009 and had been acquired from 2004 through 2007. Because we used data from 2003 and 2009 for homes acquired anytime from 2004 through 2007, the amount of time between the surveys that identified any deficiencies and PI acquisition varied. In most cases, the surveys were within 3 years of acquisition. Nurse staffing. 
We calculated four different staffing ratios, that is, nursing hours per resident per day: registered nurse (RN) ratios, licensed practical nurse (LPN) ratios, certified nurse aide (CNA) ratios, and total nurse staffing ratios (i.e., the total number of nursing hours, whether by RNs, LPNs, or CNAs, per resident per day). In each case, we included full-time, part-time, and contract hours, but we excluded hours reported for performing administrative duties or as Directors of Nursing. When calculating CNA staffing, we also included two other types of nursing staff—nurse aides in training and medication aides. We used the same set of nursing homes included in our analyses of deficiencies to analyze nurse staffing, but excluded homes from the staffing analyses if the data related to staffing appeared to represent data entry or other reporting errors. Specifically, we excluded facilities that, in either 2003 or 2009, reported
 more residents than beds,
 more than 10 percent of the home’s beds as not certified for Medicare,
 0 total nursing hours per resident per day,
 24 or more total nursing hours per resident per day, or
 staffing and census data that resulted in nurse staffing ratios that were three or more standard deviations above the mean, indicating that they were statistical outliers.
We included data from 11,522 nursing homes in our analyses of staffing ratios, of which 1,176 were PI-owned in 2009 and acquired from 2004 through 2007. Financial performance. To examine nursing homes’ financial performance, we used Medicare SNF cost reports to compute three measures:
 Facility costs per resident day, defined as the total facility costs—including both operating and capital costs—divided by total resident days.
 Capital-related costs per resident day, defined as capital-related costs allocated to nursing home resident care divided by total resident days.
 Facility margins, defined as the amount of total facility revenues exceeding total facility costs, divided by total facility revenues.
All Medicare-certified nursing homes—or SNFs—must submit cost reports on an annual basis to CMS. The cost report contains provider information—such as facility characteristics, utilization data, costs, and financial data—generally covering a 12-month period of operations based on the provider’s fiscal year. The cost report contains utilization and cost information on Medicare-covered services, and also contains information for services provided to all residents, regardless of payer. We used cost report data for the provider’s fiscal years 2003 and 2008 because fiscal year 2009 Medicare SNF cost reports were not available at the time we collected our data. For PI-acquired homes, we ensured that these data were from before and after acquisition. Our analyses of financial data also required information from OSCAR about facility characteristics such as the percentage of residents whose care was paid by Medicare or Medicaid and occupancy rate. We sought OSCAR data from calendar years 2003 and 2008, and if these data were not available, we substituted data from 1 year after, if available, otherwise 1 year before. We refer to all data used in our analyses of financial performance as having been from 2003 or 2008. We created different datasets to examine our three calculated measures of financial performance. For each measure, we excluded nursing homes if the cost report covered less than 10 or more than 14 months and those that did not have Medicare SNF cost reports or OSCAR data from both time periods. We also excluded nursing homes for which the data appeared to represent data entry or other reporting anomalies or were statistical outliers.
 Facility costs. Data for our analyses of facility costs were from 9,616 nursing homes, of which 1,089 were PI-owned in 2009 and acquired from 2004 through 2007.
We excluded homes that, in either 2003 or 2008, reported
 no facility costs or facility costs per resident day that were more than two times the interquartile range below the 25th or above the 75th percentile.
 Capital costs. Data for our analyses of capital-related costs were from 9,707 nursing homes, of which 1,088 were PI-owned in 2009 and acquired from 2004 through 2007. We excluded facilities that, in either 2003 or 2008, reported
 no capital-related costs or capital costs per resident day that were more than two times the interquartile range below the 25th or above the 75th percentile.
 Facility margins. Data for our analyses of facility margins were from 8,630 nursing homes, of which 955 were PI-owned in 2009 and acquired from 2004 through 2007. We excluded facilities that, in either 2003 or 2008, reported
 no facility revenues or missing margins or facility margins that were in the top or bottom 1 percent of all homes we studied, regardless of type of ownership.
Table 2 lists the variables we included in our datasets, describes our operational measures of these variables, and identifies the sources of the data we used to calculate these measures. We conducted both aggregated data analyses and analyses of data from specific PI firms’ homes. Unless otherwise specified, all results that we present were statistically significant at the 0.05 level in analyses of adjusted data. We used panel regression models to determine, at the aggregate level, whether nursing homes that were acquired by PI firms from 2004 through 2007 differed significantly, before and/or after the acquisition, from other nursing homes in our outcome variables—deficiencies, nurse staffing levels, or financial performance.
Using these models, we compared outcome data from homes with different types of ownership (PI, other for-profit, and nonprofit) at each of two points in time (2003 and 2009 for deficiencies and staffing, and 2003 and 2008 for financial performance) and we examined whether there were differences between years for PI homes and whether any such differences were similar to any differences between years in the other for-profit and nonprofit homes. We included data from before PI acquisition so we could determine whether the postacquisition data reflected preexisting differences. We included data from other types of nursing homes so we could determine whether any changes from before to after acquisition reflected changes that occurred regardless of type of ownership. We also compared data from PI homes for which the same firm acquired both operations and real estate to data from PI homes for which the same firm did not acquire both operations and real estate. Our panel regression models statistically controlled for variables that research has shown can influence nursing home deficiencies, staffing, and financial performance. These variables were (1) the percentage of residents for whom the payer was Medicare in 2003 and 2009; (2) the percentage of residents for whom the payer was neither Medicare nor Medicaid in 2003 and 2009; (3) chain affiliation in 2009; (4) facility size as indicated by the number of beds certified by Medicare, Medicaid, or both in 2009; (5) occupancy rate in 2003 and 2009; (6) market competition in 2003 and 2008; and (7) geographic location (state). We used random effects models rather than fixed effects models to measure not only the change in outcomes for the same nursing home groups over time, but also the difference between groups at each point in time.
Moreover, we wanted to accurately reflect the change over time in our control variables and their effects on our outcome variables— something that can be accomplished using a random effects model, but not a fixed effects model. Illustration. To illustrate our analytic strategy, consider the example of reported RN ratios. Unadjusted average (or mean) reported RN ratios are presented in table 3, along with the number of homes in our analyses. Our panel models analyze the data to identify the size and statistical significance of differences between means. Statistical significance is indicated by the probability (P-value) of coefficients calculated by the panel regression for the comparisons it tests. The specific comparisons tested by our panel regressions are based on independent variables and their interactions. Our panel regression models included a main effect for year and a main effect for ownership type (PI, other for-profit, and nonprofit). The models also included an interaction between year and ownership type, which allowed for the comparison of data between different types of ownership at each point in time as well as the difference between years. Therefore, the five terms in the model are year, other for-profit homes, nonprofit homes, year by other for-profit homes, and year by nonprofit homes. The interpretation of the model terms is as follows: (1) the main effect year measures the difference between 2003 and 2009 for PI homes, (2) the main effect for other for-profit measures the difference between PI and other for-profits in 2003, (3) the main effect for nonprofit measures the difference between PI and nonprofits in 2003, (4) the interaction effect of year by other for-profit measures the difference between PI and other for-profits in the change from 2003 to 2009, and (5) the interaction effect year by nonprofit measures the difference between PI and nonprofits in the change from 2003 to 2009.
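With unadjusted, balanced data, these five terms can be read directly off cell means. The sketch below is a simplified illustration, not the report's actual random effects estimation: it fits the five-term model by ordinary least squares to the unadjusted RN-ratio means for PI and other for-profit (FP) homes cited in the text; the nonprofit (NP) means are hypothetical.

```python
import numpy as np

# Cell means for RN hours per resident per day. The PI and other
# for-profit (FP) values reproduce the unadjusted means cited in the
# text; the nonprofit (NP) values are hypothetical, for illustration.
means = {
    ("PI", 0): 0.298, ("PI", 1): 0.398,   # PI change = 0.100
    ("FP", 0): 0.275, ("FP", 1): 0.307,   # FP change = 0.032
    ("NP", 0): 0.350, ("NP", 1): 0.400,   # hypothetical
}

# Design matrix with an intercept plus the five model terms:
# [1, year, FP, NP, year*FP, year*NP]
X, y = [], []
for (group, year), m in means.items():
    fp, nonprofit = int(group == "FP"), int(group == "NP")
    X.append([1, year, fp, nonprofit, year * fp, year * nonprofit])
    y.append(m)
b = np.linalg.lstsq(np.array(X, float), np.array(y), rcond=None)[0]

print(round(b[1], 3))  # year main effect: PI change, 0.398 - 0.298
print(round(b[2], 3))  # FP main effect: FP minus PI in 2003, 0.275 - 0.298
print(round(b[4], 3))  # year-by-FP interaction: 0.032 - 0.100
```

In the report's actual analyses these terms were estimated with random effects panel regressions that included control variables, so the coefficients no longer equal raw differences of means; the sketch only illustrates how the five terms are interpreted.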
Table 4 shows the results of our panel regression analysis of reported RN ratios without including control variables—that is, the coefficients and associated P-values for tested comparisons. With unadjusted data, the coefficients calculated by the panel regression can be calculated directly from the means in table 3. For example, the coefficient shown in table 4 for the difference between other for-profit homes and PI homes in 2003 is -0.023, which is the difference between the relevant means shown in table 3: 0.275 minus 0.298. As another example, the coefficient shown in table 4 for the change from 2003 to 2009 for PI homes is 0.100, which is the change from 2003 to 2009 for PI homes shown in table 3. Similarly, the coefficient of -0.068 in table 4 indicates the difference in the change in RN ratio from 2003 to 2009 between other for-profit and PI homes and is equal to the difference between the change for other for-profit homes and the change for PI homes shown in table 3: (0.032 minus 0.100). In contrast, table 5 shows the results of a parallel panel analysis of the reported RN ratios using the same independent variables described above, but in this second analysis, we included our control variables. When the regression model includes control variables, coefficients cannot be calculated directly from means. The change in key results between table 4 and table 5 reflects the impact of control variables on RN ratios. For example, when we controlled for these variables, we found that the average reported RN ratios for PI homes did not differ significantly from those of other for-profit homes in 2003. To examine differences between means that were not directly addressed in our panel regressions, we conducted chi-square tests. For example, after applying our panel regressions, we used chi-square tests to determine whether there were significant differences between other for-profit and nonprofit homes. Deficiencies.
To apply a panel regression model to deficiencies, we first examined the data to select an appropriate statistical model and ensure that the data were consistent with relevant statistical assumptions. Our measure of total deficiencies was a count of how many deficiencies were cited in the nursing home. Count variables can be modeled by a negative binomial regression. Coefficients from a negative binomial model represent the expected log-count of an event and can be transformed into incidence-rate ratios, which represent how much more or less the expected incidence rate is for one group in comparison to another. In this report, we refer to these ratios as total deficiencies. When we examined the data regarding whether a home was cited for a serious deficiency or not, we determined that a different panel regression model was most appropriate. Because a relatively small proportion of nursing homes were cited for serious deficiencies, and most homes with any serious deficiency had no more than two, our measure was whether or not a home had been cited for any serious deficiencies. For such binary outcomes, a logistic regression model is appropriate. Logistic regression model coefficients represent log-odds ratios and can be transformed to odds ratios, which indicate how much more or less likely the odds are for a binary (yes/no) event to occur for one group in comparison to another. In this report, we refer to these ratios as the likelihood of a serious deficiency. Nurse staffing. After excluding nursing homes with staffing ratios that appeared to represent data entry or other reporting errors, the distribution of each staffing ratio approximated a normal distribution, so we used an Ordinary Least Squares panel regression model to analyze these data. Financial performance.
After excluding nursing homes with extreme values, the distributions of facility costs per resident day and capital-related costs per resident day were highly positively skewed, that is, they were not distributed normally or symmetrically around the average. We transformed these variables by taking their natural logarithms; the resultant distributions were consistent with the relevant statistical assumptions. We used Ordinary Least Squares panel regression models to analyze the log-transformed values. After excluding nursing homes with extreme values, facility margins approximated a normal distribution, so we used an Ordinary Least Squares panel regression model to analyze the data. We conducted two additional regression analyses of facility margins in which we controlled for case mix (the average acuity of the residents in a nursing home) and other sources of revenue (such as home health or hospice care). We do not report these analyses because each variable was correlated with payer mix and controlling for them did not increase the amount of variability that was accounted for by our models.
[Summary tables comparing values (a) and (b) for each outcome measure, including RN ratio and facility margins, are not reproduced here; only their notes follow.]
Notes. Data were adjusted to control for the influence of chain affiliation, payer mix, facility size, occupancy rate, market competition, and state so that one can make comparisons holding these other variables constant. Cell entries indicate the relationship between two values, labeled (a) and (b) in the first column. There were three possible relationships between the two values: If (a) was significantly higher than (b), the cell contains a +; if (a) did not differ significantly from (b), the cell is blank; and if (a) was significantly lower than (b), the cell contains a -. Our standard for statistical significance was p < .05.
Data regarding deficiencies and nurse staffing were from 2009; data regarding financial performance were from 2008. In addition, to determine whether there were systematic differences among nursing homes owned by PI firms in outcomes we studied, we conducted a series of analyses in which we separately compared each of five PI firms’ homes to all other PI-acquired nursing homes in our study. We restricted our analyses to those homes for which we could identify both the PI owner of operations and real estate and those PI firms for which we determined we had data from a sufficient number of homes.
 For three PI firms’ homes, the same PI firm acquired both operations and real estate.
 For two PI firms that acquired the nursing home operations, a different PI firm acquired the real estate.
In each of five separate analyses, we compared the homes owned by a PI firm to all other PI homes in our larger aggregate analysis, including homes owned by the other firms we studied and any other homes owned by that PI firm (e.g., those for which we could not identify the real estate owner). Again, we statistically controlled for other variables that may influence deficiencies, staffing, and financial performance. Unless otherwise specified, all results that we present were statistically significant at the 0.05 level in analyses of adjusted data. To better understand differences among the nursing homes owned by these PI firms, we also interviewed representatives of PI firms that acquired nursing home operations, real estate, or both, and representatives of companies that operate PI-owned homes and, if their homes were part of our firm-level analyses, we discussed the results for their homes. There are several important limitations to our findings: The results of our analyses cannot be generalized beyond the PI-acquired nursing homes in our review.
In addition, the differences between PI-acquired and other nursing homes that we observed cannot necessarily be attributed to PI ownership because they may have been caused by other uncontrolled and unquantified variables, such as specific characteristics of the particular sets of homes or particular PI firms in our review or the fact that these homes changed ownership, rather than the effect of PI ownership per se. Moreover, although our data for homes that were acquired by PI firms came from before and after the PI firm acquired them, we cannot assume that any difference we observed between the data from 2003 and the data from 2008 or 2009 was due to acquisition by the PI firm because other things could have occurred between those years. For example, changes we observed could have occurred after 2003, but before acquisition by the PI firm. In addition, each of our measures has limitations: PI ownership. Our sample of PI-acquired homes did not include all PI-owned homes. Specifically, to compare data from before and after acquisition by a PI firm, we excluded PI-owned homes that were acquired before or after our target acquisition interval. Moreover, the 10 PI firms in our sample acquired about 94 percent of the nursing homes that were acquired by PI firms from 2004 through 2007; we could not identify the other approximately 6 percent of PI-acquired nursing homes, and as a result, some homes that we classified as other for-profit or nonprofit homes may have been PI-owned. Deficiency data. We have previously documented inconsistencies in states’ citation of deficiencies. Our analyses controlled for variation across states, but may not have captured all variation associated with state surveys. In addition, deficiency data provide incomplete information about quality of care.
Although cited deficiencies indicate problems with the quality of care that were identified during a survey, the absence of cited deficiencies does not necessarily indicate that the quality of care was good because surveyors may have failed to identify and cite actual quality problems. Staffing data. Although OSCAR was the most suitable data source available for our analyses, OSCAR staffing data have several limitations. First, OSCAR provides a 2-week snapshot of staffing and a 1-day snapshot of residents at the time of the survey, so it may not have accurately depicted a facility’s staffing or number of residents over a longer period. Second, staffing is reported across the entire facility, while the number of residents is reported only for Medicare- and Medicaid- certified beds; as a result, our calculations may have overstated staffing ratios for homes with noncertified beds. Third, neither CMS nor the states regularly attempt to verify the accuracy of the OSCAR staffing data, and at least some studies question these data. For example, research in one state suggested systematic inaccuracies, with larger and for-profit homes being more likely to report higher levels of RN staffing in OSCAR than in their audited state Medicaid cost reports. Financial data. Although Medicare cost reports provided the most suitable data for our analyses, they are not routinely audited and are subject to minimal verification, so they may contain inaccuracies. Since the implementation of the Medicare prospective payment system (in 1998 for SNFs), providers are no longer reimbursed directly on the basis of costs, and some have raised concerns that the quality and level of effort providers put into accurately completing Medicare cost reports may have eroded. In addition, the Medicare program limits the amount of capital- related costs that may be reported—for example, by limiting the reporting of certain financing costs associated with acquisition of a facility. 
If a provider’s financing costs exceed these limits, the provider’s full financing costs cannot be reported. As a result, a portion of the providers’ reported margins may be needed to offset these unreported financing costs. Also, for about one-third of PI homes, our 2008 financial performance data are from less than 1 year after acquisition. Thus, our postacquisition time period may not fully capture any impact of PI ownership on the home’s financial performance. Despite these limitations, our analyses do provide a reasonable basis for comparing deficiencies, nurse staffing, and financial performance of the PI-owned homes we studied to each other and to other types of nursing homes at two points in time. We reviewed all data for soundness and consistency and determined that they were sufficiently reliable for our purposes. We performed data reliability checks on the list of PI homes we compiled, OSCAR, Medicare’s Provider of Services, and Medicare SNF cost report data we used, reviewed relevant documentation, and discussed these data sources with knowledgeable officials and industry experts. We also reviewed published research on the quality and costs of nursing home care, our prior work on nursing homes, and other relevant documentation. We interviewed officials from CMS; representatives of PI firms that acquired nursing home operations, real estate, or both; representatives of companies that operate PI-owned nursing homes; and experts on nursing home quality and costs. In addition to the contact name above, Walter Ochinko, Assistant Director; Dae Park, Assistant Director; Kristen Joan Anderson; Jennie Apter; Ramsey Asaly; Leslie V. Gordon; Dan Lee; Jessica Smith; and Sonya L. Vartivarian made key contributions to this report. Nursing Homes: More Reliable Data and Consistent Guidance Would Improve CMS Oversight of State Complaint Investigations. GAO-11-280. Washington, D.C.: April 7, 2011. 
Nursing Homes: Complexity of Private Investment Purchases Demonstrates Need for CMS to Improve the Usability and Completeness of Ownership Data. GAO-10-710. Washington, D.C.: September 30, 2010. Poorly Performing Nursing Homes: Special Focus Facilities Are Often Improving, but CMS’s Program Could Be Strengthened. GAO-10-197. Washington, D.C.: March 19, 2010. Nursing Homes: Addressing the Factors Underlying Understatement of Serious Care Problems Requires Sustained CMS and State Commitment. GAO-10-70. Washington, D.C.: November 24, 2009. Nursing Homes: Opportunities Exist to Facilitate the Use of the Temporary Management Sanction. GAO-10-37R. Washington, D.C.: November 20, 2009. Nursing Homes: CMS’s Special Focus Facility Methodology Should Better Target the Most Poorly Performing Homes, Which Tended to Be Chain Affiliated and For-Profit. GAO-09-689. Washington, D.C.: August 28, 2009. Nursing Homes: Federal Monitoring Surveys Demonstrate Continued Understatement of Serious Care Problems and CMS Oversight Weaknesses. GAO-08-517. Washington, D.C.: May 9, 2008. Nursing Home Reform: Continued Attention Is Needed to Improve Quality of Care in Small but Significant Share of Homes. GAO-07-794T. Washington, D.C.: May 2, 2007. Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Skilled Nursing Facilities: Medicare Payments Exceed Costs for Most but Not All Facilities. GAO-03-183. Washington, D.C.: December 31, 2002. Skilled Nursing Facilities: Available Data Show Average Nursing Staff Time Changed Little after Medicare Payment Increase. 
GAO-03-176. Washington, D.C.: November 13, 2002. Skilled Nursing Facilities: Providers Have Responded to Medicare Payment System by Changing Practices. GAO-02-841. Washington, D.C.: August 23, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Homes: Aggregate Medicare Payments Are Adequate Despite Bankruptcies. GAO/T-HEHS-00-192. Washington, D.C.: September 5, 2000. Skilled Nursing Facilities: Medicare Payment Changes Require Provider Adjustments but Maintain Access. GAO/HEHS-00-23. Washington, D.C.: December 14, 1999. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
Private investment (PI) firms' acquisition of several large nursing home chains led to concerns that the quality of care may have been adversely affected. These concerns may have been in part due to PI firms' business strategies and their lack of financial transparency compared to publicly traded companies. In September 2010, GAO reported on the extent of PI ownership of nursing homes and firms' involvement in the operations of homes they acquired. In this report, GAO examined how nursing homes that were acquired by PI firms changed from before acquisition or differed from other homes in: (1) deficiencies cited on state surveys, (2) nurse staffing levels, and (3) financial performance. GAO identified nursing homes that had been acquired by PI firms from 2004 through 2007 and then used data from CMS's Online Survey, Certification, and Reporting system and Medicare Skilled Nursing Facility Cost Reports to compare these PI homes to other for-profit and nonprofit homes. For PI-acquired homes, GAO also compared homes for which the operations and real estate were owned by the same firm to those that were not. Because research has shown that other variables influence deficiencies, staffing, and financial performance, GAO statistically controlled—that is, adjusted—for several factors, including the percent of residents for whom the payer is Medicare, facility size, occupancy rate, market competition, and state. Any differences GAO found cannot necessarily be attributed to PI ownership or acquisition. On average, PI and other for-profit homes had more total deficiencies than nonprofit homes both before (2003) and after (2009) acquisition. PI-acquired homes were also more likely to have been cited for a serious deficiency than nonprofit homes before, but not after, acquisition. Serious deficiencies involve actual harm or immediate jeopardy to residents.
From 2003 to 2009, total deficiencies increased and the likelihood of a serious deficiency decreased in PI homes; these changes did not differ significantly from those in other homes. Reported average total nurse staffing ratios (hours per resident per day) were lower in PI homes than in other homes in both 2003 and 2009, but the staffing mix changed differently in PI homes. Staffing mix is the relative proportion of registered nurses (RN), licensed practical nurses (LPN), and certified nurse aides (CNA). RN ratios increased more from 2003 to 2009 in PI homes than in other homes, while CNA ratios increased more in other homes than in PI homes. The increase in RN ratios in PI homes from 2003 to 2009 was greater if the same PI firm acquired both operations and real estate than if not. The financial performance of PI homes showed both cost increases from 2003 to 2008 and higher margins in those years when compared to other for-profit or nonprofit homes. Facility costs as well as capital-related costs for PI homes increased more, on average, from 2003 to 2008 than for other ownership types. The increase was less if the same PI firm acquired both the operations and real estate than if it did not. In 2008, PI homes reported higher facility costs than other for-profit homes (but lower costs than nonprofit homes) and higher capital-related costs than other ownership types. Despite increased costs, PI homes also showed increased facility margins and the increase was not significantly different from that of other for-profit homes. In contrast, the margins of nonprofit homes decreased. Although the acquisition of nursing homes by PI firms raised questions about the potential effects on quality of care, GAO's analysis of data from before and after acquisition did not indicate an increase in the likelihood of serious deficiencies or a decrease in average reported total nurse staffing. 
The performance of these PI homes was mixed, however, with respect to the other quality variables GAO examined. We found differences among PI-acquired homes that reflected management decisions made by the firms and, to varying degrees, some of the changes in the PI firms we studied were consistent with attempts to increase their homes' attractiveness to higher paying residents. HHS provided CMS's observations on our methodology. CMS suggested an alternative to our "before and after" acquisition methodology to take into account the fact that PI firms acquired nursing homes at different points in time during 2004 through 2007. One of the studies we cited used such a methodology and we believe that the use of different methodologies enhances the understanding of an issue. CMS also identified a number of additional approaches for exploring the relationship between PI ownership and quality. We agree that such approaches merit future attention. CMS also acknowledged that the report is an important step toward better understanding the effect of nursing home ownership on the quality of care provided to residents.
Child care outside the home can take place in different settings: centers, family child care homes, and relatives’ homes. Centers are usually large facilities that typically care for more than 13 children and are located in schools, churches, office buildings, and the like. In contrast, family child care is offered by individuals in their homes to a small number of children—usually fewer than six. These providers can be neighbors, friends, or someone families learn about through friends or advertisements. Relative care is care provided by a person related to the child other than a parent. The flexibility of family child care makes it an attractive choice for parents. In contrast to most centers, family child care providers accept infants and young toddlers. Approximately 23 percent of employed women use family child care for children between the ages of 1 and 2, while 20 percent use it for children under 1. Family child care providers also usually have longer hours, may provide weekend and evening care, and may accommodate the hours of parents working shifts. They are also more likely to offer part-time care. These features are important to many less skilled and lower paid employees, who tend to work shifts or other nontraditional schedules. Part-time care is useful for those in the type of job-training activities in which AFDC mothers participate. Hence, family child care is a frequent choice among low-income families. Between 18 and 20 percent of children under age 5 of poor, single, working mothers are in family child care. Whether provided in centers or in family child care settings, quality care is care that nurtures children in a stimulating environment, safe from harm. Research has documented the elements of care that are associated with quality. 
They include providers trained in areas such as early childhood development, nutrition, first aid, and child health; small groups and low child-to-staff ratios; low staff turnover; a variety of age-appropriate materials; space that is safe and free from hazards; and settings that are regulated. Experts believe that characteristics such as these are good predictors of whether quality care is being provided. While only a small proportion of the research conducted in this area has focused specifically on quality in family child care settings, researchers believe that the same characteristics apply to any setting. For many years, researchers have known that child care quality, regardless of the setting, is important to all aspects of children’s development—physical, cognitive, emotional, and social. The quality of these settings in preschool years also has implications for children’s development and success later in school. However, new research documents to an even greater degree that how individuals function from preschool through adulthood “hinges, to a significant extent, on their experiences before the age of three.” Research has also shown that quality child care can be most beneficial to economically disadvantaged children. Factors associated with low-income families—minimal parental education, linguistic isolation, single-parenting—increase a child’s risk of doing poorly in school. Quality child care settings can help poor children overcome some of the environmental deficits they experience. While family child care providers in the United States generally have low child-to-staff ratios, they work in isolation from others, are generally not trained in early childhood development, and tend to be unregulated. Hence, the quality in family child care is considered by experts to be quite variable. 
These concerns about quality were recently highlighted by a study by the Families and Work Institute, which found that 35 percent of the family child care providers in its sample were giving inadequate care. Although family child care is used by many employed mothers with young children, states and localities generally do not regulate it as they do center care. One study estimated that approximately 82 to 90 percent of family child care in the United States is unregulated. Hence, many family child care providers operate legally but do not have to meet any standards to protect the children’s safety and health. Experts believe that meeting at least some minimal child care standards as a precondition to providing care is an important step in building quality into all child care settings. If a family child care provider wants to become registered or licensed, the process can sometimes be intimidating and costly, especially relative to the low wages most providers earn. Incentives to become registered or licensed are few; providers may encounter barriers and cannot be certain that meeting requirements that help them provide higher quality care will allow them to charge parents higher fees. Family child care providers also have difficulty getting the information and resources they need to run a successful business and to enhance the quality of care they provide. For instance, family child care providers may be unaware of child care training available in their communities because they usually are not part of a professional organization or linked to other networks that would keep them informed of training opportunities. If they do learn of such training, barriers may prevent them from participating, especially if they are low-income providers. Barriers include the cost of the training, training schedules that conflict with providers’ hours of operation, training tailored to center care rather than family child care, or language differences. 
As a result, while training, like regulation, is seen by experts as a critical element in improving the quality of child care, it can be difficult for family child care providers to obtain. Many organizations sponsor initiatives to improve the quality of family child care. While their goals, purposes, and approaches to working with providers may differ, an overarching goal of all these efforts is to support providers by developing their professionalism and enhancing the quality of care they provide. Organizations involved with this work include resource and referral agencies, community-based nonprofit organizations, cooperative extension agencies, and public agencies, to name a few. Some focus on one or two activities, such as training, connecting providers to information and resources about health issues, or helping providers get licensed. Others weave together many activities into a more comprehensive network of support. As discussed later in this report, the organizations put together funding from different sources, both private and public, to support their activities. Since we could not identify a single database that provided a comprehensive listing of initiatives targeted at improving the quality of family child care, we developed one through discussions with experts, a literature review, and an information request on the Internet. Our database, which consists of 195 family child care quality initiatives, was built primarily on the work conducted by the National Center for Children in Poverty, the Families and Work Institute, the National Council of Jewish Women, and MACRO International. By putting together these different information sources and adding information on other initiatives we found, we believe that we have constructed the largest single database of family child care quality improvement initiatives. However, we could not determine the extent to which our database represents the universe of initiatives nationwide. 
While the database contains information on a number of the initiatives’ characteristics, we used it primarily to determine the funding sources for each initiative. However, while all the initiatives identified their sources of funding, very few provided the amount of funding from each source. We conducted site visits at 11 initiatives in three states: Georgia, Oregon, and California. The sites, which were highlighted in the literature we reviewed or in our discussions with experts, were judgmentally selected. We also visited family child care programs for three branches of the military—the Army, Navy, and Air Force—at installations in Maryland and Washington, D.C. In addition, we (1) interviewed experts and officials from the Administration for Children and Families, the Head Start Bureau, and the Maternal and Child Health Bureau at HHS; the Department of Defense (DOD); and the Food and Nutrition Service at USDA; (2) reviewed the literature about issues in family child care; and (3) analyzed funding data gathered for our database. We performed our work between April and October 1994 in accordance with generally accepted government auditing standards. Our analysis of the 11 initiatives we visited showed three approaches used to foster quality care: (1) support networks; (2) training, recruitment, and consumer education initiatives; and (3) health initiatives. Regarding the last two categories, the initiatives described here employed more than one activity in working with providers; however, we designated them according to their key or primary activities. Appendix I describes each of the 11 initiatives we visited in detail. Characteristics and activities of the 195 initiatives in our database are shown in figures 1 and 2 (the number of providers participating in the initiatives and the services provided by the initiatives, respectively), and table 1 (the initiatives’ funding sources). 
Five initiatives we visited seek to create a support network for providers. Typically, support networks are part of an organization that, through a coordinator and staff, provides resources, support, and ongoing training to a group of family child care providers. For example, the Foundation Center for Phenomenological Research in California enrolls all of its family child care providers in the Montessori Teacher Education program. This program leads to the completion of requirements for the American Montessori Society diploma. Similarly, DOD’s family child care system has an extensive entry-level and ongoing training system. Support network staff usually make regular visits to provide technical assistance, bring supplies and toys, or conduct training. The network also assists providers in becoming registered or licensed. In addition, all five initiatives link their providers to USDA’s food program, which provides federal subsidies for nutritious meals and snacks served in child care facilities, including family child care homes, as long as the providers are state registered or licensed. The food program also provides regular training and monitoring visits. The five network initiatives also help or encourage providers to become members of local family child care associations or informal support groups. Given the large number of family child care providers, the development of associations—seen by experts as an important way to reach, support, and help train providers—is a key strategy in many initiatives focused on family child care. Research on child care quality shows that the types of activities support networks conduct contribute to enhancing the level of professionalism of the provider and, thus, improve the quality of child care. The funding for these initiatives comes from a full range of sources: private, state, and federal. 
Two of the initiatives we visited were solely federally funded: the Oakland Head Start Family Child Care Demonstration Project and DOD’s child care system. Three of the initiatives we visited—the Family-to-Family project, the California Child Care Initiative Project, and the Oregon Child Development Fund—focus on a combination of training and recruitment activities or training and consumer education. Additionally, the California and Oregon projects contain explicit and well-developed components for fundraising and disbursing money to various family child care projects across their states. (See app. I.) The Family-to-Family project focused on improving the quality of care in family child care settings in 40 communities nationwide (see app. I). The initiative was sponsored by the Dayton Hudson Foundation, the philanthropic arm of the Dayton Hudson corporation, which fully funded—typically through 2- and 3-year grants—all 40 sites and committed over $10 million to the effort. The initiative was built on a model that incorporated the following strategies: offering training to providers that was specifically tailored for family child care, promoting and supporting provider accreditation and professional associations, and contributing to local consumer education about selecting child care. The initiative identified an organization in each community that would be responsible for implementing and institutionalizing the strategies in the community during the life of the grant. It also launched a nationwide consumer education campaign to help parents recognize quality child care. In doing this, the initiative wanted to create a demand for quality care, thereby prompting the child care market to supply it. We visited one of the initiative’s first sites, located in Salem, Oregon. Staff involved with the project told us that before the Family-to-Family initiative, little work had been done with family child care in the state. 
For example, Oregon had only a voluntary registration system for family care providers, and provider associations were not very strong or active. According to the staff, the initiative acted as a catalyst in building supports for family child care as evidenced by the birth of the Oregon Child Development Fund, development of a statewide resource and referral system, and state enactment of minimum requirements for family child care settings. The California initiative and the Oregon fund also focus on training and recruitment and, as mentioned earlier, have successful fundraising components. These initiatives use a five-part model that consists of assessing community child care needs, recruiting providers to meet those needs, offering technical assistance so providers can become licensed, providing ongoing training to providers, and giving them ongoing support. These components are implemented by a statewide resource and referral system. However, it became apparent early in the initiatives’ development that more funding was essential to carry out the model, particularly to support the recruitment, training, and networking activities of the various family child care projects. By continually developing funding partnerships with local and nationwide businesses, foundations, and governments, the California initiative has raised $6.8 million in the last 9 years to fund its family child care projects. The Oregon Child Development Fund, which is a replica of the California initiative, was first funded in 1990. Currently, it has raised $500,000, which it leveraged into an additional $1 million for family child care projects in the state. Three of the initiatives we visited were health initiatives that focus on family child care. While their purposes encompass a number of specific goals and objectives, in the broadest sense, all aim at increasing the health and safety practices in family child care homes. 
Two of the three also have increasing the immunization rates of children in family child care as one of their objectives. All three initiatives plan to use an education strategy to inform providers of health and safety practices and to help link them to other resources. For example, an initiative we visited in Hood River, Oregon, uses two county health departments and the local child care resource and referral agency to provide consultations on health, nutrition, and other related issues to family child care providers in those counties. The health departments provide a public health nurse who makes home visits to providers, answers questions over the telephone, and conducts training sessions on health and nutrition issues. Two of the health initiatives are funded with federal grants from the Maternal and Child Health Services Block Grant. The block grant is administered by the Maternal and Child Health Bureau in HHS. The third initiative receives CCDBG money to fund most of the project; it also uses some immunization planning funds that states receive from the Centers for Disease Control and Prevention, which is part of HHS. The federal government’s role in child care has been primarily one of helping parents pay for child care. For example, of the seven major sources of federal support for child care, six have the primary purpose of subsidizing the cost of care for parents. The seven programs are the (1) Dependent Care Tax Credit, (2) Social Services Block Grant, (3) Child and Adult Care Food Program, (4) Child Care for AFDC, (5) Transitional Child Care, (6) At-Risk Child Care, and (7) CCDBG. Total federal support for these programs amounted to approximately $8 billion in fiscal year 1993. Of the $8 billion, approximately $156 million was for quality support activities, such as training and monitoring, in all types of child care settings. (How much of this amount goes exclusively to quality initiatives for family child care could not be determined.) 
The largest amount of indirect federal support for child care—$2.4 billion in fiscal year 1993—is provided through the Dependent Care Tax Credit, which working individuals claim through the tax code. The remaining programs provide direct federal funding to states for child care to be used for the allowable activities established by each funding stream. Table 2 provides more information about these programs. While the tax credit is primarily used by families earning above $20,000 a year, four of the recent federal programs are aimed at poor families: AFDC Child Care, Transitional Child Care, At-Risk Child Care, and CCDBG. These programs are designed to help welfare recipients and working poor families achieve economic self-sufficiency by giving them assistance with child care. Enacted through the 1988 Family Support Act and the 1990 Omnibus Budget Reconciliation Act, these programs made approximately $1.7 billion available to the states in fiscal year 1993. Again, the primary purpose of these programs is to subsidize the cost of child care. The primary purpose of USDA’s Child and Adult Care Food Program is to subsidize the cost of nutritious meals for children in various care settings. It also provides other support such as training and monitoring to providers who become licensed or registered. Unlike the other federal child care programs, USDA food program subsidies received by family child care providers are not exclusively for poor children. The most frequently used source of federal funds to support quality enhancement initiatives in family child care was CCDBG. Eighty of the 195 initiatives in our database, or 41 percent, received CCDBG funds. Unlike other federal child care funding, which only provides subsidies, CCDBG sets aside a small amount of money—5 percent of a state’s total CCDBG grant—that the state is required to spend on quality improvement activities in all types of care settings. 
For 1993, this would have amounted to approximately $43 million. The allowable activities include some of those provided by the initiatives we visited: training providers, supporting resource and referral agencies, improving licensing and monitoring activities, improving compensation for providers, and helping providers meet state and local child care regulations. While CCDBG quality improvement money must be used for these activities, it is money that is flexible (that is, it is not targeted for a certain population) and accessible to many organizations (that is, different types of groups can apply for it). The other federal funding source most often used to support quality initiatives for family child care was USDA’s Child and Adult Care Food Program. Fifty-eight of the 195 initiatives in our database, or about 30 percent, received food program money. In addition to providing subsidies to family child care providers for nutritious meals and snacks, the program also provides administrative money to the organizations that sponsor the providers. This money goes to supporting staff who train providers on the required nutritional guidelines children’s meals must meet under the program, make periodic monitoring visits, and provide technical assistance to plan menus and fill out reimbursement paperwork. Providers must be state licensed or registered to participate. Because of its unique combination of resources, training, and oversight, experts believe the food program is one of the most effective vehicles for reaching family child care providers and enhancing the care they provide. While federal sources other than CCDBG and USDA’s food program were used by different initiatives for promoting quality in family child care, these sources were used less frequently. We found 43 out of 195 initiatives—22 percent—received funding from other federal sources. 
These funds were from at least five different programs: the AFDC Child Care program money authorized under the Family Support Act and administered by HHS; the Community Development Block Grant and Public Housing Demonstration Grants administered by the Department of Housing and Urban Development; the Cooperative Extension Service, a USDA program; and the Maternal and Child Health Services Block Grant administered by HHS. These funds tend to be more restricted than CCDBG and USDA food program funds. For example, we found a few initiatives using the AFDC Child Care program money to support their activities, but most of the money was used to subsidize the cost of child care and was only available to these particular initiatives because they served children of AFDC recipients. Similarly, the Community Development Block Grant money for family child care quality initiatives is only available in communities that receive funds from that block grant and then only if the communities have targeted family child care as a priority. In addition to federal money, private dollars have played a major role in funding these initiatives. Private funding came from a variety of sources, including foundations, endowments, businesses, charities, fundraising, and user fees. Of the 195 initiatives in our database, 107, or almost 55 percent, received money from at least one private source; 43 initiatives, or approximately 22 percent, received money only from private sources. For example, two initiatives we visited—the Neighborhood Child Care Network and the Family-to-Family initiative—were originally funded by a large foundation and a private business, respectively. Two other initiatives mentioned earlier, the Oregon Child Development Fund and the California Child Care Initiative, built and manage a funding supply for family child care initiatives in these states. 
The Oregon fund is financed entirely with private dollars, and only 7 percent of the $6.8 million that the California initiative raised in the last 9 years was federal money. There is growing evidence that the environment in which children grow plays a vital role in supporting or impeding their healthy development. Research shows that children learn from birth—long before they are actually in a classroom—and that their success or failure in that classroom can be, in part, tied to their early environment. Given that many children, especially very young children, are spending significant parts of their day in child care, communities, experts, and policymakers are asking questions about the quality of that care. Experts have had long-standing concerns about the quality of child care in the United States for all types of settings. In light of these concerns, the initiatives we found were engaged in strategies and activities to improve the quality of family child care by providing networks of support and other resources. They gave family child care providers ongoing training, linked them to information and resources, helped them to become registered and to join the USDA food program, provided access to toy-lending libraries, and supported them with staff who made home visits to provide various types of help. Again, research tells us that such activities can significantly enhance the quality of care children receive. Many welfare reform discussions outline plans to require more AFDC recipients to either work or be in education or training programs to help them acquire basic skills for supporting their families. As a result, the number of children needing child care—particularly very young children—is predicted to grow. Since family child care is the choice of a significant proportion of poor families with infants and toddlers, its use is also predicted to grow under various welfare reform scenarios. 
Given that research shows that quality child care settings particularly benefit poor children, the need for quality in this care will also grow. At your request, we did not obtain written agency comments. However, we discussed our findings with agency officials who generally agreed with the information presented in this report. We are sending copies of this report to the Secretary of Health and Human Services, the Secretary of Agriculture, and to other interested parties. We will make copies available to others on request. Major contributors to this report are listed in appendix II. If you have any questions concerning this report or need additional information, please call me on (202) 512-7215. This appendix contains brief descriptions of the 11 initiatives we visited, including information on the strategies used, the sponsoring organization, the amount of funding received, and the number of providers served by the initiative. The 11 descriptions are categorized as support networks; health initiatives; and training, recruitment, and consumer education initiatives. The Neighborhood Child Care Network, an initiative sponsored by Save the Children in Atlanta, started as a national demonstration project funded by the Ford Foundation. The Network’s goal is to improve the quality and availability of family child care for low-income parents. It has set out to demonstrate what urban communities can do to address child care issues through community organizing and formal and informal training of providers. The Network supports 60 family child care providers in the communities it serves. 
The Network’s support includes lending libraries from which its providers can borrow books, equipment, and toys; regular home visits from child care specialists who conduct one-on-one training with providers and discuss relevant child care topics such as child development and safety and health issues; assistance with joining the USDA food program, with record keeping, and with other business aspects; monthly training workshops and newsletters that list other training opportunities; scholarships to attend training conferences; and assistance in forming family day care provider associations and obtaining national accreditation. In 1992, the Network expanded its activities to include services for the parents in its family child care network. Through a grant from the A.L. Mailman Family Foundation and Primerica, its Parents Service Project uses family child care homes as the parents’ point of entry for delivery of various social services. The Network was funded from 1987 through 1990 with grants from the Ford Foundation that totaled approximately $300,000. Since then, it has received a total of approximately $120,000 in CCDBG money, which has required the Network to curtail some services. Save the Children is an international nonprofit organization whose mission is to improve the lives of poor children and their families. It was founded in 1932 and works in Appalachia, in several southern states, and selected inner-city areas as well as in 43 other countries. The Foundation Center for Phenomenological Research is a nonprofit organization formed in 1974 to help small community organizations strengthen their operations. In 1980, it won its first contract to run a state-funded child care program; currently it runs child care programs in approximately two dozen locations, primarily in California. 
The site we visited was its Sacramento Delta and Ilocer Migrant and Seasonal Farmworker Family Child Care Project, which supports 20 providers serving approximately 160 children from migrant agricultural workers’ families. The goal of the Foundation Center is to provide quality child care to infants, toddlers, and preschoolers and their families and to improve the children’s school readiness and long-term academic achievement. The Foundation Center provides health services to the children and their families and a full-day education program for the children, and also supports family child care providers. The Foundation Center gives providers employment benefits, including sick and vacation leave, and health insurance; recruits and places eligible children in providers’ homes, helping to complete paperwork requirements for child care funding and USDA’s food program; provides training in the providers’ native languages using the Montessori curriculum so that providers can earn the American Montessori Society teaching credential; and equips each provider’s home with culturally and developmentally appropriate furniture, materials, and toys. Additionally, all children and their families receive free yearly health exams, immunizations, medications, referrals, and follow-up, and are linked to other social services they may need. The Foundation Center’s family child care projects are funded with state dollars through California’s General Child Care funds. The only federal assistance the Foundation Center receives is as a food sponsor through USDA’s food program. It receives a total of approximately $9 million a year from these sources to serve 2,300 children at 20 sites, including family child care projects, in 9 California counties. In 1992, HHS began a demonstration project to determine if family child care could be a viable way to deliver the comprehensive services that are required of Head Start programs. 
Currently, HHS has funded, for 3 years, 17 Head Start Family Child Care Demonstration Project sites across the country. The demonstration, which includes only 4-year-olds, requires family child care providers to meet the Head Start Performance Standards. At the project site in Oakland, California, the low-income families who participate must be working or in an education or training program, thus requiring more than the half-day services traditionally provided by Head Start centers. All providers in the family day care project offer full-day and year-round care, a primary reason that Oakland applied for the demonstration project. City officials were finding that more and more of the child care needs of their low-income families could not be met with centers that operated only half the day. The 7 providers participating in the Oakland project care for approximately 40 children. Head Start family child care providers participating in the Oakland demonstration received 40 hours of preservice training in 1993 and 80 hours in 1994. After the preservice training, they attend training once a month. In addition, providers receive weekly visits from a child care specialist. These visits, which last from 20 minutes to a few hours, allow the specialist to observe the provider and children, deliver supplies and materials, link the provider with the other Head Start coordinators, and support the provider in other ways. Head Start is a fully federally funded program administered by the Head Start Bureau at HHS. While Head Start of Lane County is a federal Head Start grantee, its family child care model—which uses family child care providers to serve Head Start-eligible children—is funded by the Oregon Pre-Kindergarten Program. The state program, which is a replica of the federal Head Start program, was begun in 1990 as a way to serve more low-income children in a Head Start model. 
Lane County Head Start officials decided to use family child care providers when they identified a need to provide Head Start services in two rural areas of their county where no Head Start centers were located. At the time of our visit, the program had 20 providers serving 80 children between the ages of 3 and 5. For 1993-94, Lane County Head Start received a state grant of approximately $292,000 to administer the program. While this model is funded with state dollars, the family child care providers are treated as Head Start teachers and, as in the Oakland Head Start Demonstration Project, the care they provide must meet Head Start standards. During 1993-94, each family child care provider received approximately 75 hours of training. Providers also receive visits at least once a week from their Head Start trainer, who works with the providers and the children in the providers’ homes. Because they are part of the Head Start program, the providers are also linked with all the Head Start specialists who work with the children and parents enrolled in the center program. The family child care model will not be continued in 1994-95, however, because of a reorganization by the grantee, which needs time to focus on its center-based program. Lane County Head Start officials told us that they hope to resume the program in the future. As the largest employer in the United States, the military has experienced the same demographic trends in its workforce as other employers: increases in both the number of married personnel with spouses in the workforce and the number of single parents. Because of its flexibility to support the varying work hours of service personnel and to accommodate parental deployment with long-term care, family child care was seen as a viable way to meet the needs of military families. As a result, the four service branches have developed a comprehensive family child care system. 
DOD’s family child care model contains the same elements other support network initiatives do—ongoing training for providers; visits by home monitors; placement of children; and access to equipment, supplies, and other resources. However, DOD’s system has notable differences, too: the huge organization that sponsors it; the large number of providers it supports (over 12,000 worldwide); the amount of authority it has to screen and monitor providers because they reside in military housing; and the full federal funding it receives. Intensive screening of potential providers and extensive ongoing training for those accepted into DOD’s network are two components of its model that stand out. Orientation sessions are held for prospective providers to familiarize them with the requirements for providing family child care on a base or installation. After the orientation session, the military begins its process of certifying both the provider and the provider’s home. This involves yearly background checks on the provider and members of the household over the age of 12; in-home interviews with the provider and family members; a health, fire, and safety inspection of the home; and quarterly home monitoring visits. Training for providers includes orientation, initial, and annual training requirements. Providers must complete orientation training before working with children; it covers topics such as child health and safety, age-appropriate discipline, and applicable child care regulations. Once hired as a family child care provider, an individual must complete a minimum of 36 hours of initial training within 6 months of being hired. This training provides more in-depth coverage of topics such as nutrition, cardiopulmonary resuscitation, and child development. After this, providers must complete a minimum number of hours of ongoing training each year; the requirements differ for each service branch. 
The Atlanta Family Child Care Health and Safety Project, conducted by Save the Children’s Child Care Support Center, is a 3-year project running from October 1993 through September 1996 that is designed to address the increased health and safety risks faced by children in family child care. HHS is providing $300,000 for the project through the Maternal and Child Health Services Block Grant administered by the Maternal and Child Health Bureau. The project’s first goal is to improve the existing system of training and support for child care providers. To accomplish this, project staff will refine an existing health and safety checklist for child care providers and develop educational materials for parents and child care providers that discuss, among other things, safety and health issues in a family child care setting. In addition, project staff will conduct a study of a group of family child care providers to identify barriers they face in meeting health and safety standards, as well as barriers to training and other support. Staff will also explore methodologies for collecting information on injuries and illnesses occurring in family child care settings. (Currently, injury and illness data in child care settings are gathered only for center care.) This research will provide useful information for designing training programs and educational materials on health and safety issues specifically tailored for family child care. The second goal, which is not exclusively focused on safety and health issues, is to bring unregistered family child care providers into the system of registration, training, and support. 
Project activities related to this goal include increasing provider registration, particularly through registering providers who take care of subsidized children; enrolling providers in USDA’s food program; listing providers with child care resource and referral services; assisting providers in meeting health, safety, and training requirements; and encouraging participation in professional provider associations. Oregon is one of the four states selected to pilot the implementation of guidelines developed by the American Public Health Association (APHA) in conjunction with the American Academy of Pediatrics. A 1-year demonstration project, the Oregon APHA Project, is funded with $20,000 in Child Care and Development Block Grant (CCDBG) money provided by the state Child Care Division and $10,000 in Immunization Grant money provided by the state Department of Human Services, Health Division. The Immunization Grant is provided to states by HHS’ Centers for Disease Control and Prevention to help states plan and execute community immunization plans. The dual objectives for the demonstration project are to (1) form strong links with public health and other community organizations to establish a planned public health strategy to improve the overall health of children in child care settings and (2) increase the immunization rates of children in such settings. Three Oregon counties, Hood River, Sherman, and Wasco, are involved in the pilot. While the initiative has a number of objectives, those related to family child care include facilitating provider access to ongoing health promotion, protection, and education and giving child care providers home safety assessment tools and necessary child safety items such as safety latches, smoke alarms, and socket plugs. The project is using two county health departments and the local resource and referral agency to carry out the initiative. 
Through connections made by the resource and referral agency, a part-time public health nurse from the health departments consults with family child care providers on health and safety topics through home visits, phone calls, and training sessions organized by the resource and referral agency. The Family Day Care Immunization Project, sponsored by the Center for Health Training in San Francisco, is a 3-year demonstration project running from October 1993 through September 1996 funded by the Maternal and Child Health Bureau. Annual project funding is $100,000. The specific project goal is to improve immunization rates of children, especially low-income and ethnic minorities, from a sample of family day care homes. Objectives include (1) increasing the knowledge and practice regarding immunization screening for at least 24 health care consultants by September 30, 1994, and (2) developing and testing at least three distinct educational interventions with up to 120 providers to determine their effectiveness in increasing immunization rates and their comparative costs by September 30, 1996. Regarding the first objective, the Center plans to “train the trainers” to conduct training and site visits. Trainers are being recruited from agencies such as the Red Cross and California’s Department of Social Services. The interventions proposed for the second objective will use three groups: (1) one that will receive only notification letters of state immunization requirements, (2) one that will participate in a 3-hour training session, and (3) one that will receive a 1- to 2-hour site visit to provide information about immunizations. The project will determine which method is the most cost-effective for implementing California’s new law requiring immunizations in family day care settings. The Center is a private, nonprofit company that conducts health research and training and provides consulting services on health activities. 
The California Child Care Initiative Project was begun in 1985 to increase the supply of quality family child care statewide. Originally designed and initiated by the BankAmerica Foundation, the project is a public-private partnership that includes over 473 foundations, corporations, local businesses, and public sector funders. It has raised over $6 million for its mission. The project’s purpose is to fund community-based child care resource and referral agencies to (1) recruit and train new family day care providers and (2) provide start-up and ongoing assistance to help them stay in business. The California Child Care Resource and Referral Network oversees the project’s daily operations and manages its publicity and fundraising activities. The project’s successful and effective fundraising component makes it unique among the initiatives we visited. The Network continually raises funds in the private and public sectors and also coordinates the state of California’s contribution of up to $250,000 per year, matching $1 for every $2 raised from private businesses and federal and local governments. Overall, the project has recruited 3,887 new, licensed family child care homes, making 15,303 new child care spaces available for children of all ages. Since the initiative began, over 25,891 family child care providers have received basic and advanced training in providing quality child care. Because of its success, the project is being replicated in Oregon (see the next section), Illinois, and Michigan. The Portland-based Oregon Child Care Initiative, which is a replica of the California Child Care Initiative, was incorporated to solicit funds from corporate, foundation, and private sources to encourage solutions to family child care issues in Oregon. The primary mission at its inception was to increase access to stable and quality family child care. 
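As a rough illustration, the state matching arrangement described above works out to simple arithmetic. The helper function below is hypothetical; only the $1-for-$2 ratio and the $250,000 annual cap come from the text:

```python
def california_match(raised_dollars):
    """State matching contribution under the California Child Care
    Initiative Project: $1 for every $2 raised from private businesses
    and federal and local governments, capped at $250,000 per year."""
    return min(raised_dollars // 2, 250_000)
```

So $300,000 raised in a year would draw a $150,000 state match, while anything raised above $500,000 hits the annual cap.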
Efforts to accomplish this broad goal included using proven provider recruitment, training, and retention programs first developed under the California model. In 1992, the initiative evolved into the Oregon Child Development Fund with a broader mission of increasing access to stable, high-quality child education and child care services by concentrating fundraising and distribution in four areas: training and recruitment, consumer education, capital expansion, and accreditation scholarships. As with the California initiative, the Oregon project’s funding mechanism is one of its distinctive components. The Oregon project was originally funded by the Ford Foundation in 1990 with actual start-up in 1991. Currently, it has raised $500,000 in grant funding, which it has leveraged into an additional $1 million in local and state support. According to a representative of the fund, the project is entirely supported by private or business donations. Between 1990 and 1993, the initiative recruited 3,000 family child care providers, trained 3,400 family child care providers, created 18,000 child care slots, and awarded 21 scholarships to providers seeking National Association of Family Child Care accreditation or Child Development Associate credentialing. The Family-to-Family initiative was funded by the Dayton Hudson Foundation, the philanthropic arm of the corporation that owns Mervyn’s and Target department stores throughout the Midwest, Northwest, and California. In 1988, the corporation’s executives became concerned about the difficulty employees were having in finding quality family child care and the limited information parents had to identify quality child care. Through its corporate foundation, Dayton Hudson initiated a nationwide campaign to address these issues. 
The strategy was to promote training, accreditation, and consumer education at selected sites through a collaborative effort with community-based organizations so that these efforts would continue after the initiative ended. The first four sites funded by the initiative were in Oregon; we visited the Salem site. With a $250,000, 2-year grant from Dayton Hudson and through two partners in the community—a community college and the local resource and referral agency—the initiative established a structured training program for family child care providers, promoted and assisted with accreditation, and began a statewide consumer education campaign. In addition, the initiative established a provider council and toy- and equipment-lending libraries for providers. The council was important to help develop provider leadership in the community and to create a forum at which family child care issues could be discussed and strategies could be developed to address them. Toy- and equipment-lending libraries helped subsidize the cost of operation for providers, especially for those caring for infants who needed cribs and other more expensive equipment. One of the most critical and lasting effects of the Family-to-Family initiatives was to establish a structured provider training program at community colleges, resource and referral agencies, and other organizations throughout Oregon to make it accessible and transferable no matter where providers took courses. The courses were designed to satisfy requirements leading to the Child Development Associate credential.
Lynne Fender, Assistant Director, (202) 512-7229
Janet L. Mascia
Alexandra Martin-Arseneau
Diana Pietrowiak
Pursuant to a congressional request, GAO reviewed family child care, focusing on: (1) public and private initiatives to enhance the quality of family child care; (2) the financing of family child care initiatives; (3) the federal role in supporting quality initiatives; and (4) the implications of these initiatives for welfare reform. GAO found that: (1) many national initiatives seek to improve family child care quality and are financed both from public and private sources; (2) although most of the $8 billion in federal child care support in 1993 went to subsidies to help parents pay for child care, approximately $156 million was used to improve the quality of child care through 195 different initiatives; (3) the two child care quality initiatives that are used most often are the Department of Health and Human Services' Child Care and Development Block Grant and the Department of Agriculture's Child and Adult Care Food Program; (4) research shows that child care quality improvement activities are critical to enhancing the quality of care in all types of child care settings; (5) family child care is expected to grow and is particularly important to poor children; and (6) these initiatives can provide information on ways to improve quality in family child care settings.
DOD uses three interrelated processes to deliver capabilities to the U.S. military: the Joint Capabilities Integration and Development System (JCIDS), which validates gaps in joint warfighting capabilities and requirements that resolve those gaps; the Defense Acquisition System, which develops and fields weapon systems to meet these requirements; and the Planning, Programming, Budgeting and Execution process, which allocates the funding needed to develop, acquire, and field these weapon systems. The JCIDS process is overseen by the Joint Requirements Oversight Council (JROC), which supports the Chairman of the Joint Chiefs of Staff in advising the Secretary of Defense on joint military capability needs. The JROC is chaired by the Vice Chairman of the Joint Chiefs of Staff, and includes one senior leader from each of the military services, such as the Vice Chief of Staff of the Army or the Vice Chief of Naval Operations. The JROC has a number of statutory responsibilities related to the identification, validation, and prioritization of joint military requirements. The JROC assists the Chairman of the Joint Chiefs of Staff with a number of tasks, including (1) identifying, assessing, and approving joint military requirements; (2) establishing and assigning priority levels for joint military requirements; and (3) reviewing the estimated level of resources required to fulfill each requirement and ensuring that the resource level is consistent with the requirement’s priority. The JROC also assists acquisition officials in identifying alternatives to any acquisition programs that experience significant cost growth. Since 2008, Congress has added to the JROC’s statutory responsibilities and increased the number of JROC members and advisors who provide input to it. The National Defense Authorization Act for Fiscal Year 2008 amended the U.S. 
Code to require that the Under Secretary of Defense for Acquisition, Technology and Logistics (USD AT&L), the Under Secretary of Defense (Comptroller), and the Director of the Office of Program Analysis and Evaluation serve as advisors to the JROC on matters within their authority and expertise. In 2009, WSARA expanded the role of the JROC by directing it to assist the Chairman of the Joint Chiefs of Staff in (1) ensuring that trade-offs among cost, schedule, and performance objectives are considered for joint military requirements; and (2) establishing an objective period of time within which an initial operational capability should be delivered. WSARA also stated that the newly constituted Director of Cost Assessment and Program Evaluation (CAPE) would advise the JROC. The Ike Skelton National Defense Authorization Act for Fiscal Year 2011 allowed the Vice Chairman of the Joint Chiefs of Staff to direct senior leaders from combatant commands to serve as members of the JROC when matters related to the area of responsibility or functions of that command are under consideration. It also added the Under Secretary of Defense for Policy, the Director of Operational Test and Evaluation, and other civilian officials designated by the Secretary of Defense as advisors to the JROC on issues within their authority and expertise. The JROC is supported in the JCIDS process by two Joint Capabilities Boards (JCB) and seven Functional Capabilities Boards (FCB), each of which is chaired by a general/flag officer or civilian equivalent. JCBs and FCBs are responsible for specific Joint Capability Areas, such as Force Protection, Logistics, or Battlespace Awareness. The JCBs, FCBs, and associated FCB Working Groups review requirements documents prior to JROC reviews. The JCB also serves as the validation authority for requirements documents that are not associated with major defense acquisition programs (MDAP). 
In some instances, the JROC will not meet in person to approve requirements documents if there are no outstanding issues to discuss. The JROC and its supporting organizations review requirements documents related to capability gaps and the MDAPs intended to fill those gaps prior to key acquisition milestones. These requirements documents— the Initial Capabilities Documents (ICD), Capability Development Documents (CDD), and Capability Production Documents (CPD)—are submitted by capability sponsors, which are generally the military services, but can also be other DOD agencies or combatant commands. Figure 1 depicts how JCIDS reviews align with the acquisition process. The ICD is the first requirements document reviewed in JCIDS. It is intended to identify a specific capability gap, or set of gaps, in joint military capabilities that are determined to require a materiel solution as a result of a capabilities-based assessment. DOD policy requires that the JROC validate the ICD prior to a Materiel Development Decision, which is the formal entry point into the acquisition process. The ICD does not contain specific cost, schedule, or performance objectives. Once the JROC validates an ICD, the Milestone Decision Authority, working with appropriate stakeholders, determines whether to proceed to a Materiel Development Decision. After the Materiel Development Decision, the capability sponsor initiates an analysis of alternatives (AOA) to consider alternative solutions for fulfilling the capability need described in an ICD, along with possible trade-offs among cost, schedule, and performance for each alternative. The CDD is the second requirements document reviewed in JCIDS. It can address capability gaps presented in one or more ICDs. The CDD is intended to define a proposed program’s Key Performance Parameters (KPP), Key System Attributes (KSA), and other performance attributes. 
KPPs are the system characteristics that the CDD sponsor considers critical to delivering that military capability, while KSAs are system attributes the CDD sponsor considers essential for an effective military capability, but a lower priority than the KPPs. DOD policy calls for the JROC to validate the CDD to inform the Milestone B decision, which marks the official start of an acquisition program and entry into the engineering and manufacturing development phase. The CDD is the first requirements document that contains cost, schedule, and performance objectives. The CPD is the third and final requirements document reviewed in JCIDS. It is intended to refine the KPPs, KSAs, and performance attributes validated in the CDD. DOD policy calls for the JROC to validate the CPD to inform the Milestone C decision, which marks a program’s entry into production. Appendix II identifies the requirements documents validated by the JCB or JROC in fiscal year 2010. The JROC considered trade-offs made by the military services before validating requirements for four of the seven proposed programs it reviewed in fiscal year 2010, and provided input to the military services on the cost, schedule, and performance objectives for two of the seven programs. The JROC’s requirements review was the final step in a long requirements vetting process, with most trade-offs being made by the military services earlier in the process. Key stakeholders from the offices of the Under Secretary of Defense (Comptroller), USD AT&L, Director of CAPE, and the combatant commands were all satisfied with their opportunities to provide input to the JROC, but they provided limited input on trade-offs among cost, schedule, and performance objectives, and used other means to influence trade-offs. Perhaps most importantly, none of the JROC’s requirements reviews align with the AOA, which is where the military services reported making the most significant trade-offs. 
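The document-to-decision sequence described above (ICD, CDD, CPD) can be summarized in a small lookup table. This is only a sketch of the alignment stated in the text, not actual DOD tooling:

```python
# Which acquisition decision each JCIDS requirements document informs,
# in the order the documents are reviewed.
JCIDS_SEQUENCE = [
    ("ICD", "Materiel Development Decision"),  # identifies capability gaps
    ("CDD", "Milestone B"),   # defines KPPs/KSAs; official program start
    ("CPD", "Milestone C"),   # refines KPPs/KSAs; entry into production
]

def decision_informed_by(document):
    """Return the acquisition decision a given requirements document informs."""
    return dict(JCIDS_SEQUENCE)[document]
```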
As a result, a program can spend significant time in technology development before the JROC gets to formally weigh in on these trade-offs through the JCIDS process. The JROC also reviews MDAP requirements after a program enters development and experiences substantial cost growth. DOD and the JROC stated that requirements were not the primary causes of cost growth for the 15 programs reviewed for this purpose in fiscal year 2010, and the JROC did not change any KPPs to mitigate the reported cost growth. The JROC considered trade-offs made by the military services before validating requirements for four of the seven proposed programs it reviewed in fiscal year 2010. On three programs, the JROC did not receive information on the potential cost and schedule implications of each of the alternatives considered. Table 1 summarizes the JROC’s consideration of cost, schedule, and performance objectives for the seven proposed MDAPs it reviewed in fiscal year 2010. The JROC’s review of the CDD for a proposed program is the final step in a long requirements vetting process, and DOD officials reported that trade-offs typically occur earlier in the process. Each military service conducts its own internal requirements reviews for its proposed programs, which are used to refine requirements documents before they are submitted into JCIDS. Military service officials reported that they make significant trade-offs during these internal reviews, and that KPPs and technical requirements rarely change after requirements documents are submitted into JCIDS because extensive analysis has already been conducted. For the seven proposed MDAPs we reviewed, the military services generally submitted requirements to the JROC that would be fully funded, provide initial capability within 6 years, utilize critical technologies that were nearing maturity, and be acquired using an incremental approach. 
These characteristics are consistent with provisions in the Weapon Systems Acquisition Reform Act (WSARA) related to how the requirements process should be structured and aspects of GAO’s best practices for weapon system acquisitions. Two of the proposed program requirements presented to the JROC included major trade-offs among cost, schedule, and performance objectives and revisions to their acquisition approaches that had been made after predecessor programs were cancelled over affordability concerns. The Air Force initiated the HH-60 Recapitalization program after the Combat Search and Rescue Replacement Vehicle (CSAR-X) program was cancelled, and the HH-60 Recapitalization program is expected to decrease cost by changing cabin space, velocity, and range from the CSAR-X requirements. In 2007, the Army, with input from a Functional Capabilities Board, decided to use an incremental acquisition approach for the Ground Soldier System in order to reduce costs, meet schedule demands, and avoid some of the mistakes made during the Land Warrior program, which was cancelled because of funding and cost issues. The JROC received limited input on trade-offs among cost, schedule, and performance objectives from key stakeholders when validating requirements for the seven proposed MDAPs we reviewed from fiscal year 2010. Both WSARA and the National Defense Authorization Act for Fiscal Year 2008 directed the JROC to consult with the Under Secretary of Defense (Comptroller), the USD AT&L, and the Director of CAPE. Additionally, WSARA instructed the JROC to consult with the combatant commands. Officials from these organizations reported that they had ample opportunity to participate in JROC requirements reviews, and Joint Staff officials said efforts to involve these stakeholders preceded WSARA. 
However, officials from the offices of the Under Secretary of Defense (Comptroller), USD AT&L, and the Director of CAPE also reported that the acquisition and budgeting/funding processes are the primary mechanisms through which they influence programs, rather than JCIDS. For example, CAPE oversees AOAs for MDAPs and has an opportunity to provide input and guidance on AOA considerations. Further, the combatant commands reported that they most often submit prioritized lists of capability gaps directly to the Chairman of the Joint Chiefs of Staff as part of the resource allocation process, which is separate from JCIDS. Nonetheless, joint stakeholders did provide some significant input during the JROC’s reviews of the seven proposed programs in fiscal year 2010. For example, the Army more fully defined a Ground Soldier System, Increment 1 KPP in response to input from DOD’s Joint Interoperability Test Command, and in another instance, the Army added a KSA to the AIAMD SOS, Increment 2 CDD due to input from the office of the USD AT&L, the Defense Information Systems Agency, and the Joint Staff. Neither of these changes involved trade-offs among cost, schedule, and performance objectives. The JROC does not formally review the trade-off decisions made as a result of an AOA until a proposed program’s CDD enters the JCIDS process. According to DOD officials, the most significant trade-offs are made by the military services between ICD and CDD reviews during the AOA, which is intended to compare the operational effectiveness, cost, and risks of a number of alternative potential solutions. For example, during the CVLSP AOA, the Air Force decided to decrease troop transport capacity in order to reduce cost. Alternatively, during the AIAMD SOS AOA, the Army decided to pursue the most costly option reviewed because it provided greater capability. A significant amount of time and resources can be expended before the JROC gets to weigh in on these trade-offs during CDD reviews. 
For example, the JROC did not review the AOA summary for JPALS, Increment 2 until 4 years after the conclusion of the AOA. During the time between the AOA and the CDD review, the technology intended to enable the chosen alternative is developed. Figure 2 shows the AOA’s relationship to both the requirements and acquisition processes. Joint Staff officials have stated that establishing a JROC review of the AOA would allow it to provide military advice on trade-offs and the proposed materiel solution before Milestone A, and an ongoing Joint Staff review of JCIDS is considering an increased role for the JROC at this point. According to the Joint Staff, increased JROC engagement at these early stages of the acquisition process is warranted to align it with other elements of recent acquisition reforms. For example, WSARA emphasized that the AOA should fully consider possible trade-offs among cost, schedule, and performance objectives for each alternative considered, and in September 2010, USD AT&L issued a memorandum that emphasized the need for trade-offs from a program’s inception. The memorandum also directed that affordability targets be established at the conclusion of the AOA and that these targets be treated like KPPs, even though they will be set and managed by the acquisition, not requirements, community. The JROC did not change any KPPs during 15 reviews of programs that reported substantial cost growth in fiscal year 2010. According to the Joint Staff, by holding requirements firm and accepting increased cost and schedule delays, the JROC essentially traded cost and possibly schedule for performance. In fiscal year 2010, the JROC reviewed six programs after they experienced a critical Nunn-McCurdy breach and nine programs as part of the tripwire process. During all 15 reviews, DOD and the JROC stated that requirements were not the primary causes of cost growth. 
For all six programs that experienced a critical Nunn-McCurdy cost breach, the JROC validated the system’s capabilities as being essential to national security and did not make any changes to their KPPs. For all nine programs that were approaching Nunn-McCurdy thresholds, the JROC did not identify opportunities to mitigate cost growth by modifying requirements. Most of these programs were in production in fiscal year 2010, and changing requirements at this late stage might not have mitigated the reported cost growth. When the JROC reviewed the Family of Advanced Beyond Line-of-Sight Terminals program, which was still in development, it concluded that the program’s requirements could not be met in an affordable manner. The JROC did not immediately defer any of the program’s requirements, but instead requested that USD AT&L identify potential alternatives for the program, including reviewing whether adjustments to performance requirements would be appropriate. The military services did not consistently provide high-quality resource estimates to the JROC to support its review of requirements for 7 proposed programs in fiscal year 2010. We found the estimates presented to the JROC were often unreliable when assessed against best practices criteria. The type of resource estimates the military services presented to the JROC varied from ones that had been validated by the military services’ cost analysis agencies to less rigorous rough-order-of-magnitude estimates. In most cases, the military services had not effectively conducted uncertainty and sensitivity analyses, which establish confidence levels for resource estimates, based on the knowledge available, and examine the effects of changing assumptions and ground rules. Lacking risk and uncertainty analysis, the JROC cannot evaluate the range of resources that might be necessary to cover increased costs resulting from unexpected design complexity, technology uncertainty, and other issues.
The lack of this information affects the JROC’s efforts to ensure that programs are fully funded and its ability to consider the resource implications of cost, schedule, and performance trade-offs. The JROC first receives resource estimates for proposed programs when it reviews CDDs, and when we reviewed the CDD resource estimates presented to the JROC in fiscal year 2010, we found that they were generally unreliable when assessed against our best practices criteria. While most of the resource estimates substantially met our criteria for a comprehensive resource estimate, they generally were not very accurate, credible, or well-documented. Appendix IV includes a list of the best practices against which we assessed these resource estimates. The type of resource estimates the military services presented to the JROC varied from ones that had been validated by the military services’ cost analysis agencies to less rigorous rough-order-of-magnitude estimates. According to Joint Staff officials, military services can initiate CDD reviews at any point in the acquisition process prior to program start, even if good resource estimates are not available. For example, the JROC validated the P-8A, Increment 3 CDD more than 2 years before the program was expected to start, before an AOA had been completed, and with a rough-order-of-magnitude estimate. Joint Staff officials reported that they depend on CAPE to review the quality of resource estimates during the JCIDS process, but CAPE cost assessment officials told us that they rarely participate in JCIDS reviews. Regardless of the type of resource estimate, uncertainty and sensitivity analysis can establish confidence levels for resource estimates, based on the knowledge available at the time, and examine the effects of changing assumptions and ground rules, including those related to trade-offs among cost, schedule, and performance objectives.
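To illustrate how uncertainty analysis establishes a confidence level for a resource estimate, the following minimal Monte Carlo sketch sums triangular cost distributions over a set of work breakdown structure elements. The element names, distribution bounds, and dollar values are illustrative assumptions, not data from any program discussed in this report.

```python
# Hypothetical Monte Carlo uncertainty analysis for a cost estimate.
# Element names, distributions, and dollar values are illustrative only.
import random
import statistics

random.seed(1)  # reproducible draws for this sketch

# (low, most_likely, high) triangular bounds for each WBS element, in $M.
wbs_elements = {
    "air_vehicle":   (900.0, 1100.0, 1600.0),
    "avionics":      (300.0,  400.0,  700.0),
    "sys_eng_pm":    (150.0,  200.0,  300.0),
    "test_and_eval": (100.0,  150.0,  250.0),
}

trials = [
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in wbs_elements.values())
    for _ in range(20_000)
]
trials.sort()

point_estimate = sum(mode for _, mode, _ in wbs_elements.values())
p80 = trials[int(0.80 * len(trials))]  # 80th-percentile cost

print(f"point estimate:        {point_estimate:8.1f} $M")
print(f"mean of simulation:    {statistics.mean(trials):8.1f} $M")
print(f"80% confidence level:  {p80:8.1f} $M")
print(f"implied contingency:   {p80 - point_estimate:8.1f} $M")
```

Because the high-side tails in this sketch are wider than the low-side tails, both the simulated mean and the 80 percent confidence level exceed the sum of the most likely values; that gap is what a contingency reserve is meant to cover.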
The military services sponsoring the requirements generally did not effectively meet best practices for uncertainty and sensitivity analyses using the knowledge they had available to them for any of the seven resource estimates we reviewed. Figure 3 summarizes our assessment of the resource estimates presented to the JROC against our best practices criteria. Five of the seven CDD resource estimates substantially met our criteria for a comprehensive resource estimate. The resource estimates generally completely defined their respective programs, and included most, if not all, life-cycle costs. The Ship to Shore Connector, CVLSP, and JPALS, Increment 2 resource estimates also effectively documented all cost-influencing ground rules and assumptions, although the other resource estimates did not. Additionally, only the Ship to Shore Connector’s work breakdown structure effectively met our criteria, which require that work breakdown structures are product-oriented and at an appropriate level of detail. If a resource estimate does not specifically break out common costs, such as government-furnished equipment costs, or does not include an associated work breakdown structure dictionary, cost estimators cannot ensure that the estimate includes all relevant costs. The HH-60 Recapitalization and P-8A, Increment 3 resource estimates did not effectively meet any of our best practices for a comprehensive resource estimate. Unless resource estimates account for all costs, they cannot enhance decision making by allowing for design trade-off studies to be evaluated on a total cost, technical, and performance basis. Additionally, unless ground rules and assumptions are clearly documented, the resource estimate will not have a basis for resolving areas of potential risk. Only two of the seven CDD resource estimates substantially met our criteria for an accurate resource estimate, while three partially met the criteria, and two did not meet or minimally met the criteria.
We found that the Ship to Shore Connector, CVLSP, AIAMD SOS, Increment 2, and the Ground Soldier System, Increment 1 resource estimates contained few, if any, minor mistakes, and that the Ship to Shore Connector, CVLSP, and JPALS, Increment 2 resource estimates were appropriately adjusted for inflation. Additionally, we found that the Ship to Shore Connector and JPALS, Increment 2 resource estimates were based on historical records of actual experiences from other comparable programs. However, we generally found that the resource estimates were not consistent with our best practices. Accurate resource estimates are rooted in historical data, which provide cost estimators with insight into actual costs of similar programs, and can be used to challenge optimistic assumptions and bring more realism to a resource estimate. Unless an estimate is based on an assessment of the most likely costs, and reflects the degree of uncertainty given all of the risks considered, management will not be able to make well-informed decisions. Four of the seven CDD resource estimates did not meet or minimally met our criteria for a credible resource estimate, and only the Ship to Shore Connector resource estimate substantially met the criteria. The Ship to Shore Connector and AIAMD SOS, Increment 2 resource estimates included sensitivity analyses that identified a range of possible costs based on varying assumptions, parameters, and data inputs, but none of the other resource estimates included this analysis. As a best practice, sensitivity analysis should be included in all resource estimates because it examines the effects of changing assumptions and ground rules. Since uncertainty cannot be avoided, it is necessary to identify the cost elements that represent the most risk and, if possible, cost estimators should quantify that risk. 
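A one-at-a-time sensitivity analysis of the kind described above can be sketched as follows. The toy life-cycle cost model, its elements, and its values are illustrative assumptions rather than data from any estimate we reviewed.

```python
# Hypothetical one-at-a-time sensitivity analysis for a simple cost model.
# The cost elements and values are illustrative, not from any DOD program.

def total_cost(params):
    """Toy life-cycle cost model: development + production + sustainment."""
    return (params["development"]
            + params["unit_cost"] * params["quantity"]
            + params["annual_support"] * params["service_years"])

baseline = {
    "development": 800.0,      # $M
    "unit_cost": 25.0,         # $M per unit
    "quantity": 60,            # units procured
    "annual_support": 40.0,    # $M per year
    "service_years": 20,
}

base = total_cost(baseline)
swings = {}
for name in baseline:
    low = dict(baseline, **{name: baseline[name] * 0.8})
    high = dict(baseline, **{name: baseline[name] * 1.2})
    swings[name] = total_cost(high) - total_cost(low)

# Rank elements by how much a +/-20% change moves the total estimate.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} swing: {swing:8.1f} $M ({swing / base:5.1%} of baseline)")
```

Ranking elements by swing identifies where uncertainty matters most; in this sketch, unit cost and quantity dominate the total, so those assumptions would warrant the closest scrutiny.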
When an agency fails to conduct sensitivity analysis to identify the effects of uncertainties associated with different assumptions, the chance increases that decisions will be made without a clear understanding of their impact on cost. Additionally, only the Ship to Shore Connector resource estimate effectively met our best practices for risk and uncertainty analysis. For management to make good decisions, the program estimate must reflect the degree of uncertainty so that a level of confidence can be given about the estimate. An estimate without risk and uncertainty analysis is unrealistic because it does not assess the variability in the resource estimate from effects such as schedules slipping, missions changing, and proposed solutions not meeting users’ needs. Lacking risk and uncertainty analysis, management cannot determine a defensible level of contingency reserves that is necessary to cover increased costs resulting from unexpected design complexity, technology uncertainty, and other issues. Further, none of the planned programs effectively met our criteria for an independent cost estimate when they were reviewed by the JROC. An independent cost estimate is considered one of the best and most reliable resource estimate validation methods because it provides an independent view of expected program costs that tests the program office and service estimates for reasonableness. Without an independent cost estimate, decision makers lack insight into a program’s potential costs because these estimates frequently use different methods and are less burdened with organizational bias. Moreover, independent cost estimates tend to incorporate adequate risk, and therefore tend to be more conservative by forecasting higher costs than the program office.
A program estimate that has not been reconciled with an independent cost estimate has an increased risk of proceeding underfunded because an independent cost estimate provides an objective and unbiased assessment of whether the program estimate can be achieved. Alternatively, programs can reinforce the credibility of their resource estimates through cross-checking, which determines whether alternative cost estimating methods produce similar results. However, only the Ship to Shore Connector resource estimate effectively met our best practices for cross-checking. Only the JPALS, Increment 2 resource estimate substantially met our criteria for a well-documented resource estimate, while four of the seven CDD resource estimates partially met our criteria, and two of the resource estimates did not meet or minimally met the criteria. The JPALS, Increment 2 and CVLSP resource estimates sufficiently described the calculations performed and estimating methodologies used to derive each program element’s cost. Additionally, the JPALS, Increment 2, Ship to Shore Connector, and AIAMD SOS, Increment 2 documentation clearly discussed the technical baseline description, and the data in the technical baseline were consistent with the resource estimate. However, none of the documents effectively described how the resource estimates were developed in a manner that a cost analyst unfamiliar with the program could understand what was done and replicate it. We generally found that the resource estimates were not consistent with our best practices for a well-documented resource estimate. Documentation is essential for validating and defending a resource estimate. Without a well-documented resource estimate, a convincing argument of an estimate’s validity cannot be presented, and decision makers’ questions cannot be effectively answered. Poorly documented resource estimates cannot explain the rationale of the methodology or the calculations underlying the cost elements.
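Cross-checking, as described above, asks whether alternative estimating methods produce similar results for the same system. The sketch below compares a hypothetical parametric estimate against a hypothetical analogy estimate; the cost-estimating relationship, weights, and dollar values are assumptions for illustration only.

```python
# Hypothetical cross-check of two estimating methods for the same system.
# The cost-estimating relationship (CER) and analogy data are illustrative.

def parametric_estimate(weight_lbs, cost_per_lb):
    """Parametric method: a simple weight-based CER."""
    return weight_lbs * cost_per_lb

def analogy_estimate(analog_cost, complexity_factor):
    """Analogy method: scale a comparable program's actual cost."""
    return analog_cost * complexity_factor

parametric = parametric_estimate(weight_lbs=12_000, cost_per_lb=0.09)   # $M
analogy = analogy_estimate(analog_cost=950.0, complexity_factor=1.15)   # $M

divergence = abs(parametric - analogy) / min(parametric, analogy)
print(f"parametric: {parametric:7.1f} $M, analogy: {analogy:7.1f} $M")
if divergence > 0.10:
    print(f"methods diverge by {divergence:.0%}; investigate assumptions")
else:
    print(f"methods agree within {divergence:.0%}; estimate gains credibility")
```

When the two methods agree, the estimate gains credibility; when they diverge, the difference points to assumptions that need to be reconciled.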
Further, a well-documented resource estimate is essential for an effective independent review to ensure that the resource estimate is valid and credible. Unless the estimate is fully documented, it will not support reconciliation with an independent cost estimate, hindering understanding of cost elements and their differences. The JROC required the military services to show that the proposed programs were fully funded to the resource estimates presented by the military services before it validated requirements for five of the seven proposed MDAPs we reviewed from fiscal year 2010; the two other proposed MDAPs were funded at more than 97 and 99 percent, respectively. However, we found that these resource estimates were generally unreliable, which undermined the JROC’s efforts. In 2007, the JROC issued guidance instructing the military services to commit to funding the requirements that the JROC validates. The guidance emphasized the need for full funding in an effort to facilitate sound fiscal and risk decisions. However, the JROC does not explicitly consider a requirement’s affordability in a broader context during JCIDS reviews. DOD funding plans are captured in the future-years defense program, which presents resource information for the current year and the following 4 years. The future-years defense program is updated twice per year to reflect the military services’ input and the budget the President submits to Congress. Statute and DOD acquisition policy also require programs to be fully funded through the period covered by the future-years defense program. One of the seven proposed MDAPs we reviewed from fiscal year 2010 included a funding shortfall when its requirements were being reviewed through JCIDS, but its CDD was not approved until the shortfall had been addressed. Specifically, when the JCB reviewed the CVLSP CDD, the funding plan included a $1.3 billion shortfall through fiscal year 2015.
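Shortfall findings like the CVLSP example above rest on a year-by-year comparison of the funding plan against the resource estimate across the future-years defense program window. A minimal sketch, using hypothetical fiscal years and dollar amounts:

```python
# Hypothetical check of a program's funding plan against its resource
# estimate across a five-year future-years defense program (FYDP) window.
# Fiscal years and dollar values are illustrative only.

resource_estimate = {2011: 250.0, 2012: 400.0, 2013: 550.0, 2014: 600.0, 2015: 700.0}  # $M
funding_plan      = {2011: 250.0, 2012: 300.0, 2013: 450.0, 2014: 600.0, 2015: 550.0}  # $M

shortfalls = {
    fy: resource_estimate[fy] - funding_plan.get(fy, 0.0)
    for fy in resource_estimate
    if funding_plan.get(fy, 0.0) < resource_estimate[fy]
}

total_required = sum(resource_estimate.values())
total_funded = sum(funding_plan.values())
print(f"funded {total_funded / total_required:.0%} of the estimate")
for fy, gap in sorted(shortfalls.items()):
    print(f"FY{fy}: shortfall of {gap:.1f} $M")
if not shortfalls:
    print("fully funded through the FYDP window")
```

A check of this kind also yields an overall funded percentage, comparable to the 97 and 99 percent figures cited above.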
The JCB chairman directed the Air Force to modify the program’s funding plan before proceeding to the JROC review. When the Air Force briefed the JROC on the CVLSP CDD approximately 8 months later, it presented a funding plan that fully funded the program through the future-years defense program time frame. The revised funding plan also included more money for the program beyond the future-years defense program time frame, and the total program cost increased from $14.2 billion to $15.2 billion. Despite JROC efforts to ensure programs are fully funded, the military services retain primary control over their budgets, and ultimately, JROC decisions are influential but not binding. When the JCB reviewed the JPALS, Increment 2 CDD, it requested clarification on the Air Force’s funding plan, and emphasized the need for full funding prior to program start. The funding plan presented to the JCB included a $77.7 million shortfall through fiscal year 2015, and the Air Force had cut JPALS funding in the past. Following the JCB review, the JROC issued a decision memorandum that documented the Air Force’s commitment to fully funding JPALS, Increment 2. However, in fiscal years 2011 and 2012, the Air Force only funded approximately 30 percent of the resource estimate presented to the JCB. The JROC does not currently prioritize requirements, consider redundancies across proposed programs, or prioritize and analyze capability gaps in a consistent manner. As a result, the Joint Staff is missing an opportunity to improve military service and departmentwide portfolio management efforts. A portfolio management approach to weapon system investments would involve taking a disciplined, integrated approach to prioritizing needs and allocating resources in order to eliminate redundancies, gain efficiencies, and achieve a balanced mix of executable programs. 
According to Army, Air Force, and Navy officials, having a better understanding of warfighter priorities from the JROC would be useful to inform both portfolio management efforts and service budgets. A DOD review team examining the JCIDS process is considering changes that would address the prioritization of requirements. During its review of the capability gaps presented in 12 ICDs in fiscal year 2010, the JROC did receive some information on priorities and potential redundancies; however, the sponsors presented this information in an inconsistent manner, making it difficult for the JROC to assess the relative priority of capability gaps across different ICDs. Under the current JCIDS process, the JROC does not prioritize requirements or consider redundancies across proposed programs during CDD reviews. In the National Defense Authorization Act for Fiscal Year 2008, Congress amended the U.S. Code to direct the JROC to help assign priority levels for joint military requirements and ensure that resource levels associated with those requirements are consistent with the level of priority. The House Armed Services Committee report accompanying the authorization act stated that clear JROC priorities and budget guidance would allow for joint decision making, as opposed to service-centric budget considerations. In addition, we have previously recommended that DOD develop an analytic approach within JCIDS to better prioritize and balance the capability needs of the military services, combatant commands, and other defense components. According to the Joint Staff and military service officials, prioritization across programs still primarily occurs through the Planning, Programming, Budgeting and Execution process, which is the responsibility of the military services and the Office of the Under Secretary of Defense (Comptroller). The JCIDS manual does not currently require an analysis of potential redundancies during CDD reviews. 
In our recently issued report on government duplication, we noted that service-driven requirements and funding processes continue to hinder integration and efficiency and contribute to unnecessary duplication in addressing warfighter needs. We have also previously reported that ineffective collaboration precluded opportunities for commonality in unmanned aircraft systems. In fiscal year 2010, the JROC met to consider joint efficiencies between two such systems: the Navy’s Broad Area Maritime Surveillance system and the Air Force’s Global Hawk system. The JROC requested that the Navy and Air Force ensure that a common component was interoperable between the two systems, and that the Air Force consider an all-weather capability developed by the Navy. The JROC has also supported joint development efforts for these programs and requested annual status updates. According to Broad Area Maritime Surveillance program officials, the Air Force and Navy programs are investigating commonality opportunities, including sense-and-avoid capabilities, a consolidated maintenance hub, and basing options for both systems. The JROC did not meet to consider any other joint efficiencies across military services in fiscal year 2010. The Joint Staff has acknowledged that the JROC should play a larger role in prioritizing needs and addressing redundancies. In July 2010, the Vice Chairman of the Joint Chiefs of Staff initiated a review of the JCIDS process. One of the goals of the review team was to develop metrics and criteria to ensure the JCIDS process has the ability to rank or prioritize needs. The review team’s charter states that these metrics must enable more structured reviews of portfolio gaps and redundancies. According to the Joint Staff, the review team is considering a number of recommendations including asking the JROC to prioritize requirements based on the urgency and significance of the need. This list of priorities could be used to inform military service budgets. 
Joint Staff officials have also stated that redundancies may be addressed more directly in the future as part of an enhanced portfolio management effort. We have previously reported that DOD has not taken a portfolio management approach to weapon system investments, which would involve taking a disciplined, integrated approach to prioritizing needs and allocating resources in order to eliminate redundancies, gain efficiencies, and achieve a balanced mix of executable programs. In September 2010, USD AT&L issued guidance intended to increase efficiencies and eliminate redundancies, and it presented the Army’s portfolio management activities as an example to emulate. The Army uses capability portfolio reviews of capability gaps and proposed and existing programs to revalidate, modify, or terminate requirements and ensure the proper allocation of funds between them. The Army has established 17 portfolios, including aviation, air and missile defense, and combat vehicle modernization. An Army official involved in the portfolio reviews said that he has asked the Joint Staff on several occasions to prioritize warfighter needs; however, the JROC has not done so. Instead, the Army relies on its own prioritization information during the portfolio reviews to help determine the capability areas where it is willing to assume risk. Air Force and Navy officials have also stated that they could benefit from JROC prioritization of requirements, and that this information would be useful in order to better allocate resources during their budget formulation activities. The JROC has required that capability sponsors prioritize capability gaps and identify redundancies when developing ICDs, and capability sponsors generally complied with these requirements in the 12 validated ICDs we reviewed from fiscal year 2010.
However, the sponsors presented this information in an inconsistent manner, making it difficult for the JROC and the military services to assess priorities and redundancies across ICDs or use this information to inform resource allocation decisions. For example, the Electronic Health Record ICD prioritized its gaps in numerical order from 1 to 10, but the Command and Control On-The-Move ICD labeled half its gaps medium priority and the other half high priority. The JCIDS operations manual provides limited guidance on how capability sponsors should prioritize the gaps, stating only that the prioritization should be based on the potential for operational risk associated with the shortfalls. The JCIDS manual also directs capability sponsors to identify redundancies and assess whether the overlap is operationally acceptable or whether it should be evaluated as part of the trade-offs to satisfy capability gaps. Three of the 12 validated ICDs we reviewed from fiscal year 2010 did not address redundancies. Furthermore, only one of these ICDs presented to the JROC in fiscal year 2010 included an evaluation of the overlaps. The JROC did not address these omissions when it validated the documents. In the last several years, Congress has passed legislation to give the JROC a greater role in prioritizing military requirements and shaping sound acquisition programs by encouraging cost, schedule, and performance trade-offs. Taken together, these steps have the potential to improve the affordability and execution of DOD’s portfolio of major defense acquisition programs. However, the JROC has largely left prioritization and trade-off decisions to the military services, despite having a unique, joint perspective, which would allow it to look across the entire department to identify efficiencies and potential redundancies.
To more effectively leverage its unique perspective, the JROC would have to change the way it views its role, more regularly engage the acquisition community in trade-off discussions at early acquisition milestones, and more effectively scrutinize the quality of the resource estimates presented by the military services. Until it does so, the JROC will only be a marginal player in DOD’s efforts to align the department’s available resources with its warfighting requirements. To enhance the JROC’s role in DOD-wide efforts to deliver better value to the taxpayer and warfighter, we recommend that the Vice Chairman of the Joint Chiefs of Staff, as chairman of the JROC, take the following five actions:

- Establish a mechanism to review the final AOA report prior to Milestone A to ensure that trade-offs have been considered and to provide military advice on these trade-offs and the proposed materiel solution to the Milestone Decision Authority.
- Require that capability sponsors present resource estimates that have been reviewed by a military service’s cost analysis organization to ensure best practices are being followed.
- Require that capability sponsors present key results from sensitivity and uncertainty analyses, including the confidence levels associated with resource estimates, based on the program’s current level of knowledge.
- Assign priority levels to the CDDs based on joint force capability gaps and redundancies against current and anticipated threats, and provide these prioritization levels to the Under Secretary of Defense (Comptroller) and the military services to be used for resource allocation purposes.
- Modify the JCIDS operations manual to require that CDDs discuss potential redundancies across proposed and existing programs, and address these redundancies when validating requirements.

The Joint Staff provided us written comments on a draft of this report. The comments are reprinted in appendix V.
The Joint Staff also provided technical comments, which we addressed in the report, as appropriate. In its comments, the Joint Staff partially concurred with all five of our recommendations, generally agreeing that there is a need to take action to address the issues we raised, but differing in terms of the specific actions that should be taken. The Joint Staff partially concurred with our recommendation that the Vice Chairman of the Joint Chiefs of Staff, as chairman of the JROC, establish a mechanism to review the final AOA report prior to Milestone A to ensure that trade-offs have been considered and to provide military advice on these trade-offs and the proposed materiel solution to the Milestone Decision Authority. The Joint Staff noted that its ongoing review of JCIDS will include a recommendation that AOA results be briefed to FCBs. However, the FCB will only elevate these briefings to the JCB or JROC on an exception basis. The Joint Staff explained that this approach would allow the JROC to provide more informed advice to a Milestone Decision Authority without adding another round of staffing, an additional JCIDS document, or an official validation of AOA results. We agree that the Joint Staff should seek to implement this recommendation in the most efficient and effective way possible; however, given our finding that the most significant trade-off decisions are made as a result of an AOA, we continue to believe that the results should be reviewed by the JROC. The Joint Staff partially concurred with our recommendation that the Vice Chairman of the Joint Chiefs of Staff require that capability sponsors present resource estimates that have been reviewed by a military service’s cost analysis organization to ensure best practices are being followed. The Joint Staff stated that program office cost estimates are compared to independent cost estimates during CDD reviews. 
However, none of the seven CDD cost estimates we reviewed effectively met our criteria for an independent cost estimate. As a result, we believe that the Joint Staff needs to take additional action to ensure that resource estimates presented by capability sponsors have been reviewed by a military service’s cost analysis organization. The Joint Staff also stated that its ongoing review of JCIDS will examine how to highlight this area during CDD reviews. The Joint Staff partially concurred with our recommendation that the Vice Chairman of the Joint Chiefs of Staff require that capability sponsors present key results from sensitivity and uncertainty analyses, including the confidence levels associated with resource estimates, based on the program’s current level of knowledge. The Joint Staff stated that our recommendation needs further study to understand the expected outcomes and the required authorities for the JROC, and its ongoing review of JCIDS will examine how to highlight this area. We believe that the JROC cannot fully consider trade-offs or the affordability of a proposed program unless it receives information on the risk and uncertainty associated with resource estimates; it does not need additional authority to require capability sponsors to present the results of this type of analysis before it approves proposed requirements. The Joint Staff also noted that the Director, CAPE, has cost analysis responsibilities for resource estimates. CAPE cost assessment officials reported that they rarely participated in JCIDS reviews. As a result, the JROC may have to be more proactive in reaching out to CAPE to help it understand the risk and uncertainty associated with the resource estimates it receives. 
The Joint Staff partially concurred with our recommendation that the Vice Chairman of the Joint Chiefs of Staff assign priority levels to CDDs based on joint force capability gaps and redundancies against current and anticipated threats, and provide these prioritization levels to the Under Secretary of Defense (Comptroller) and the military services to be used for resource allocation purposes. The Joint Staff agreed that the identification of joint priorities could enhance a number of processes, including program and budget reviews. It noted that its ongoing review of JCIDS will recommend a prioritization framework through which CDDs will inherit priority levels based on the requirements and capability gaps identified in ICDs or Joint Urgent Operational Needs. However, the Joint Staff argued against prioritizing based on CDDs directly because it would provide less flexibility. We believe that the proposed approach could be effective if the Joint Staff addresses the inconsistencies we found in the way ICDs prioritize gaps. In addition, we continue to believe that the prioritization framework should facilitate an examination of priorities across CDDs. The Joint Staff partially concurred with our recommendation that the Vice Chairman of the Joint Chiefs of Staff modify the JCIDS operations manual to require that CDDs discuss potential redundancies across proposed and existing programs, and address these redundancies when validating requirements. The Joint Staff stated that its ongoing review of JCIDS will address this issue by establishing unique requirements as a higher priority than unnecessarily redundant requirements, and by establishing a post-AOA review, which could also be used to identify unnecessary redundancies. The Joint Staff did not address whether it would update the JCIDS operations manual as recommended and stated that reviewing assessments of redundancies in CDDs would be late in the JCIDS process.
We believe that potential redundancies should be discussed at multiple points, including during CDD reviews, because we found that several years can pass between the conclusion of an AOA and this review. During that time, new redundancy issues could emerge. We are sending copies of this report to the Secretary of Defense; the Chairman and Vice Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, Navy, and Air Force; and the Director of the Office of Management and Budget. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix VI. To conduct our work, we reviewed relevant sections of Title 10 of the U.S. Code, the Weapon Systems Acquisition Reform Act of 2009 (WSARA), and the National Defense Authorization Act for Fiscal Year 2008 to establish the role of the Joint Requirements Oversight Council (JROC) in considering trade-offs among cost, schedule, and performance objectives; reviewing the estimated level of resources needed to fulfill these requirements; and prioritizing requirements. We also reviewed Department of Defense (DOD), Joint Staff, and military service guidance documents, as well as those for the Joint Capabilities Integration and Development System (JCIDS) for developing and validating military requirements, to determine how these roles have been implemented in policy. To determine how these policies have been implemented in practice, we analyzed information and capability documents contained in the Joint Staff’s Knowledge Management/Decision Support tool.
To do so, we first established how many requirements documents—Initial Capabilities Documents (ICD), Capability Development Documents (CDD), and Capability Production Documents (CPD)—were reviewed by the JROC and Joint Capabilities Board (JCB) during fiscal year 2010. We selected fiscal year 2010 as our time frame because WSARA was enacted in May 2009, and this would allow for any changes the JROC would implement as a result of this legislation. We then focused our analysis on the unclassified requirements documents reviewed by the JROC and JCB in fiscal year 2010 that identified capability gaps or defined performance requirements for new major defense acquisition programs: 13 ICDs and 7 CDDs. We assessed these documents, as well as briefings presented to the JROC or the JCB, associated meeting minutes, and JROC decision memos. We also examined 15 JROC reviews of programs that incurred substantial cost growth after program start in fiscal year 2010 to determine if cost, schedule, and performance trade-offs were made. We chose this time period to allow for any changes the JROC would implement as a result of the enactment of WSARA in May 2009. To determine the extent to which the JROC has considered trade-offs among cost, schedule, and performance objectives within programs, we reviewed the seven CDDs submitted to the JROC and analyzed the information presented on trade-offs. We focused on CDDs because they are the first requirements documents that contain cost, schedule, and performance objectives. We also examined JROC decision memos to identify whether the JROC provided input on cost, schedule, and performance objectives for the seven proposed programs and analyses of alternatives (AOA) conducted by the military services prior to JROC reviews.
We also met with officials from the Joint Staff; Department of the Air Force; Department of the Army; Department of the Navy; Office of the Director of Cost Assessment and Program Evaluation (CAPE); Office of the Under Secretary of Defense (Comptroller); Office of the Assistant Secretary of Defense for Research and Engineering; and respective program offices about these issues. To obtain combatant command views on their participation in the joint requirements process since the implementation of WSARA, we developed a survey administered to DOD’s 10 combatant commands. The survey addressed a range of topics related to the joint requirements process, including the means for combatant commands to provide information on their capability needs. To understand the Joint Staff’s ongoing internal JCIDS review, we assessed the review charter and met with the Joint Staff officials managing the review to discuss the recommendations from the review and how they might affect the JROC’s consideration of trade-offs. We also observed joint requirements meetings and reviewed prior GAO reports. To determine the quality and effectiveness of efforts to estimate the level of resources needed to fulfill joint military requirements, we assessed the resource estimates used to support the seven unclassified proposed major defense acquisition programs reviewed by the JROC in fiscal year 2010 against the best practices in our cost estimating guide. We used these criteria to determine the extent to which these resource estimates were credible, well documented, accurate, and comprehensive. 
We scored each best practice at one of five levels: Not Met—DOD provided no evidence that satisfies any of the criterion; Minimally Met—DOD provided evidence that satisfies a small portion of the criterion; Partially Met—DOD provided evidence that satisfies about half of the criterion; Substantially Met—DOD provided evidence that satisfies a large portion of the criterion; and Met—DOD provided complete evidence that satisfies the entire criterion. We determined the overall assessment rating by assigning each individual rating a number: Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Met = 5. Then, we took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics. To perform this analysis, we obtained and analyzed program resource estimate supporting documentation, including service cost positions, technical descriptions, work breakdown structures, technology readiness assessments, program schedules, and AOA reports. We also interviewed program and cost estimating officials, when necessary, to gather additional information on these resource estimates and the cost models used to produce them. Each program office was also provided with a copy of our assessment of its resource estimates for review and comment. To determine the extent to which the JROC prioritized requirements and capability gaps, we reviewed the 13 ICDs and 7 CDDs submitted to the JROC and any discussions of priorities and redundancies contained in each document. We also met with officials from the Joint Staff; Department of the Air Force; Department of the Army; Department of the Navy; and Office of the Under Secretary of Defense (Comptroller) to discuss the extent to which the JROC and its supporting bodies have addressed prioritization issues.
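As a rough illustration, the rating-and-averaging scheme described above can be sketched in a few lines of code. The function name, score table, and example ratings below are hypothetical illustrations of the published methodology, not GAO's actual scoring tool:

```python
# Hypothetical sketch of the scoring scheme described above; not GAO's actual tool.
# Each best practice receives one of five ratings, mapped to the numbers 1-5.
SCORES = {
    "Not Met": 1,
    "Minimally Met": 2,
    "Partially Met": 3,
    "Substantially Met": 4,
    "Met": 5,
}

def overall_rating(practice_ratings):
    """Average the individual best-practice ratings to produce the overall
    rating for one of the four characteristics (credible, well documented,
    accurate, comprehensive)."""
    return sum(SCORES[r] for r in practice_ratings) / len(practice_ratings)

# Example: three best practices assessed under a single characteristic.
print(overall_rating(["Partially Met", "Met", "Substantially Met"]))  # 4.0
```

Because each individual rating maps to a number from 1 to 5, the average for each characteristic also falls on that 1-to-5 scale, making the four overall ratings directly comparable across programs.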
To understand the Joint Staff’s ongoing internal JCIDS review, we assessed the review charter and met with the Joint Staff officials managing the review to discuss the recommendations from the review and how they might affect the JROC’s prioritization of requirements. We also observed joint requirements meetings and reviewed prior GAO reports. We conducted this performance audit from June 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In fiscal year 2010, the Joint Capabilities Board (JCB) and Joint Requirements Oversight Council (JROC) combined to review a total of 45 new requirements documents, including 11 that were classified, 2 that were information technology programs, and 8 documents that were not associated with major defense acquisition programs. The remaining 24 requirements documents are identified in figure 4. The Joint Requirements Oversight Council (JROC) conducted 15 reviews following cost breaches in fiscal year 2010—6 Nunn-McCurdy reviews and 9 tripwire reviews. Table 2 identifies these reviews. Table 3 presents the best practice criteria against which we assessed the resource estimates presented to the Joint Requirements Oversight Council during fiscal year 2010 Capability Development Document (CDD) reviews. In addition to the contact named above, Ronald E. Schwenn, Assistant Director; Noah B. Bleicher; Stephen V. Marchesani; Kenneth E. Patton; Karen A. Richey; Anna K. Russell; and Nathan A. Tranquilli made key contributions to this report.
The Weapon Systems Acquisition Reform Act of 2009 (WSARA) directed the Joint Requirements Oversight Council (JROC) to ensure trade-offs among cost, schedule, and performance objectives are considered as part of its requirements review process. WSARA also directed GAO to assess the implementation of these requirements. This report addresses (1) the extent to which the JROC has considered trade-offs within programs, (2) the quality of resource estimates presented to the JROC, and (3) the extent to which the JROC is prioritizing requirements and capability gaps. To do so, GAO analyzed requirement documents reviewed by the JROC in fiscal year 2010, which identified capability gaps or performance requirements for new major defense acquisition programs. GAO also assessed resource estimates presented to the JROC against best practices criteria in the GAO Cost Estimating and Assessment Guide. The JROC considered trade-offs made by the military services before validating requirements for four of the seven proposed programs it reviewed in fiscal year 2010. According to DOD officials, the most significant trade-offs are made by the military services during the analysis of alternatives (AOA), which occurs between the JROC's review of an Initial Capabilities Document (ICD) and its review of a Capability Development Document (CDD). The AOA is intended to compare the operational effectiveness, cost, and risks of a number of alternative potential solutions. The JROC does not formally review the trade-off decisions made as a result of an AOA until it reviews a proposed program's CDD. As a result, the JROC does not have an opportunity to provide military advice on trade-offs and the proposed solution before it is selected, and a significant amount of time and resources can be expended in technology development before the JROC gets to formally weigh in. 
The military services did not consistently provide high-quality resource estimates to the JROC for proposed programs in fiscal year 2010. GAO found the estimates presented to the JROC were often unreliable when assessed against best practices criteria. In most cases, the military services had not effectively conducted uncertainty and sensitivity analyses or examined the effects of changing assumptions and ground rules, all of which could further the JROC's efforts to ensure that programs are fully funded and provide a sound basis for making cost, schedule, and performance trade-offs. The JROC does not currently prioritize requirements, consider redundancies across proposed programs, or prioritize and analyze capability gaps in a consistent manner. As a result, the Joint Staff is missing an opportunity to improve the management of DOD's joint portfolio of weapon programs. According to Army, Air Force, and Navy officials, having a better understanding of warfighter priorities from the JROC would be useful to inform both portfolio management efforts and service budgets. A DOD review team examining the JROC's requirements review process is considering changes that would address the prioritization of requirements on a departmentwide basis. GAO recommends that the JROC establish a mechanism to review AOA results earlier in the acquisition process, require higher quality resource estimates from requirements sponsors, prioritize requirements across proposed programs, and address potential redundancies during requirements reviews. The Joint Staff partially concurred with GAO's recommendations and generally agreed with their intent, but differed with GAO on how to implement them.
The Results Act is the centerpiece of a statutory framework to improve federal agencies’ management activities. The Results Act was designed to shift federal agencies’ attention from the amounts of money they spend or the size of their workloads to the results of their programs. Agencies are expected to base goals on their results-oriented missions, develop strategies for achieving their goals, and measure actual performance against the goals. The Results Act requires agencies to consult with the Congress in developing their strategic plans. This gives the Congress the opportunity to help ensure that agencies’ missions and goals are focused on results, are consistent with programs’ authorizing laws, and are reasonable in light of fiscal constraints. The products of this consultation should be clearer guidance to agencies on their missions and goals and better information to help the Congress choose among programs, consider alternative ways to achieve results, and assess how well agencies are achieving them. The Results Act also requires agencies to prepare annual performance plans, beginning with those supporting their fiscal year 1999 budget submissions, which were due to OMB by September 8, 1997. OMB, in turn, is required to include a governmentwide performance plan in the President’s fiscal year 1999 budget submission to the Congress. As required by the Results Act, GAO reviewed agencies’ progress in implementing the act, including the prospects for agency compliance. VA’s August 15, 1997, draft strategic plan represents a significant improvement over the June 1997 draft. The latest version is clearer and easier to follow, more complete, and better organized to focus more on results and less on process.
At the same time, VA has still not fully addressed some of the key elements required by the Results Act. The draft plan lacks goals focused on the results of VA programs for veterans and their families, such as assisting veterans in readjusting to civilian life; contains only limited discussions of external factors beyond VA’s control that could affect its achievement of goals; lacks program evaluations to support the development of results-oriented goals; and includes insufficient plans to identify and meet needs to coordinate VA programs with those of other federal agencies. The draft strategic plan acknowledges that three of these four elements (results-oriented goals, program evaluations, and agency coordination) have not been fully addressed and includes plans to address them. VA has indicated that it views strategic planning as a long-term process and intends to continue refining its strategic plan in consultation with the Congress, veterans service organizations, and other stakeholders. Another challenge for VA is to improve its financial and information technology management, so that the agency’s ongoing planning efforts under the Results Act will be based on the best possible information. VA’s draft strategic plan addresses several financial and information technology issues, such as the need for cost accounting systems for VA programs and the need to improve VA’s capital asset planning. VA officials indicated that, based on consultations with staff from the House and Senate Veterans’ Affairs committees, which included input from GAO, the draft strategic plan would be revised to make it clearer, more complete, and more results-oriented. The August 15, 1997, version reflects significant progress in these areas. Instead of presenting four overall goals, three of which were process-oriented, VA has reorganized its draft strategic plan into two sections.
The first section, entitled “Honor, Care, and Compensate Veterans in Recognition of Their Sacrifices for America,” is intended to incorporate VA’s results-oriented strategic goals. The second section, entitled “Management Strategies,” incorporates the three other general goals, related to customer service, workforce development, and taxpayer return on investment. In addition, VA has filled significant gaps in the discussions of program goals. The largest gap in the June 1997 draft was the lack of goals for four of the five major veterans benefit programs. The current plan includes goals for each of these programs, stating them in terms of ensuring that VA benefit programs meet veterans’ needs. Finally, the reorganized draft plan increases the emphasis on results. The June 1997 draft appeared to make such process-oriented goals as improving customer service and speeding claims processing equivalent to more results-oriented goals such as improving veterans’ health care. In the August 1997 version, the process-oriented goals remain but have been placed in their own process-oriented section supplementing the plan’s results orientation. At the same time, VA believes that the process-oriented portions of the plan are important as a guide to VA’s management. It considers customer service very important because VA’s focus is on providing services to veterans and their families. The Assistant Secretary for Policy and Planning, in written comments on a draft of our July 1997 letter, stated that VA continues to believe “that processes and operations are important to serving veterans and [VA] will continue to place appropriate emphasis on the areas of customer service, workforce development, and management issues.” VA also contends that the Results Act does not preclude process-oriented goals from its strategic plan. 
We agree that many of the process issues VA raises are important to its efficient and effective operation and can be included in VA’s strategic plan as long as they are integrated with the plan’s primary focus on results. Perhaps the most significant deficiency in VA’s draft strategic plan, in both the June 1997 and current versions, is the lack of results-oriented goals for major VA programs, particularly for benefit programs. While discussions of goals for benefit programs have been added to the current version, they are placeholders for results-oriented goals that have not yet been developed. The general goals for 4 of the 5 major benefit program areas—compensation and pensions, education, vocational rehabilitation, and housing credit assistance—are stated in terms of ensuring that VA is meeting the needs of veterans and their families. The objectives supporting VA’s general goal for its compensation and pension area are to (1) evaluate compensation and pension programs to determine their effectiveness in meeting the needs of veterans and their beneficiaries; and (2) modify these programs, as appropriate. For the three other major benefit program areas, the objectives suggest possible results-oriented goals and are supported by strategies aimed at evaluating and improving programs. For example, the objectives under vocational rehabilitation include increasing the number of disabled veterans who acquire and maintain suitable employment and are considered to be rehabilitated. The strategies under this objective include evaluating the vocational rehabilitation needs of eligible veterans and evaluating the effect of VA’s vocational rehabilitation program on the quality of participants’ lives. VA has noted that developing results-oriented goals will be difficult until program evaluations have been completed.
Given the program evaluation time periods stated in the draft strategic plan, which calls for evaluations to continue through fiscal year 2002, results-oriented goals may not be developed for some programs for several years. Another difficulty VA has cited is that, for many VA programs, congressional statements of the program purposes and expected results are vague or nonexistent. VA officials cited VA’s medical research and insurance programs as examples of programs with unclear purposes. This is an area where VA and the Congress can make progress in further consultations. We also noted that the June 1997 draft’s discussions of external factors related to individual goals generally did not link demographic changes in the veteran population to VA’s goals. VA’s current draft has added discussions of the implications of demographic changes on VA programs. For example, VA notes that the death rate for veterans is increasing, which will lead VA to explore various options for meeting increased demands for burials in VA and state veterans’ cemeteries. Meanwhile, the goal to ensure that VA’s burial programs meet the needs of veterans and their families is accompanied by a detailed list of specific cemetery construction and land acquisition projects and by a specific target for expanding burials in state veterans’ cemeteries. The discussion of external factors related to this goal focuses on the Congress’ willingness to fund VA’s proposed projects and the cooperation of the states in participating in the State Cemetery Grants Program. What is missing in the draft is a link between the projected increase in veteran deaths and the proposed schedule of specific cemetery projects. Similarly, we recently reported that National Cemetery System strategic planning does not tie goals for expanding cemetery capacity to veterans’ mortality rates and their preferences for specific burial options. We noted that the goals in VA’s June 1997 draft strategic plan were not supported by formal program evaluations.
Evaluations can be an important source of information for helping the Congress and others ensure that agency goals are valid and reasonable, providing baselines for agencies to use in developing performance measures and performance goals, and identifying factors likely to affect agency performance. As noted above, VA cites the lack of completed evaluations as a reason for not providing results-oriented goals for many of its programs. The first general goal of VA’s plan is to conduct program evaluations over a period of several years. VA plans to identify distinct programs in each of its 10 major program areas and then prioritize evaluations of these programs in consultation with the Congress, veterans’ service organizations, and other stakeholders. VA expects to complete this prioritization sometime in fiscal year 1998, complete the highest-priority evaluations by the end of fiscal year 2000, and complete at least one evaluation in each of the 10 major program areas by fiscal year 2003. In our comments on the June 1997 draft strategic plan, we noted that VA has not clearly identified the areas where its programs overlap with those of other federal agencies, nor has it coordinated its strategic planning efforts with those of other agencies. Three areas where such coordination is needed (and the relevant key federal agencies) are employment training (Department of Labor), substance abuse (departments of Education, Health and Human Services, and Housing and Urban Development), and telemedicine (Department of Defense). In addition, we noted that VA relies on other federal agencies for information; for example, VA needs service records from the Department of Defense to help determine whether veterans have service-connected disabilities and to help establish their eligibility for Montgomery G.I. Bill benefits. VA’s current draft strategic plan addresses the need to improve coordination with other federal agencies and state governments. 
This will involve (1) identifying overlaps and links with other federal agencies, (2) enhancing and improving communications links with other agencies, and (3) keeping state directors of veterans’ affairs and other state officials apprised of VA benefits and programs and of opportunities for collaboration and coordination. As we noted in our comments on VA’s June 1997 draft strategic plan, VA has made progress in financial management and information technology. Like other federal agencies, VA needs accurate and reliable information to support executive branch and congressional decision-making. The “Management Strategies” section of VA’s current draft strategic plan addresses some financial management and information technology issues. Since VA has identified the need to devote a portion of its strategic plan to process-oriented goals, it is appropriate that some of these goals should focus on improving its management in these areas. One such goal is to develop a departmentwide cost accounting system that would enable VA to determine how much of its costs were attributable to each of the benefit programs it administers. According to the plan, this system would include two cost accounting systems already in development: VHA’s Decision Support System (DSS) and VBA’s Activity Based Costing (ABC) system. Another goal in the current draft plan is to establish a VA capital policy that ensures that capital investments, including capital information technology investments, reflect the most efficient and effective use of VA’s resources. Achieving this goal involves developing a VA-wide Agency Capital Plan and establishing a VA Capital Investment Board to generate policies for capital investments and to review proposed capital investments based on VA’s mission and priorities. Still another goal is designed to address the need for VA-wide information technology management to facilitate VA’s ability to function as a unified department.
Achieving this goal involves developing a VA-wide information technology strategic plan and a portfolio of prioritized information technology capital investments. In addition, the plan calls for the promotion of crosscutting VA information technology initiatives in order to improve services to veterans. The draft plan’s discussion of information technology addresses one of the information technology issues we have identified as high-risk throughout the federal government—the year-2000 computer problem. Unless corrections are made by January 1, 2000, VA’s computers may be unable to cope with dates in 2000, which could prevent VA from making accurate and timely benefit payments to veterans. VA’s draft plan includes as a performance goal that full implementation and testing of compliant software (that is, software capable of processing dates beyond 1999) will be completed by October 1999. Mr. Chairman, this completes my testimony this morning. I would be pleased to respond to any questions you or Members of the Subcommittee may have.
GAO discussed the draft strategic plan developed by the Department of Veterans Affairs (VA), pursuant to the Government Performance and Results Act of 1993. GAO noted that: (1) VA has made substantial progress in its strategic planning, based in part on consultations with the Congress; (2) however, as with many other agencies, VA's process of developing a plan that meets the requirements of the Results Act is an evolving one that will continue well after the September 30, 1997, deadline for submitting its first strategic plan to the Congress and the Office of Management and Budget (OMB); (3) the August 15, 1997, draft that VA submitted to OMB for review is an improvement over the June 1997 version, because it is easier to follow, places more emphasis on results and less on process, and fills in some major gaps in the June 1997 draft; (4) however, the latest draft strategic plan continues to lack some of the key elements expected under the Results Act; and (5) as with the June 1997 draft, the August 15, 1997, draft lacks results-oriented goals for several major VA programs, lacks a program evaluation schedule, and contains inadequately developed discussions of external factors and the need to coordinate with other federal agencies.
For over 50 years, the savings and loan industry promoted home ownership through home mortgage lending and was the nation’s primary lender in the housing finance market. During the 1980s, the industry ran into financial difficulties, and the number of insolvent savings and loan institutions, also known as thrifts, rose dramatically. Between 1980 and 1988, over 500 thrifts failed—more than three and a half times as many as in the previous 45 years combined. Furthermore, hundreds more thrifts remained insolvent or appeared likely to become insolvent. Faced with a crisis of national dimensions, the Congress enacted legislation in 1989 that, among other things, created RTC as a temporary mixed-ownership government corporation to resolve thrifts that were insolvent or in imminent danger of becoming insolvent. Initially, RTC was given 7 1/2 years to resolve the failed thrifts and dispose of their assets, but subsequent legislation reduced the time RTC will be in existence. The Federal Deposit Insurance Corporation (FDIC) is to inherit from RTC resolution responsibility for any thrifts that fail after July 1, 1995. RTC is scheduled to cease all of its operations on December 31, 1995, when any remaining RTC asset disposition workload and supporting operations are to be transferred to FDIC. As of February 1995, RTC estimated that assets with a book value of approximately $8 billion will be transferred to FDIC for disposition. GAO identified RTC as 1 of 18 high-risk areas that were particularly vulnerable to fraud, waste, and mismanagement. This identification was made mainly because of the large dollar value of the assets under RTC’s control, the heavy reliance to be placed on private sector contractors, and the need for strong management information systems and oversight capabilities. Because RTC has taken actions that improved its operations, the level of risk is not as great as it once was. Thus, as discussed in our February 1995 High-Risk Series report, we removed RTC’s high-risk designation.
Also, in our High-Risk report, we stated that the transition of RTC operations and workload to FDIC by January 1996 is a continuing risk. The task of winding down a large and complex organization with thousands of personnel and billions of dollars in assets, while minimizing the adverse consequences, is a very difficult one. For a successful transition, RTC and FDIC will need to ensure that sufficient controls are in place over the assets that will be sold during the final year of RTC’s existence, as well as over the assets that will be transferred to FDIC. It is also important that the transition planners give early attention to the quality of data that FDIC will receive from RTC so that RTC will have sufficient time to prepare for and respond to FDIC’s information needs. Throughout RTC’s existence, its management and support systems have evolved in response to changing conditions and legislative mandates, as well as internal and external criticism of its operations. However, certain problems have continually hampered RTC’s ability to effectively accomplish its mission. These problems included weaknesses in its contracting system that contributed to excessive contract costs and in its automated systems that could not adequately support RTC’s asset management and disposition activities. In December 1993, due to concerns about RTC’s performance, Congress included in the RTC Completion Act a number of reforms to improve the management of RTC. Despite such problems and the difficult economic environment in which RTC had to operate, it has accomplished a great deal in resolving a large number of failed thrifts and selling assets during its relatively short existence. From its inception in August 1989 through December 1994, RTC accepted responsibility for 745 failed thrifts. Figure 1 shows the locations of the thrifts that were placed under RTC’s control. By the end of December 1994, RTC had resolved 744 of these 745 thrifts. 
It is currently working to resolve one New Jersey thrift; it expects to accomplish this resolution by March 31, 1995. In all, RTC had under its control assets with a total book value of about $463 billion. As of November 30, 1994, RTC had disposed of about 93 percent of these assets ($432 billion) and had about $31 billion in assets remaining in its inventory. As shown in figure 2, RTC has classified most of these remaining assets as hard-to-sell. (Figure 2 legend: performing 1-4 family mortgages, $5 billion; other performing loans, $5 billion; delinquent loans, $7 billion; a note identifies the categories RTC considered hard to sell.) In his March 1993 testimony before the Senate Committee on Banking, Housing, and Urban Affairs, the former Secretary of the Treasury Lloyd Bentsen, speaking in his capacity as Chairman of the Thrift Depositor Protection Oversight Board, outlined a 9-point plan to help RTC improve its management practices. Later, a tenth item—the establishment of an interagency transition task force made up of RTC and FDIC personnel—was added to the plan to address the transfer of RTC’s personnel and systems to FDIC when RTC ceases operations on December 31, 1995. Secretary Bentsen said that such a task force was needed to help ensure an orderly transition to FDIC without impairing RTC’s operations. The RTC Completion Act, which became law in December 1993, included 21 management reforms—those in Secretary Bentsen’s 9-point plan along with 12 others. The establishment of the RTC/FDIC transition task force was not included among the 21 reforms but was required by a separate section in the act. For reporting purposes, we organized the 21 reforms into 4 categories that reflected the organizational components that would be responsible for taking the implementation actions. These categories are (1) RTC general management functions; (2) RTC resolution and disposition activities; (3) RTC contracting, including related MWOB activities; and (4) the Oversight Board reform.
Appendix I includes more detailed information on the reforms in these categories and shows the progress RTC and the Oversight Board have made in implementing the 21 management reforms since we issued our interim report in June 1994. Our objectives for this report, as set forth in the RTC Completion Act, were to determine (1) the manner in which the 21 management reforms were being implemented and (2) the progress being made toward achieving full compliance. We accomplished these objectives through (1) interviews with responsible RTC headquarters and field officials and Oversight Board staff and (2) reviews of applicable statutes and RTC and Oversight Board documents, including status reports identifying actions taken to implement the reforms’ requirements, specific policies and procedures designed to implement the reforms, and recent Office of Inspector General (IG) reports that addressed areas related to the management reforms. Also, we obtained supporting documentation to determine the extent to which actions were taken to correct internal control weaknesses and implement audit recommendations and other management reforms. In addition, we used our other ongoing work at RTC to verify that planned actions to implement the reforms had been completed or were in process. For reporting purposes, we classified each of the 21 reforms into one of three status categories: (1) work in progress, (2) action taken/monitoring required, or (3) action completed. From January 18 through January 31, 1995, we discussed a draft of this report with RTC and the Oversight Board. Specifically, we discussed the detailed information on each of the 20 RTC reforms with the RTC senior officials responsible for implementing these reforms or their designated representatives. For the Oversight Board reform, we discussed detailed information with the individual on the Oversight Board staff who is responsible for monitoring the implementation of the reform. 
In addition, on February 7, 1995, we discussed the contents of the draft report with representatives from RTC’s Office of the CFO and Office of Planning, Research and Statistics, who are responsible for tracking RTC’s progress in implementing the reforms. These individuals agreed that the information in the report provided a fair and accurate summary of the manner in which RTC and the Oversight Board implemented the reforms and the progress they made to achieve full compliance. Also, these individuals agreed with our determinations of the implementation status for each of the 21 reforms. We included their comments where appropriate throughout the report. We did our work from June 1994 through January 1995 in accordance with generally accepted government auditing standards. Appendix II provides more detailed information on our objectives, scope, and methodology. Table 1 shows the implementation status we determined for each of the 10 reforms in this category. For two of the three completed reforms shown in table 1, RTC (1) in April 1993, created the Division of Minority and Women’s Programs and appointed a Vice President to head this division who also serves on RTC’s Executive Committee (reform 4); and (2) in June 1993, appointed a CFO who reports directly to RTC’s Chief Executive Officer (CEO) (reform 5). These actions were completed before the act became law in December 1993. For reform 21, which is the third completed reform, by the time the act became law, RTC had already initiated a program that included establishing client responsiveness units in its field offices. In August 1994, RTC completed updating its client responsiveness policy to emphasize the importance of this function and distributed the policy to all RTC personnel. As shown in table 1, the implementation status for six reforms is action taken/monitoring required. Highlights of some of the actions taken to implement these reforms are listed below. 
RTC updated its comprehensive business plan in August 1994, in part, to ensure that the requirements of the RTC Completion Act were included in the plan. (Reform 1). RTC established a management decision and audit follow-up process that encompasses all efforts to address findings, implement accepted recommendations, and verify completion of corrective actions. (Reform 9). RTC established and filled the position of Assistant General Counsel (AGC) for Professional Liability, who is to manage the investigation, evaluation, and prosecution of all professional liability claims involving RTC and who has since submitted to Congress two semiannual reports that included information on various litigation activities. (Reform 10). RTC established a program to assess the adequacy of its internal controls and issued its annual assessment report on March 31, 1994, identifying internal control weaknesses that needed to be corrected. (Reform 12). RTC ensured that specific senior executive positions were filled. (Reform 13). RTC included in its 1993 annual report information on the expenditure of loss funds and the salaries and other compensation paid to directors and senior executive officers of RTC-controlled thrifts. (Reform 14). The nature of these reforms requires RTC to monitor them so that any future actions they require are initiated when necessary. For example, to maintain the comprehensive business plan required under reform 1, RTC plans to continue to measure its performance against the goals in the plan and make adjustments in the goals as necessary to reflect changing conditions. Also, to maintain effective internal controls as required by reform 12, RTC plans to continue to assess the adequacy of its internal controls and take actions to correct any weaknesses it, its IG, or we identify.
For reform 11, which is in the work in progress category, RTC has implemented a corporate-wide data quality policy requiring program managers to develop data quality action plans. However, RTC has not yet finished its planned enhancements to the primary information systems that support its financial operations and asset disposition activities. RTC expects to complete this work by the end of March 1995. In addition, RTC is reassessing its efforts to improve the quality of data in its information systems to help ensure that these efforts are properly focused on the data most critical to completing its mission. RTC expects to complete this reassessment by the end of March 1995. Additional details on the manner in which RTC proceeded to implement these reforms, as well as their status, are included in appendix III. The three reforms in this category affect the manner in which RTC markets and attempts to dispose of failed thrifts and specific assets under its control. They are intended to ensure that individual acquirers, small investors, and MWOB firms are given sufficient opportunity to participate in RTC’s thrift resolution and asset disposition activities. Table 2 shows the implementation status we determined for each of the three reforms in this category. As shown in table 2, the implementation status for all three reforms is action taken/monitoring required. For reform 2, RTC issued a memorandum to establish a 120-day period to market real property assets on an individual basis before they may be included in any multiasset sales initiative. The memorandum also required written justifications for including these assets in multiasset sales initiatives if they did not sell during the 120-day period. For reform 3, RTC issued a memorandum informing staff of the requirements to prepare written justifications for selling certain nonperforming real estate loans and other real property.
In November 1994, RTC published in the Federal Register its final rule that adopted the policies and procedures for implementing the requirements of reforms 2 and 3. RTC monitors the implementation of these two reforms primarily through its internal control review and program compliance review processes. For reform 17, in July 1994, RTC published in the Federal Register the final rule defining a predominantly minority neighborhood (PMN) as any U.S. postal ZIP code area in which 50 percent or more of the residents are minorities according to the most recent Census data. However, RTC has the discretion to use other data that may indicate more accurate neighborhood boundaries. This rule was the subject of extensive review and debate because its implementation could have a significant effect on the extent to which minority individuals or minority-owned institutions can acquire failed thrifts in PMNs. In addition, RTC established a program that provides minority acquirers of thrifts in PMNs with opportunities to purchase performing 1-4 family mortgage loans. As of February 1, 1995, RTC had sold a total of about $207 million in loans through this program. As required by the RTC Completion Act, we are reviewing RTC’s valuation of loans offered through this program and will report on the results of our review later in 1995. Additional details on the manner in which RTC proceeded to implement these reforms, as well as their status, are included in appendix IV. In this category, we included seven reforms that affect RTC’s contracting activities, including several intended to improve RTC’s contracting system, strengthen its contractor oversight, and ensure that MWOB firms receive sufficient opportunities to obtain RTC contracts. Table 3 shows the implementation status we determined for each of the seven reforms in this category. As shown in table 3, the implementation status for six reforms is action taken/monitoring required. 
Highlights of some of the actions taken to implement these reforms are listed below. In May 1994, RTC issued a policy memorandum that included guidance on basic ordering agreements, which is designed to ensure a thorough review of source lists for prospective RTC contract solicitations. On February 8, 1995, RTC published in the Federal Register its final rule, which, among other things, defines procedures for ensuring that MWOBs and minority- and women-owned law firms (MWOLFs) are not excluded from eligibility for task orders and other contracting activities. (Reform 6). RTC revised the Contracting Policies and Procedures Manual (CPPM) to provide uniform contracting procedures and strengthen contractor oversight. Also, RTC provided additional RTC staff for contracting-related activities, issued additional procedures for the oversight of property management subcontractors, and implemented RTC-wide legal services contracting procedures. (Reform 7). RTC has developed specific sanctions, such as contract suspensions, for violations of MWOB/MWOLF subcontracting and joint venture requirements. On February 8, 1995, RTC published in the Federal Register its final rule, which included these sanctions. (Reform 16). On February 8, 1995, RTC published in the Federal Register its final rule establishing required MWOB and MWOLF subcontracting goals for contracts with fees of $500,000 or more. (Reform 18). RTC has revised the CPPM to incorporate the two requirements of this reform that relate to RTC’s competitive bidding procedures and costs to the taxpayer. (Reform 19). In August 1994, RTC issued revised policies and procedures and implementing guidelines designed to ensure that RTC’s Division of Legal Services hires outside counsel only when the requirements of this reform have been met. (Reform 20). For reform 15, which is in the work in progress category, RTC has developed draft guidelines to achieve the goal of a reasonable distribution of contract awards and fees to each minority subgroup of contractors.
At the time of our interim report, RTC had planned to issue these guidelines by the end of July 1994. According to an RTC official, the guidelines were not issued in July 1994 mainly because RTC’s efforts were focused on developing the final rule that would implement reforms 6, 16, and 18. Now that the final rule has been published (on February 8, 1995), RTC is preparing the parity guidelines, which are scheduled to be issued by the end of March 1995. Additional details on the manner in which RTC proceeded to implement these reforms, as well as their status, are included in appendix V. The establishment of an audit committee was included in Secretary Bentsen’s 9-point plan. The implementation status of this reform, which the RTC Completion Act designated as reform 8, is action taken/monitoring required. By November 1994, three individuals had agreed to serve as members of the audit committee, and the Oversight Board had published a charter that described the duties and responsibilities of the committee. Since the establishment of its charter, the audit committee has held two meetings, one in November 1994 and one in January 1995. Additional details on the manner in which the Oversight Board proceeded to implement this reform, as well as its status, are included in appendix VI. Since our interim report was issued in June 1994, RTC and the Oversight Board have continued to move forward in their actions to implement the 21 management reforms. RTC has completed three reforms and has work in progress on two others. Furthermore, actions have been taken to implement the remaining 16 reforms. While these actions will enable RTC and the Oversight Board to fulfill the reforms’ requirements, monitoring will be needed to ensure full compliance. While RTC has made dramatic progress in reducing its inventory of thrifts and assets, it still had about $31 billion in assets remaining as of November 1994.
As of February 1995, RTC estimated that about $8 billion in assets will be transferred to FDIC when RTC ceases operations in December 1995. Further, RTC will be faced with significant challenges in the task of winding down a large and complex organization with thousands of personnel and billions of dollars in assets while attempting to minimize adverse consequences. These responsibilities will require substantial attention from both RTC’s top management and the Oversight Board. In addition, continued attention to the implementation of the reforms should help ensure that the reforms’ intended benefits are achieved to the fullest extent possible before RTC ceases its operations. At this time, we are not making any recommendations for further legislative or administrative actions. However, we will continue to monitor RTC and Oversight Board activities during the final year of operation and the transfer of RTC activities to FDIC. Generally, RTC officials with whom we discussed this report agreed that it provides a fair and accurate summary of the manner in which RTC was implementing the reforms and the progress it has achieved during the year since the act became law. In addition, RTC officials agreed with our assessment of the implementation status for the 20 RTC reforms. During our discussions, RTC officials provided us with information that updated and clarified their actions in implementing various reforms. We included this information in the report where appropriate. The individuals with whom we discussed the reform implemented by the Oversight Board agreed that the information we included in our report about the audit committee provides an accurate summary of the Oversight Board’s efforts to implement this reform. Also, the individuals agreed that the appropriate implementation status for this reform is action taken/monitoring required. 
We are sending copies of this report to RTC’s Deputy and Acting Chief Executive Officer, the Chairman of the Thrift Depositor Protection Oversight Board, the Chairman of the Federal Deposit Insurance Corporation, and other interested congressional committees and subcommittees. Copies will be made available to others upon request. This report was prepared under the direction of Ronald L. King, Assistant Director, Government Business Operations Issues. Other major contributors to this report are listed in appendix VIII. If you have any questions, please contact me on (202) 736-0479. For reporting purposes, we organized the 21 reforms into 4 categories that reflected the organizational components that would be responsible for taking the implementation actions. These categories are: (1) RTC general management functions; (2) RTC resolution and disposition activities; (3) RTC contracting, including related MWOB activities; and (4) the Oversight Board reform. In the first category—general management functions—we included the 10 reforms that are the responsibility of RTC’s corporate top management. These reforms require RTC to develop and maintain a comprehensive business plan (reform 1); maintain a division of minority and women’s programs (reform 4); appoint a CFO (reform 5); correct problems identified by auditors, including GAO and the RTC IG (reform 9); appoint an AGC for professional liability (reform 10); maintain an effective management information system (reform 11); maintain effective internal controls (reform 12); fill any vacancies that occur in specific senior executive positions (reform 13); itemize specific expenditures for the year, and disclose salaries and other compensation paid during the year to directors and senior executive officers at thrifts under RTC’s control as part of RTC’s annual report (reform 14); and ensure that every field office has a client responsiveness unit (reform 21).
In the second category—resolution and disposition activities—we included the three reforms that are the responsibility of RTC’s Vice Presidents of Asset Management and Sales, and Resolutions. These reforms require RTC to: revise marketing procedures for disposing of real property (reform 2), justify asset disposition methods used to sell certain real property and nonperforming real estate loans (reform 3), and give preference to minority acquirers of thrifts in PMNs (reform 17). In the third category—contracting and related MWOB activities—we included the seven reforms that are the responsibility of RTC’s Vice Presidents for Contracts, Oversight and Evaluation; Minority and Women’s Programs; and Legal Services. These reforms require RTC to revise contracting procedures for basic ordering agreements to ensure that small businesses and MWOBs are not inadvertently excluded (reform 6); maintain procedures and uniform standards for contracting with private contractors and overseeing contractors’ and subcontractors’ performance (reform 7); establish guidelines for achieving the goal of a reasonably even distribution of contracts awarded and fees paid to various MWOB and MWOLF subgroups (reform 15); prescribe regulations specifying sanctions, including contract penalties and suspensions, for subcontracting and joint venture violations (reform 16); set procedures and goals for MWOB and MWOLF subcontracting (reform 18); ensure that, in awarding competitively bid contracts, procedures used are no less stringent than those in effect when the RTC Completion Act became law in December 1993 (reform 19); and improve the management of legal services (reform 20). The fourth category contains a single reform that requires the Oversight Board to establish an audit committee to monitor and advise RTC on its efforts to improve internal controls and implement audit recommendations. The Oversight Board is responsible for implementing this reform. (Reform 8.) 
As shown in table I.1, RTC and the Oversight Board have made progress in implementing the management reforms since our interim report was issued in June 1994. (Table I.1: Progress in Implementing the Management Reforms Since the Interim Report Was Issued in June 1994. The reform numbers in the table are those from the RTC Completion Act; see apps. III through VI.) Our objectives, as set forth in the RTC Completion Act, were to determine (1) the manner in which RTC and the Oversight Board were implementing the 21 management reforms mandated by the act and (2) the progress being made by RTC and the Oversight Board toward achieving full compliance. The act required that we issue an interim report with our preliminary findings 6 months after the RTC Completion Act became law in December 1993, and a final report. To accomplish these two objectives, we reviewed RTC’s management reform status reports to identify actions taken to implement the reforms’ requirements. After identifying the actions, we interviewed responsible RTC officials and Oversight Board staff to obtain information on the status and progress being made in implementing them. The officials we interviewed were in the following RTC headquarters divisions: Administration; Asset Management and Sales; Contracts, Oversight and Evaluation; Resolutions; CFO; Legal Services; and Minority and Women’s Programs. We also interviewed RTC officials in the Department of Information Resources Management (DIRM); Office of Planning, Research and Statistics; and Office of IG. Also, we interviewed field office officials in Atlanta; Dallas; Denver; Kansas City; Newport Beach, CA; and Valley Forge, PA, to verify the status and progress of the actions being implemented at field locations. We reviewed supporting documents for evidence that planned actions had been completed, as well as recently issued reports by RTC’s IG covering the management reform areas.
We also monitored the monthly Oversight Board meetings at which RTC reported its progress in implementing the reforms. To determine whether internal control corrective actions had been completed as reported, we randomly selected 50 of 191 completed actions and reviewed the supporting documentation. Further, we used our other ongoing work at RTC to verify that 27 additional actions had been completed. On the basis of information obtained from RTC and the Oversight Board, each reform was classified into one of the following three status categories: (1) work in progress (i.e., some planned actions have been implemented and others are under way); (2) action taken/monitoring required (i.e., planned actions have been taken to fulfill the requirements of the reform, but monitoring is needed to ensure full compliance); and (3) action completed (i.e., all planned actions have been implemented). From January 18 through January 31, 1995, we discussed a draft of this report with RTC and the Oversight Board. Specifically, we discussed the detailed information on each of the 20 RTC reforms with the RTC senior officials responsible for implementing these reforms or their designated representatives. For the Oversight Board reform, we discussed detailed information with the individual on the Oversight Board staff who is responsible for monitoring the implementation of the reform. In addition, on February 7, 1995, we discussed the contents of the draft report with representatives from RTC’s Office of the CFO and Office of Planning, Research and Statistics, who are responsible for tracking RTC’s progress in implementing the reforms. These individuals agreed that the information in the draft report provided a fair and accurate summary of the manner in which RTC and the Oversight Board implemented the reforms and the progress they made to achieve full compliance. Also, these individuals agreed with our determinations of the implementation status for each of the 21 reforms.
We included their comments where appropriate throughout the report. Requirements of the Reform: This reform requires that RTC establish and maintain a comprehensive business plan covering RTC’s operations, including the disposition of assets, for the remainder of its existence. RTC developed a comprehensive business plan that set forth the major goals to be achieved during the remainder of its existence. The plan was submitted to Congress on December 15, 1993. It established the following six goals for RTC to strive for in completing the thrift cleanup. Minimize losses on resolutions of failed thrifts. Maximize recoveries from asset disposition while minimizing the impact on local markets and preserving the availability of affordable housing. Maximize opportunities for minorities and women in all RTC activities. Strengthen safeguards against waste, fraud, and mismanagement. Pursue professional liability cases on a cost-effective basis and refer criminal cases to the Department of Justice. Terminate RTC operations and transfer personnel, assets, and systems to FDIC by December 31, 1995. Depending on RTC’s accomplishments, the business plan is to be revised where needed. RTC’s Office of Planning, Research and Statistics is responsible for maintaining the business plan and updating it as circumstances warrant. In June 1994, RTC provided the Oversight Board with a new report containing detailed information on the extent to which RTC was achieving the plan’s goals. This report, which is to be prepared quarterly, is used to monitor RTC’s performance against the plan. For example, as shown in Figure III.1, the 1994 quarterly reports include information comparing RTC sales and collections goals with actual results. (Footnotes: Refers to section 21A of the Federal Home Loan Bank Act, which was amended by section 3 of the RTC Completion Act. The reforms in appendixes III through VI are numbered as they are in the RTC Completion Act.)
(Figure III.1 presents these comparisons in billions of dollars, as cumulative figures.) In August 1994, RTC issued an updated business plan. The revised plan incorporated the requirements of the RTC Completion Act management reforms that were not included in the original plan. For example, RTC changed its asset disposition priorities for performing 1-4 family mortgage loans to include the minority preference resolutions program. Also, asset sales projections were updated. For example, for 1994, total projected book value reductions from sales and collections increased from $35.7 billion to $43.8 billion and, for 1995, decreased from $15.2 billion to $12.1 billion. The underlying economic assumptions and annual asset sales goals in the revised plan generally appear to be reasonable. However, as discussed in our report entitled Resolution Trust Corporation: Data Limitations Impaired Analysis of Sales Methods (GAO/GGD-93-139, Sept. 27, 1993), RTC lacks consistent and comprehensive sales and related financial data for individual asset dispositions and therefore cannot accurately measure the effectiveness of its sales strategies. Requirements of the Reform: This reform requires that RTC maintain a division of minority and women’s programs. Also, RTC is required to establish the head of this division as a vice president and member of RTC’s Executive Committee. This reform was fully implemented before the RTC Completion Act became law. In April 1993, RTC elevated the Assistant Vice President of the Department of Minority and Women’s Programs to Vice President and raised the program to the division level as the Division of Minority and Women’s Programs. As a Vice President, the head of the division serves on RTC’s Executive Committee.
In addition, the CFO is to have authority and duties, similar to those under the Chief Financial Officers Act of 1990, that the Oversight Board determines to be appropriate for RTC. This reform was implemented before the RTC Completion Act became law. On June 1, 1993, RTC appointed a CFO who reports directly to RTC’s CEO and is responsible for all RTC accounting and financial management activities. Along with this appointment, RTC consolidated various accounting and financial management functions into a division headed by the CFO and placed specific units under the CFO’s direction. These units included the offices of Budget and Planning, Management Control, Field Accounting and Asset Operations, and Accounting Services. Also, the financial service centers at the four main RTC field offices in Atlanta, Dallas, Denver, and Kansas City report directly to the CFO. In addition, the CFO made changes to enhance RTC’s efforts to strengthen and improve internal control systems. These changes included the following: Developing and implementing systems to monitor ongoing audits, assuring appropriate monitoring and reporting to management of findings related to internal control systems, and tracking the progress of timely and effective corrective actions. Setting up quality assurance units in the financial service centers with direct reporting responsibility to the Vice Presidents, who in turn report to the CFO. Allocating additional resources to the internal control function in order to assure that the commitment to improve and strengthen internal control is achieved. Developing and presenting a required nationwide internal control training program for all RTC management personnel. In our report, Resolution Trust Corporation: Status of Management Efforts to Control Costs (GAO/GGD-94-19, Oct.
28, 1993), we recommended that RTC support its newly appointed CFO in efforts to control costs, strengthen the use of the budget process as a fiscal control tool, and improve the usefulness of expense accounting information so it could be used as a managerial tool. In response to our recommendations, the CFO was given clear authority over all agency financial functions, including cost control, and several financial integrity initiatives were implemented. In March 1994, the CFO informed us that RTC estimated that its efforts, up to that date, in implementing our cost control audit recommendations had resulted in cost savings of about $30 million in the operations of three financial service centers. These savings were achieved by renegotiating with contractors for better rates, consolidating and standardizing contracts, as well as improving centers’ operational efficiencies. In September 1994, the CFO advised us that RTC had strengthened its budget process to better control and reduce expenses. Due in part to measures implemented to control expenses, RTC’s spending against its 1994 budget of $2.64 billion for noninterest expenses was about $2.20 billion, or 17 percent ($437 million) under budget. Furthermore, the CFO’s operating philosophy was designed to improve RTC’s responsiveness to audit findings in general. This operating philosophy consists of the following: Encouraging positive and concise responses to audit findings and recommendations. Utilizing audit findings to assist in managing RTC. Making a strong commitment to taking corrective actions for improvements. Encouraging external audit entities to report issues to RTC management for early resolution of control weaknesses or cost recovery. Maintaining a strong audit control and follow-up system. 
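The 1994 budget variance cited above can be reproduced with simple arithmetic. A minimal sketch in Python, using the dollar figures from the text (in millions; the $2,203 million actual-spending figure is inferred from the "about $2.20 billion" and "$437 million under budget" amounts quoted above):

```python
# 1994 noninterest-expense budget versus actual spending (figures from
# the text, in millions of dollars).
budget = 2_640   # about $2.64 billion budgeted
actual = 2_203   # about $2.20 billion spent (inferred from the text)

under_budget = budget - actual
pct_under = round(under_budget / budget * 100)

print(under_budget)   # 437 (million dollars under budget)
print(pct_under)      # 17 (percent under budget)
```

Both results agree with the report's "$437 million" and "17 percent" figures.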
Requirements of the Reform: This reform requires RTC to respond to problems identified by auditors of its financial and asset disposition operations, including problems identified in IG, GAO, and the Oversight Board’s audit committee reports; or to certify to the Oversight Board that no action is necessary or appropriate. Under Secretary Bentsen’s 9-point plan, RTC was directed to implement a system—such as is required under Office of Management and Budget guidelines for executive agencies—to provide prompt, systematic, and effective follow-up on the findings and recommendations contained in the audit reports. As of December 31, 1994, GAO, IG, and RTC’s Office of Contractor Oversight and Surveillance (OCOS) had issued a combined total of 835 audit reports, as shown in Figure III.2. At the beginning of October 1994, the three audit organizations collectively had 475 audits under way. In addition, the IG had plans for another 125 audits and OCOS had plans for another 250 audits for the 15-month period from October 1994 through December 1995. To strengthen its audit resolution controls, on July 20, 1993, RTC issued Circular 1250.2, Management Decision Process and Audit Followup. This directive established a new audit follow-up system for all internal and external reviews and other evaluations of RTC organizations, programs, operations, and contractors. The management decision and audit follow-up process encompasses all efforts taken by RTC to address findings, implement accepted recommendations, and verify completion of corrective actions. RTC’s process incorporates, as appropriate, the concepts of Office of Management and Budget Circular A-50 on audit follow-up, although, as a mixed-ownership government corporation, RTC is not required to follow this circular.
The audit follow-up system RTC has installed requires it to maintain records on the status of audit reports and associated recommendations, track management decisions and final actions, establish accounting controls over amounts due RTC from contractors as a result of costs disallowed by management, and provide periodic reports to RTC senior management and the Oversight Board. The audit follow-up directive states that RTC managers at all levels will ensure completion of corrective actions and submission of required supporting documentation in a timely manner. Those managers responsible for taking corrective actions are required to complete and sign an “Audit Follow-up Action Certification Statement” certifying that all necessary corrective actions have been taken and all necessary documentation has been obtained. In March 1993, when the 9-point plan was announced, RTC did not know the total number of audit recommendations that were still open, from all sources, that had to be addressed. Since then, RTC has placed a high priority on identifying and tracking GAO and IG audit recommendations and corrective actions. During 1994, RTC expanded its focus to include OCOS recommendations resulting from OCOS’ contract audits. As of December 17, 1993, when the RTC Completion Act became law, RTC data indicated that it had completed about 95 percent (1,438 of 1,511) of the actions to implement GAO and IG audit recommendations. This percentage does not include actions taken on OCOS recommendations because, at the time, RTC was not tracking these actions. However, during 1994, RTC expanded the scope of its audit follow-up system to include OCOS findings, recommendations, and planned corrective actions. As of January 23, 1995, the percentage of completed corrective actions to implement GAO, IG, and OCOS audit recommendations was about 76 percent (3,485 of 4,587). 
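The record-keeping described above can be sketched as a simple per-recommendation data structure. The field names and class below are our own illustrative assumptions, not RTC's actual system or schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Recommendation:
    """One audit recommendation as an audit follow-up system might track it.

    Hypothetical fields mirroring the report's description: report status,
    management decisions, corrective actions, and accounting control over
    amounts due RTC from contractors for disallowed costs.
    """
    report_id: str
    issued: date
    source: str                                  # "GAO", "IG", or "OCOS"
    management_decision: Optional[date] = None   # None = still unresolved
    corrective_actions: list[str] = field(default_factory=list)
    actions_completed: int = 0
    disallowed_costs: float = 0.0                # amounts due RTC from contractors

    def is_complete(self) -> bool:
        """A decision has been made and every corrective action is done."""
        return (self.management_decision is not None
                and self.actions_completed == len(self.corrective_actions))

rec = Recommendation("IG-94-123", date(1994, 6, 1), "IG",
                     management_decision=date(1994, 9, 1),
                     corrective_actions=["renegotiate contract rates"],
                     actions_completed=1)
print(rec.is_complete())               # True
# The completion rate RTC reported as of January 23, 1995:
print(round(100 * 3485 / 4587))        # 76
```

Such records also support the periodic reporting to senior management and the Oversight Board that the directive requires.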
This decrease is due primarily to the substantial increase in the number of audit reports issued by the IG and OCOS during 1993 and 1994. Table III.1 shows the status of corrective actions on GAO, IG, and OCOS recommendations, as of January 23, 1995. The data in table III.1 do not include audit recommendations for which a management decision has not been made. RTC refers to these recommendations as “unresolved management decisions.” These are situations where RTC management has not yet committed to implementing a specific audit recommendation or agreed upon the specific actions to be taken. RTC’s policy is to make a final management decision on addressing an audit recommendation as soon as possible, but not later than 180 days after the date of the final audit report. Corrective actions are to begin as soon as practical once the final management decision is made. Figure III.3 summarizes the number and age of unresolved management decisions on GAO, IG, and OCOS recommendations as of January 23, 1995. Although there are a number of instances for which RTC management and the auditors have not agreed upon specific actions to be taken to implement audit recommendations, RTC has been working to reduce the number of unresolved management decisions. However, it still has a high number of recommendations for which RTC has not reached agreement with the auditors. As of January 23, 1995, the total number of unresolved management decisions was 703. This condition is primarily the result of 225 audit reports issued by OCOS in 1994. As shown in Figure III.3, nearly all of the unresolved management decisions that exceed RTC’s goal of 180 days, as of January 23, 1995, were on OCOS recommendations (234 of 254). Our analysis showed that 86 of these recommendations had been unresolved for over 540 days, or 3 times RTC’s goal. The oldest were two recommendations from a report issued November 14, 1991, that had been unresolved for 1,166 days. 
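The aging figures above are straightforward date arithmetic. A minimal sketch, under the assumption that age is simply calendar days since the final audit report, of classifying unresolved management decisions against RTC's 180-day goal:

```python
from datetime import date

GOAL_DAYS = 180  # RTC's goal for reaching a final management decision

def age_in_days(report_date: date, as_of: date) -> int:
    """Days a recommendation has been unresolved since the final audit report."""
    return (as_of - report_date).days

def flag(report_date: date, as_of: date) -> str:
    """Classify an unresolved management decision against the 180-day goal."""
    age = age_in_days(report_date, as_of)
    if age > 3 * GOAL_DAYS:      # over 540 days: three times the goal
        return "severely overdue"
    if age > GOAL_DAYS:
        return "over goal"
    return "within goal"

# The two oldest recommendations cited in the text came from a report
# issued November 14, 1991, measured as of January 23, 1995:
as_of = date(1995, 1, 23)
oldest = date(1991, 11, 14)
print(age_in_days(oldest, as_of))  # 1166 days, matching the 1,166 cited
print(flag(oldest, as_of))         # severely overdue
```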
GAO’s recommendation tracking system differs from RTC’s system. GAO’s system tracks recommendations closed while RTC’s system tracks corrective actions completed. As of January 23, 1995, GAO’s tracking system showed that 88 of 120 (73 percent) of the recommendations that we have made to RTC since January 1990 were closed. Thirty-two (27 percent) of our recommendations were still open. These recommendations are listed in appendix VII. Figure III.4 shows the status of GAO recommendations as of January 23, 1995: open—action in process (23), open—management agreement not reached (9), closed—no action intended (5) or no longer applicable (5), closed—action not fully responsive (12), and closed—action taken (66), for a total of 32 open and 88 closed recommendations. Audit reports issued by the IG and OCOS often include questioned costs associated with the activities they reviewed. None of GAO’s audit reports questioned specific costs. Table III.2 shows the status of IG and OCOS questioned costs, as of January 19, 1995. Of the $240 million of total questioned costs identified by the IG and OCOS, RTC management has agreed to pursue $85 million. Also, in taking action to address audit findings, RTC management identified an additional $23 million of questioned costs, which raises the total amount being pursued from $85 million to $108 million. Table III.3 shows the status of management’s pursuit of the questioned costs, as of January 19, 1995. In January 1995, RTC reported to the Oversight Board Audit Committee that it had recovered $55 million of the $240 million identified by the IG and OCOS as questioned costs. Reform 9 also requires RTC to notify the Oversight Board when no action is needed or appropriate in response to an audit recommendation. In such instances, RTC’s procedures require the CFO, on behalf of the CEO, to certify accordingly to the Oversight Board. RTC has reviewed all of its GAO and IG audit resolution actions since December 17, 1993.
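The questioned-cost figures above reduce to simple arithmetic, which can be checked directly (variable names are ours, not RTC's):

```python
# Questioned costs from IG and OCOS audit reports, in millions of dollars,
# using the figures cited above.
identified_by_auditors = 240
agreed_to_pursue = 85
added_by_management = 23    # found by RTC while acting on the audit findings
recovered = 55

total_pursued = agreed_to_pursue + added_by_management
print(total_pursued)                                    # 108
print(round(100 * recovered / identified_by_auditors))  # 23 percent recovered
```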
On November 16, 1994, the CFO informed the Oversight Board that RTC field office vice presidents and senior headquarters managers have determined and certified that in certain instances no action was required on 2 GAO and 57 IG recommendations. Such circumstances occurred, for example, when a property was sold, a former contractor was no longer in business, or the estimated cost of litigation or other recovery attempts would exceed the potential recovery amount. We concur with RTC’s decisions on the two GAO recommendations. In each case, we agreed that implementing the recommendation was not feasible. While RTC has completed actions to establish an audit follow-up system, RTC plans to continue monitoring audit resolution activities with this system to ensure that (1) as many recommendations as feasible are fully implemented prior to RTC’s termination and (2) any open recommendations, which are still valid at that time, such as those related to questioned contract costs, are transferred to FDIC for final action. RTC plans to focus special attention on recommendations in contract audit reports issued by OCOS and the IG in the final year of RTC operation. Requirements of the Reform: The reform requires RTC to appoint, within the Division of Legal Services, an AGC for Professional Liability. The AGC is to (1) direct the investigation, evaluation, and prosecution of all professional liability claims involving RTC and (2) supervise all legal, investigative, and other personnel and contractors involved in the litigation of such claims. Also, the AGC is required to semiannually submit to Congress a comprehensive litigation report on all civil actions in which RTC is a party that were initiated or pending during the period covered by the report and on other activities of the AGC. These reports are due on April 30 and October 31 of each year. 
By the time the RTC Completion Act became law, the position for an AGC for Professional Liability had already been established and filled. Subsequently, the AGC was given the responsibilities of the statutory position and actions were completed to implement the mandated organizational changes and fulfill the semiannual reporting requirements. RTC plans to continue monitoring the results of these actions to ensure that (1) a unified legal and investigative team is maintained and (2) the semiannual reports on the professional liability program are submitted to Congress as required. At the time that the act became law, RTC’s investigators and its attorneys were in two different organizational units. RTC’s AGC for Professional Liability believes that this reform’s intent is to ensure that RTC professional liability personnel, including investigators and attorneys, operate as a fully unified legal and investigative team, able to make decisions and recommendations on professional liability issues in a coordinated manner. RTC took its first formal step toward implementing these organizational changes when RTC’s General Counsel issued a memorandum dated March 25, 1994. The memorandum informed affected RTC staff that the reform required a unified management structure for the professional liability program and the incorporation of the Investigations Unit into the Legal Services Division. In May 1994, RTC’s Acting CEO and its General Counsel each signed an organization chart that showed the Office of Investigations to be a unit within the Division of Legal Services. During April, May, and June, a series of delegations of authority were issued to further implement the organizational changes. On July 18, 1994, a memorandum issued jointly by RTC’s AGC for Professional Liability and the Director of its Office of Investigations restated and redefined the roles and responsibilities of RTC’s Professional Liability Section and its Office of Investigations. 
These actions provided the framework for implementing the required changes. RTC plans to continue monitoring and evaluating the effectiveness of these organizational changes, and if additional actions are needed, they are to be taken in order to assure a complete unification of the legal and investigative team. On October 31, 1994, RTC submitted to Congress its second semiannual report for the period ending September 30, 1994. It contained information on initiated and pending civil actions, program achievements, and impediments to RTC’s ability to assert claims. In addition, the second semiannual report noted that “the managerial reforms required by the [RTC Completion] Act have been fully implemented.” Requirements of the Reform: This reform requires RTC to maintain an effective management information system capable of providing complete and current information to the extent that the provision of such information is appropriate and cost-effective. Secretary Bentsen’s March 1993 9-point plan included a reform that required RTC to improve its management information systems. At that time, RTC established three objectives to implement this reform: (1) improve the quality of data in its systems, (2) enhance information systems to support business needs, and (3) improve information provided to senior executives for decisionmaking. When the RTC Completion Act became law in December 1993, it included a similar reform that required RTC to maintain a management information system capable of providing complete and current information. To implement the act’s reform, RTC decided to address only the first two objectives that it initially established to address the reform under Secretary Bentsen’s plan. According to officials in DIRM, the third objective was dropped because RTC’s senior executives had not identified any information needs that would require systems’ modifications. 
RTC’s information systems remain critical to its efforts to manage and sell failed thrift assets and to FDIC’s task of assuming responsibility for any remaining RTC operations after December 31, 1995. In the past, RTC’s information system problems included unclear or changing requirements, poor response time, difficulty of use, and inaccurate and incomplete data. Over the last 2 years, RTC has made many improvements. Its system requirements are now better defined, and it has completed all of its system development projects. In addition, it has modified its systems to improve response times and make them easier to use. Accurate and complete information is still critical to RTC’s ability to efficiently and effectively dispose of assets. Poor information can increase the uncertainty faced by investors and, therefore, may reduce the prices that they are willing to pay for RTC’s assets. In June 1994, RTC completed initial data quality action plans for its 17 critical information systems. RTC uses these 17 systems to manage unsold assets, support financial transactions, and report on activities in which congressional oversight committees have had significant interest. A major component of RTC’s strategy to improve the quality of data in these systems is the use of computer software to identify problems such as missing or inconsistent data. While RTC is making progress in improving the quality of data in its systems, some data quality problems continue. On November 30, 1994, RTC had unsold real estate with a total book value of about $2 billion and unsold loans with a total book value of about $17 billion. RTC’s December 1994 internal reports showed that about 9 percent of unsold real estate records in the Real Estate Owned Management System (REOMS) had computer detectable errors, such as missing data, and about 19 percent had potential errors called warnings. For example, a large discrepancy between the book value and appraised value of an asset is called a warning. 
Warnings require follow up to determine whether the questionable data is correct. Also, RTC reports showed data quality improvements in the Central Loan Database (CLD), which includes information on loans and which is used to help develop loan sales initiatives. As of October 1994, the number of loan records with one or more computer detectable errors was about 19 percent compared to 57 percent when we analyzed the CLD data in December 1993. Although RTC is continuing its data quality program, RTC officials stated that further reductions in the percentage of computer detectable errors in both REOMS and CLD will be difficult to achieve, and errors may increase over the next several months. Officials gave three reasons for this view: (1) as asset sales occur, those assets for which there is deficient data are more likely to remain unsold and become an increasing percentage of the total loan portfolio or real estate property inventory; (2) much of the deficient data predates 1992 and is either unavailable or not easily accessible; and (3) as RTC reduces staffing levels, there will be fewer resources to research potential data errors. In addition, with fewer resources, it will become increasingly difficult to ensure that data errors are corrected. For these reasons, RTC is reassessing its efforts to improve the quality of data in the 17 major systems to help ensure that these efforts are properly focused on the data most critical to completing its mission. Its goal is to target critical data elements that, if not correct, could have a significant negative impact on the management of assets or the accuracy of information reported to oversight committees. This reassessment is expected to be completed by the end of March 1995. We agree with this approach in RTC’s final year of existence. 
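The error-versus-warning distinction above lends itself to simple automated edit checks. The sketch below is an illustration of that kind of software, with hypothetical record fields and thresholds; it is not RTC's actual REOMS or CLD edit logic:

```python
# Hypothetical data quality checks: missing required data is a definite
# error; a large book/appraised value discrepancy is only a warning that
# requires follow-up to determine whether the data is correct.

REQUIRED_FIELDS = ("asset_id", "book_value", "appraised_value")
DISCREPANCY_RATIO = 2.0  # assumed threshold for a "large" discrepancy

def check_record(record: dict) -> tuple[list[str], list[str]]:
    """Return (errors, warnings) for one asset record."""
    errors, warnings = [], []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            errors.append(f"missing {field}")
    book = record.get("book_value")
    appraised = record.get("appraised_value")
    if book and appraised:
        ratio = max(book, appraised) / min(book, appraised)
        if ratio > DISCREPANCY_RATIO:
            warnings.append("large book/appraised value discrepancy")
    return errors, warnings

errors, warnings = check_record(
    {"asset_id": "A-1", "book_value": 900_000, "appraised_value": 150_000}
)
print(errors)    # []
print(warnings)  # ['large book/appraised value discrepancy']
```

Running checks like these over every record is what lets RTC report the percentage of records with computer detectable errors or warnings.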
The ultimate value of RTC’s efforts, however, depends on its ability to complete the implementation of the data quality action plans in time to affect current operations and on RTC’s ability to sustain improvements in data quality. By concentrating on the most critical data elements that are important to managing and selling assets, RTC should make the best use of its efforts. In addition, the benefits of better data should also help FDIC when it assumes responsibility for those assets that remain to be sold after RTC’s termination. Furthermore, RTC’s ongoing need for up-to-date, accurate, and complete corporate information is intensified by its need for information to support appropriate short-term business decisions, given that RTC’s responsibilities will soon transfer to FDIC. The Secretary of the Treasury, in his capacity as Chairman of the Oversight Board, will need similar information to carry out his responsibility for overseeing the transfer of RTC personnel and systems to FDIC, as required under section 7 of the RTC Completion Act. This section requires that in the transfer of RTC systems to FDIC, any RTC management, resolution, or asset disposition system that the Secretary of the Treasury determines, after considering the recommendations of the interagency RTC/FDIC transition task force, has benefited RTC shall be transferred to and used by FDIC. Also, section 7 requires that RTC personnel involved with these systems who are eligible for transfer to FDIC shall be transferred for continued employment. In this area, RTC has begun working with FDIC to identify systems and data that could be transferred to FDIC as it picks up responsibility for RTC’s activities. Under the second objective, RTC is selectively enhancing its primary information systems that support its financial operations and asset disposition activities. A total of 11 enhancements are under way or have been completed for 4 primary systems at an estimated cost of about $1 million. 
RTC expects this work to be completed by the end of March 1995. The systems to be enhanced are the (1) Control Totals Module, which is used to post summary asset-related financial transactions to the general ledger; (2) Warranties and Representations Accounts Processing System, which tracks information for each asset sale that includes a representation and warranty; (3) Seller Financing System’s Commercial and Multi-Family module, which maintains data RTC needs to close on loans secured by commercial real estate properties; and (4) Asset Manager System, which is a cash management system that captures all income and expenses associated with RTC assets managed by Standard Asset Management and Disposition Agreement (SAMDA) contractors. Although RTC dropped the third objective—to improve information to senior executives for decisionmaking—RTC officials told us that the needs of senior executives continue to be considered as they implement the second objective of enhancing systems to support business needs and modify management information reports. Our interim report noted that we believed that the third objective was still relevant because of RTC’s ongoing need for up-to-date, accurate, and complete information, especially in light of the pending transition of RTC responsibilities to FDIC. In response to our concern, in November 1994, DIRM completed a survey to determine whether there were any unmet senior management reporting needs. The survey results showed that RTC managers were generally pleased with the information systems and the reports available to them. Requirements of the Reform: This reform requires RTC to maintain effective internal controls designed to prevent fraud, waste, and abuse; identify any such activity should it occur; and promptly correct any such activity.
On March 27, 1992, RTC issued Circular 1250.1, Internal Control Systems, that established its internal control program and requires managers to (1) identify activities or functions (assessable units) subject to risk; (2) conduct an assessment and rate the susceptibility of the function or activity to risk (vulnerability assessment); (3) schedule high-risk functions for annual examination (management control plan); (4) conduct a detailed examination (internal control review) of the function to determine if internal controls and procedures are current, adequate, and cost-effective; and (5) develop and implement corrective actions to resolve deficiencies and strengthen controls. Due to the high cost of resolutions and the volume of the assets under its control, RTC needs a strong internal control structure to protect against loss and provide accurate reporting. To address this need, RTC has implemented procedures to assess the effectiveness of its internal controls, to report the results of that assessment, and to track the status of weaknesses identified by the internal process, as well as those identified by GAO and RTC’s IG. RTC also trained more than 1,000 managers and senior personnel in the concepts of RTC’s internal control system and the new audit follow-up procedures. On March 31, 1994, RTC issued its third annual report on its system of internal controls as of December 31, 1993. RTC reported that during 1993 it had stepped up its efforts to correct internal control deficiencies in all of its high-risk areas. Specifically, it reported that additional staff and contractor support resources were acquired and dedicated to correcting previously identified material weaknesses and nonconformances, increasing contractor oversight, and completing development and implementation of needed information systems and information system modifications. The report identified five high-risk areas in its operations.
These areas were: (1) contracting systems/systems oversight; (2) accounting, financial management and reporting, and operations; (3) asset management and disposition; (4) information systems management; and (5) legal services. RTC stated in the report that during 1993 it had completed 191 of the 223 actions planned to correct material weaknesses and material nonconformances, which had been identified in 1993 and prior years, as shown in table III.4. RTC expects to complete the remaining 32 planned actions on material weaknesses and material nonconformances during 1994. We tested these results to determine whether the actions indicated as completed had actually been accomplished. We randomly selected 50 of the 191 actions RTC reported it had completed during 1993. RTC provided documentary evidence for 44 of the 50 actions showing that the planned actions had been completed. For the other six actions, RTC did not have adequate supporting documentation in its files, although we have no evidence that indicates that the actions were not completed. Furthermore, on the basis of work done and documentation gathered on other assignments, we confirmed the completion of 27 additional planned actions not included above. Also, our work showed that one action, which RTC reported as completed, had not corrected the targeted internal control weakness. RTC reported that, as of December 1993, suspense items were being cleared within 60 days. However, although RTC’s clearance of suspense items had improved, our 1993 financial audit work showed that cash items were not always posted within 60 days. Subsequently, RTC improved its performance. Current RTC reports show that as of November 1994, 97 percent of the items placed in suspense are being posted within the 60-day goal. Requirements of the Reform: Under this reform, the failure to fill any positions established by section 21A of the Federal Home Loan Bank Act (12 U.S.C.
1441a) or any vacancy in any such positions, is to be treated as a failure to comply with the requirements of the management reforms. RTC is required to ensure that any vacancies in these senior level positions are filled. If additional RTC funding in excess of $10 billion is needed, the Secretary of the Treasury must certify that RTC has taken action necessary to comply with the requirements of the management reforms or is making adequate progress towards full compliance. By appointing individuals to the positions identified in section 21A of the Federal Home Loan Bank Act, RTC has fulfilled the initial requirements of this reform. However, RTC officials recognize—and we agree—that oversight must be maintained so that if a vacancy occurs in any of these positions, appropriate steps can be taken to quickly appoint replacements. Through December 31, 1994, the positions required by this reform remained filled. Requirements of the Reform: This reform requires RTC to include in its annual report an itemization of specific expenditures during the year covered by the report. Also, the annual report is to disclose salaries and other compensation paid during the year to directors and senior executive officers at any thrift for which RTC was appointed conservator or receiver. As part of its 1993 annual report, which was issued in September 1994, RTC included information on (1) the failed thrifts resolved during 1993 and the amount of loss funds used for each resolution transaction and (2) the salaries and other compensation paid to senior executive officers at all the thrifts that were in RTC’s conservatorship program during 1993. The report showed that no compensation was paid to directors of thrifts in conservatorship because RTC did not retain any of the directors. Also, RTC did not appoint new directors for these thrifts. Furthermore, thrifts in receivership do not have directors or officers and therefore, no disclosure of salaries and other compensation is required. 
RTC plans to ensure that similar information is included in its 1994 and 1995 annual reports. Requirements of the Reform: This reform requires RTC to ensure that every RTC regional office has a client responsiveness unit responsible to the RTC’s ombudsman. According to the RTC ombudsman, the client responsiveness program was established in July 1992. The purpose of the program was to (1) ensure that RTC employees responded to inquiries, complaints, and requests for general assistance from the public—whom RTC generally refers to as clients—in a timely and accurate manner and (2) provide resolutions to such inquiries, complaints, and requests that would be equitable to both the client and RTC. To implement the reform, RTC updated its policy directive on the client responsiveness program. In August 1994, RTC’s Deputy and Acting CEO distributed the updated directive to all RTC employees. According to the RTC ombudsman, this action was taken to reinforce the importance of the program and ensure that all RTC employees were aware of the standardized procedures for responding to client inquiries and complaints. In distributing the updated directive, RTC’s Deputy and Acting CEO also highlighted how the program was designed to ensure that RTC would be as responsive as possible to the public, in keeping with the recommendations of the National Performance Review that identified ways in which government agencies can improve their methods for dealing with and responding to the public. 
To track its workload under the client responsiveness program, RTC set up three categories of contacts it receives: (1) general assistance, which includes requests that can be resolved and answered quickly and do not require research or consultation with other RTC personnel, such as requests for directions to an RTC office; (2) inquiries, which include questions or requests for assistance from clients that take more time to resolve than do general assistance requests because they require some research or consultation with other RTC personnel, such as questions about the disposition of a specific asset; and (3) complaints, which involve clients who are dissatisfied or have expressed grievances in dealing with RTC. According to RTC, during the period June 1994 through December 1994, RTC received a total of 19,300 general assistance requests, inquiries, and complaints. Figure III.5 shows the breakdown of these three categories of client contacts during this period: general assistance requests (6,278), inquiries (11,200), and complaints (1,822, or about 9 percent). The RTC ombudsman oversees the client responsiveness program by requiring that monthly reports be prepared to provide information on the extent of client responsiveness activities in RTC headquarters and the six field offices. The reports include such data as the number of general assistance requests, inquiries, and complaints received and the number of inquiries and complaints resolved. Because general assistance requests are resolved in a single telephone contact, RTC does not maintain statistics on the time it takes to resolve such requests. However, because inquiries and complaints require additional research, RTC keeps track of the length of time it takes to resolve them. The updated client responsiveness directive dated August 5, 1994, included a time standard of 15 business or working days for resolving clients’ inquiries and complaints.
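A 15-business-day standard implies counting only weekdays between receipt and resolution. A minimal sketch of that computation, under the assumption that weekends are skipped and holidays are ignored (the directive's exact counting rules are not quoted here):

```python
from datetime import date, timedelta

STANDARD_DAYS = 15  # business days allowed to resolve an inquiry or complaint

def business_days(received: date, resolved: date) -> int:
    """Count weekdays from the day after receipt through the resolution date."""
    days, current = 0, received
    while current < resolved:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
    return days

# Hypothetical example: an inquiry received Thursday, December 1, 1994,
# and resolved Monday, December 19, 1994, took 12 business days,
# within the 15-day standard.
elapsed = business_days(date(1994, 12, 1), date(1994, 12, 19))
print(elapsed, elapsed <= STANDARD_DAYS)  # 12 True
```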
In the monthly reports, RTC includes data on the average time it takes to resolve inquiries and complaints. This figure varies from month to month, depending on the number of inquiries and complaints received and resolved and their complexity. Most recently, in December 1994, the average resolution time for inquiries and complaints was about 12 business days. According to the RTC ombudsman, complaints generally comprise the smallest percentage of the three types of client contacts that RTC receives. During the period June 1994 through December 1994, the complaints most often involved client concerns about (1) information on RTC-controlled assets, (2) performance by RTC contractors, and (3) communications with RTC. Since the RTC Completion Act became law, RTC has ensured that all its field offices had client responsiveness units. Also, the RTC ombudsman has provided policy guidance and direction to the managers of the client responsiveness departments in the six field offices and ensured that the program is administered consistently. Requirements of the Reform: This reform established requirements concerning how RTC marketed and justified the disposition of real property. Specifically, RTC is required to market any undivided or controlling interest in real property assets on an individual basis (excluding assets transferred in purchase and assumption transactions and assets transferred to a new thrift organized by RTC under section 11(d)(2)(F) of the Federal Deposit Insurance Act) for at least 120 days before making these assets available for sale or other disposition on a portfolio basis or otherwise included in a multiasset sales initiative. Also, RTC is required to publish regulations that (1) implement these marketing requirements and (2) justify in writing the inclusion of real property assets in a portfolio or other multiasset sales initiative after the 120-day marketing period. 
On April 15, 1993, RTC’s Vice President for Asset Management and Sales issued a memorandum to RTC senior managers and SAMDA contractors stating that all real property assets must be marketed for at least 120 days before being offered in multiasset sales initiatives, such as portfolio sales. Auctions of single real property assets were exempt from this requirement. The memorandum further stated that real property assets remaining unsold after 120 days of active marketing may be included in multiasset sales initiatives only after meeting certain requirements. Specifically, RTC asset specialists were required to substantiate that including these real property assets in multiasset sales initiatives would result in a greater return to RTC than if the assets were sold individually. These justifications would be included in the specialist’s case memorandum requesting approval to dispose of assets on a portfolio basis. In November 1994, RTC published in the Federal Register a final rule adopting the policies and procedures for implementing the requirements of this reform. However, RTC field office officials believe that the reform’s requirements had minimal effect on their operations because (1) inventories of real property assets have decreased, (2) remaining real property assets generally did not meet the criteria established by the reform, and (3) they have been successfully selling real property assets individually through sealed bids and auctions and believe they are getting a good return. According to RTC officials, shortly after the RTC Completion Act became law, efforts were initiated to ensure implementation of the reform’s requirements. For example, training on the reform’s requirements was provided to RTC field office officials who had been delegated specific authority to approve multiasset sales initiatives.
Also, as part of its internal control reviews, RTC monitors the field offices’ management of remaining asset inventories and sales initiatives to ensure compliance with the reform’s requirements. Requirements of the Reform: This reform establishes various requirements for the disposition of real property and nonperforming real estate loan assets. Specifically, before selling such assets, RTC must assign the responsibility for the management and disposition of such assets to a qualified person or entity. This responsibility includes (1) analyzing each asset and considering alternative disposition strategies, (2) developing a written management and disposition plan for the asset, and (3) implementing this plan for a reasonable period of time. However, the asset may be included in a bulk transaction if RTC determines in writing that this method of asset disposition would maximize net recovery to RTC while providing opportunity for broad participation by qualified bidders, including MWOBs. Also, the reform exempted the following assets from these requirements: (1) assets transferred in purchase and assumption transactions; (2) assets transferred to a new institution organized by RTC under section 11(d)(2)(F) of the Federal Deposit Insurance Act; (3) nonperforming real estate loan assets with a book value of not more than $1 million; and (4) real property assets with a book value of not more than $400,000. In addition, nonperforming real estate loan assets and real property assets above these dollar values could be exempted from the reform’s requirements if RTC determines in writing that other disposition methods would bring RTC a greater return. In February 1994, RTC issued a memorandum that informed staff of the requirements to prepare the appropriate written documents to justify the sales of certain nonperforming real estate loans and other real property. 
In November 1994, RTC issued in the Federal Register a final rule that adopted the policies and procedures for implementing the reform’s requirements. RTC monitors the implementation of the reform’s requirements through various methods, including contractor oversight, the internal control review process, and program compliance reviews. Requirements of the Reform: The requirements of this reform are as follows: (1) subject to the least-cost test in section 13(c)(4) of the Federal Deposit Insurance Act, RTC is to give preference to offers from minority bidders for acquiring thrifts located in PMNs; (2) any minority bidder is to be eligible for capital assistance under the minority interim capital assistance program, provided that granting the assistance is consistent with the least-cost test; (3) in connection with the acquisition of a thrift in a PMN by a minority acquirer, RTC is permitted to transfer performing assets from other failed thrifts in addition to the performing assets of the thrift being acquired; and (4) in connection with the acquisition of a thrift in a PMN by a minority acquirer, the acquirer is to have first priority in RTC’s disposition of the performing assets. RTC has issued several policies and procedures to implement this reform. In July 1994, RTC published a final rule in the Federal Register that defines “predominantly minority neighborhood” as any U.S. Postal ZIP code area in which 50 percent or more of the residents are minorities according to the most recent Census data. However, RTC has the discretion to use other data that may indicate more accurate neighborhood boundaries. Also, RTC issued a directive that summarized its minority preference resolutions program in three parts. First, RTC will offer a failed minority-owned thrift to investors of the same ethnic group as the failed minority-owned thrift before offering it to others. 
Second, bidding preferences will be given to offers from minority-owned financial institutions to acquire any failed thrift whose home office is located in a PMN or that has 50 percent or more of its offices in PMNs, provided this preference results in the least cost to RTC. Moreover, if a minority bidder is within 10 percent of the highest bid made by a nonminority bidder, then a “best and final” round of bidding will take place between the best minority and nonminority bids. RTC also may provide to a winning minority bidder (1) interim capital assistance of up to two-thirds of the required regulatory capital, (2) the option to purchase performing loans (1-4 family mortgages), and (3) branch facilities located in a PMN and owned by RTC on a rent-free basis for 5 years. Third, RTC will reoffer a failed thrift or its branches to minority-owned financial institutions and make interim capital assistance available if no other acceptable bid not dependent on interim capital assistance is received. In addition, RTC made significant changes to its minority preference resolutions program. For example, RTC announced that expanded opportunities and incentives would be available for minorities to purchase failed financial institutions. RTC informed nonminority acquirers of offices located in PMNs of minority interest in acquiring these offices and encouraged them to sell such branches to minority acquirers, particularly in cases where the nonminority acquirer planned to close the office. Under this approach, RTC assistance will also be made available to minority acquirers as if the minority acquirer had originally purchased the office. Furthermore, RTC announced a pilot initiative for the sale of RTC’s 10 remaining thrifts in PMNs. Under the pilot initiative, RTC plans to permit the highest minority bidder to match the highest nonminority bid, provided that the minority bid is within 10 percent of the highest premium.
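The 10-percent preference band described above can be sketched as a simple check. This is one illustrative reading, assuming "within 10 percent of the highest bid" means the minority premium is at least 90 percent of the nonminority premium; the function and variable names are hypothetical, not RTC's.

```python
# Hedged sketch of the bidding preference (hypothetical names): if the best
# minority premium is within 10 percent of the best nonminority premium, a
# "best and final" round follows; under the pilot initiative, the minority
# bidder may instead match the nonminority bid.
PREFERENCE_BAND = 0.10

def qualifies_for_preference(best_minority_premium: float,
                             best_nonminority_premium: float) -> bool:
    """True if the minority bid falls within the 10-percent band."""
    return best_minority_premium >= best_nonminority_premium * (1 - PREFERENCE_BAND)
```

For example, a $9.2 million minority premium bid against a $10 million nonminority premium is inside the band (the threshold is $9.0 million), so a best-and-final round would be triggered; an $8.9 million bid would not qualify.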
As of December 31, 1994, RTC had resolved all but 1 of the 21 thrifts that had offices in PMNs. Collectively, the 21 thrifts had 58 PMN offices. Twelve minority bidders acquired 21 of the 58 offices (36 percent). As part of these resolutions, almost $20 million in capital assistance was provided to these acquirers. In addition, rent-free offices and the option to purchase assets at market price were also made available. According to RTC, for 4 thrifts, no minority bids were received, and for 5 thrifts, the minority bid was not within 10 percent of the majority bid. As part of RTC’s minority preference resolutions program, minority acquirers of thrifts in PMNs are provided opportunities to purchase performing 1-4 family mortgage loans. As of February 1, 1995, a total of about $207 million in loans had been sold through this program. In addition, two transactions were still pending at that time. Seven acquirers have purchased loans, one additional acquirer has a purchase that is pending, and two acquirers did not exercise their purchase options. As required by the RTC Completion Act, we are reviewing RTC’s valuation of loans offered through this program and will report on the results of our review later in 1995.
RTC has also included these requirements in the CPPM revision 7, dated May 16, 1994. In addition, on February 8, 1995, RTC published in the Federal Register its final rule entitled Minority- and Women-Owned Business and Law Firm Program that, among other things, defines procedures for ensuring that MWOBs and MWOLFs are not excluded from eligibility for task orders and other contracting activities. Although the issuance of these documents fulfills the requirements of the reform, RTC plans to monitor contracting activities to ensure that the procedures are fully implemented on any new contracts awarded. Requirements of the Reform: This reform requires RTC to (1) maintain procedures and uniform standards for entering into contracts with private contractors, and for overseeing contractors’ and subcontractors’ performance and their compliance with the terms of the contracts and applicable regulations, orders, policies, and guidelines, so that RTC’s operations are carried out in as efficient and economical a manner as practicable; (2) commit sufficient resources, including personnel, to contract oversight and the enforcement of all laws, regulations, orders, policies, and standards applicable to RTC contracts; and (3) maintain uniform procurement guidelines for basic goods and administrative services to prevent the acquisition of such goods and services at widely different prices. Before the RTC Completion Act became law, RTC had already issued the CPPM to provide uniform standards and procedures that RTC staff must follow in awarding all RTC contracts for other than legal services. Also, RTC had committed additional resources to contractor oversight. In May 1993, the RTC Executive Committee approved 214 additional positions for contracting issues. These positions were added to provide greater emphasis on contracting, contractor oversight, internal controls, and other related functions to implement Secretary Bentsen’s 9-point plan for RTC. 
Concerning uniform standards for the oversight of RTC contractors and subcontractors, chapter 10 of the CPPM provides detailed requirements for RTC contractor oversight. At the time the contract is awarded, RTC staff are required to complete a contract administration plan to ensure that they have a common understanding of both RTC’s and the contractor’s obligations under the contract. Also, a June 1993 reorganization of RTC’s contracting program placed additional emphasis on contract oversight issues. For subcontractor oversight, RTC has always required that its contractors, not RTC employees, monitor the work of subcontractors. According to RTC contracting officials, if subcontracting is a significant portion of a contract, plans for monitoring the subcontractors should be included in the contract administration plan. RTC officials told us that they believed the act did not require a revision to its subcontractor oversight policy. In February 1994, RTC’s Office of General Counsel developed a program for warranting Legal Division employees to execute contracts for legal services and take related actions on behalf of RTC. The goal of the program is to promote quality performance and effective contracting by establishing uniform procedures and minimum standards for certification, maintenance, and termination of warrants issued to “Legal Officers.” In the February 7, 1994, Federal Register, RTC notified the public that only legal officers who are issued a warrant can execute contracts for legal services on behalf of RTC. In April 1994, RTC issued procedures to implement our recommendation that SAMDA contractors be required to regularly report on steps taken to oversee their subcontractors. In our interim report, we observed that by ensuring the full implementation of these procedures, RTC could help reduce the vulnerability of its property management subcontractors to potential fraud, waste, and mismanagement. 
RTC has issued some additional procedures for the oversight of property management subcontractors and plans to continue reviewing its contractor oversight activities to identify areas for improvement. In addition, because many of its contracts are being completed, RTC has increased its focus on another aspect of contract administration—contract closing. After the terms of a contract have been accomplished, it needs to be closed out. To do so, contracting officers are required by RTC’s CPPM to determine, among other things, that (1) all deliverables, including reports, have been received by RTC and accepted; (2) final payment has been made to the contractor; (3) all collections of funds due to RTC have been completed; (4) all financial documents are in the file; (5) all RTC property has been returned and accounted for; and (6) all RTC files have been returned. According to RTC estimates, at least 12,000 prime contracts issued before December 31, 1992, with estimated fees of about $2.8 billion, still need to be closed. In April 1994, we discussed this matter with RTC officials who agreed that to help protect RTC’s interests, the contract close-out process should be done as soon as possible after contract completion. Subsequently, RTC stepped up its actions to ensure that contracts are closed. In June 1994, RTC revised its contracting information system to include additional information about contract closings. Further, the RTC Office of Contracts and OCOS established a joint program to identify whether certain contracts with fees in excess of $500,000 should be audited. During its last year of operation, RTC plans to continue its efforts to ensure that all contracts are properly closed. Further, to the extent that contracts remain open at RTC’s termination, RTC is working to help ensure that FDIC will be prepared to complete this important task. 
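The six close-out determinations required of contracting officers amount to a checklist. A minimal sketch follows; the status-flag names are hypothetical and not drawn from RTC's CPPM.

```python
# Hedged sketch of the CPPM's six close-out determinations as a checklist;
# the flag names are hypothetical.
CLOSEOUT_CHECKS = (
    "deliverables_received_and_accepted",   # (1) all deliverables, including reports
    "final_payment_made",                   # (2) final payment to the contractor
    "collections_due_rtc_completed",        # (3) all collections of funds due RTC
    "financial_documents_in_file",          # (4) all financial documents on file
    "rtc_property_returned",                # (5) all RTC property accounted for
    "rtc_files_returned",                   # (6) all RTC files returned
)

def outstanding_items(contract_status: dict) -> list:
    """Return the close-out items not yet satisfied; an empty list means
    the contract is ready to be closed."""
    return [item for item in CLOSEOUT_CHECKS
            if not contract_status.get(item, False)]
```

A contract whose status dictionary shows every flag as true has nothing outstanding and can be closed; any missing or false flag is reported back for follow-up.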
In addition, to prevent the acquisition of basic goods and administrative services at widely different prices, RTC issued an interim policy revision to its CPPM on October 7, 1994. The revision defines goods and administrative services as including—but not limited to—the purchase of furniture, fixtures, and equipment; publishing and printing; computer equipment and services; and day-to-day services, such as the procurement of supplies and the employment of security guards. The revision is applicable to all purchases of goods and administrative services with fees greater than $100,000. Under this revision, the contracting officer is to develop a written price history for procurements of similar services. If the proposed contract price is within 10 percent of the price history for similar services, the proposed contract price would satisfy the requirement of the CPPM. This change was formally incorporated into revision 8 to the CPPM, which was issued on February 15, 1995. Requirements of the Reform: This reform requires RTC to establish guidelines for achieving the goal of a reasonably even distribution of contracts awarded to various MWOB and MWOLF subgroups whose total number of certified contractors comprise not less than 5 percent of all MWOB and MWOLF certified contractors. These guidelines may reflect the regional and local geographic distributions of minority subgroups. The distribution of contracts should not be accomplished at the expense of any eligible MWOB or MWOLF in any subgroup that falls below the 5-percent threshold in any region or locality. As discussed in our interim report, RTC planned to issue written guidelines that were designed to establish procedures for ensuring that a reasonably even distribution of contracts and commensurate fees are awarded to each minority subgroup. In developing the guidelines, an analysis of the level of contracting activity to MWOBs and MWOLFs by subgroups for each field office was completed in February 1994. 
This analysis included the identification and assessment of the ethnic and gender representation among the MWOB and MWOLF contractors and the actual level of contract awards to each group on a region-by-region basis. Headquarters is to provide ongoing technical assistance to the field offices in their efforts to increase participation levels in any subgroup where the distribution of contracts falls below the 5-percent threshold within any region. Initially, RTC had planned to issue these guidelines by the end of July 1994. Although final written guidelines have not yet been issued to the field offices, in November 1994, RTC headquarters provided draft guidelines to these offices. The draft guidelines were intended to provide RTC field offices with information on how they should be working to achieve parity in their contracting activities. RTC’s objectives are to ensure that the number of contracts awarded and the amount of fees paid to minority subgroups equal the subgroups’ percentage of representation in RTC’s national certified database. RTC agrees that although draft guidelines for achieving contract parity have been provided to RTC field offices, the status of this reform should remain a work in progress until the guidelines have been finalized. According to an RTC official, the guidelines were not issued in July 1994 as initially planned mainly because work was still being done to issue the final rule on the Minority- and Women-Owned Business and Law Firm Program that would implement reforms 6, 16, and 18. Since the final rule was published on February 8, 1995, RTC is preparing the contract parity guidelines, which are scheduled to be issued by the end of March 1995. After the guidelines have been finalized and distributed to RTC field offices, RTC plans to monitor contracts awarded and fees paid to ensure that the guidelines are fully implemented.
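The parity objective described above (the share of contracts and fees awarded to each subgroup should match the subgroup's share of RTC's national certified contractor database) can be sketched as a simple comparison. The function name and sample figures are hypothetical, not RTC data.

```python
# Hedged sketch of the contract-parity comparison (hypothetical names and
# data): each subgroup's share of contract awards is compared with its share
# of RTC's national certified contractor database; a negative gap indicates
# the subgroup is under-represented in awards.

def parity_gaps(awards_by_subgroup: dict, certified_by_subgroup: dict) -> dict:
    """Map each subgroup to (award share - certified share)."""
    total_awards = sum(awards_by_subgroup.values())
    total_certified = sum(certified_by_subgroup.values())
    return {
        group: (awards_by_subgroup.get(group, 0) / total_awards
                - count / total_certified)
        for group, count in certified_by_subgroup.items()
    }
```

For example, a subgroup holding 40 percent of the certifications but receiving only 30 percent of the awards would show a gap of -0.10, flagging it for the technical assistance described above.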
Requirements of the Reform: This reform requires RTC to prescribe regulations that provide sanctions, including contract penalties and suspensions, for violations by contractors of requirements relating to subcontractors and joint ventures. RTC developed specific sanctions for violations of MWOB and MWOLF subcontracting and joint venture requirements that were incorporated in the final rule entitled Minority- and Women-Owned Business and Law Firm Program published in the Federal Register on February 8, 1995. These sanctions, which include contract termination, suspension, or exclusion from the RTC contracting program, have been incorporated in the CPPM. In addition, RTC officials told us that all standard contract agreements have been modified to include these sanctions. RTC plans to monitor contractor performance to ensure that the sanctions are imposed when appropriate. Requirements of the Reform: This reform includes the following requirements: (1) RTC is to establish reasonable goals for contractors to subcontract with MWOBs and MWOLFs, and (2) with certain exceptions, RTC may not contract for services, including legal services, under which the contractor would receive fees or other compensation equal to or greater than $500,000, unless RTC requires the contractor to subcontract with MWOBs and MWOLFs and pay fees or other compensation to the subcontractor in an amount commensurate with the amount of services it provided. This reform allows RTC to exclude a contract from these requirements if the CEO determines in writing that the subcontracting requirement would substantially increase the cost of contract performance or undermine the contractor’s ability to perform its obligations. The reform also permitted RTC to grant waivers of these requirements to contractors who certify that no eligible MWOBs are available to enter into subcontracts and provide an explanation for the basis of such a determination. 
Also, any granting of such a waiver shall be made in writing by RTC’s CEO. Finally, the reform required RTC to report to Congress a description of such exceptions and waivers granted during each quarter. On February 8, 1995, RTC published in the Federal Register its final rule entitled Minority- and Women-Owned Business and Law Firm Program, which established required MWOB and MWOLF subcontracting goals. Specifically, RTC required that for all contracts with fees of $500,000 or more, MWOB/MWOLF subcontracting be 10 percent for non-MWOB/MWOLF contractors and joint ventures with less than 50-percent MWOB/MWOLF participation, and 5 percent for MWOB/MWOLF firms or joint ventures with more than 50-percent MWOB/MWOLF participation. Although the required subcontracting goals have been established, RTC plans to monitor the awarded contracts to ensure that the goals are achieved. Requirements of the Reform: This reform requires that: (1) in awarding any contract subject to the competitive bidding process, RTC is to apply competitive bidding procedures that are no less stringent than those in effect on the date of the enactment of the RTC Completion Act and (2) nothing in this act, or any other provision of law, shall supersede RTC’s primary duty of minimizing costs to the taxpayer and maximizing the total return to the government. At the time of our interim report, RTC had taken preliminary action to implement the first of the two sections of this reform. After the act became law, RTC revised the CPPM to incorporate the reform’s competitive bidding procedures requirement as a policy. RTC officials said that revision 7 of RTC’s CPPM was carefully reviewed to ensure compliance with this reform. They also said that as contracting policies are updated, headquarters staff will ensure that RTC is in compliance with the requirement. 
In February 1995, RTC issued revision 8 to its CPPM, which included the second section of the reform requiring that no provision of the RTC Completion Act or any other provision of law would supersede RTC’s primary duty of minimizing costs to the taxpayer and maximizing the total return to the government. Also, RTC’s Director of Contracting Policy and Major Dispute Resolution stated that he has emphasized compliance with this requirement during 1994 training sessions for RTC contracting staff. The Director of RTC’s Office of Contracts is responsible for ensuring that all future contracting policies and procedures comply with the reform’s requirements. RTC plans to monitor the implementation of this reform through the Office of the Vice President for Contracts, Oversight and Evaluation. Requirements of the Reform: Under this reform, to improve the management of legal services, RTC is required to utilize staff counsel when such utilization would provide the same level of quality in legal services as the use of outside counsel at the same or a lower estimated cost. Also, RTC may only employ outside counsel (1) if the use of outside counsel would provide the most practicable, efficient, and cost-effective resolution to the action and (2) under a negotiated fee, contingent fee, or competitively bid fee agreement. RTC has taken the actions necessary for achieving this reform. It has developed a policy and procedures for the selection and engagement of outside counsel and issued guidelines for determining whether the engagement of outside counsel for particular matters is warranted under the requirements of the RTC Completion Act. However, as workload and staffing levels change, RTC plans to closely monitor the effects of its changes to policy and procedures to ensure that it continues to seek the most practicable, efficient, and cost-effective resolution to legal matters.
On July 8, 1994, RTC’s General Counsel issued a memorandum distributing the newly-developed Policy and Procedures for the Selection and Engagement of Outside Counsel. The General Counsel said in that memorandum that the new guidance was effective for all new engagements, modifications, and terminations after July 8, 1994. The policy statement states that the Division of Legal Services will use its in-house staff when it can to provide the same level of quality legal services that outside counsel would provide at the same or a lower estimated cost. Further, it adds that the Division will only employ outside counsel when such use provides the most practicable, efficient, and cost-effective alternative. The accompanying procedures require that engagements of outside counsel be based upon a determination that each of the elements of practicability, efficiency, and cost effectiveness will be met, and that the oversight attorney for each engagement document the reasons for the engagement of outside counsel. Some RTC officials expressed their belief that the current policies and procedures have resulted in a decrease in RTC’s use of outside counsel, with RTC’s in-house attorneys doing more of the legal work related to matters such as bankruptcies. The July 1994 policy statement further states that RTC’s Division of Legal Services will only employ outside counsel under a negotiated, contingent, or competitively bid fee arrangement. The new procedures permit four selection methods for engaging outside counsel and provide guidance on when each of the four methods should be used. The procedures also describe the contracting authorities and responsibilities of various levels of RTC Legal Division officials and of the Legal Services Committees that must approve legal contracting decisions in each RTC office. 
On August 26, 1994, RTC’s General Counsel issued Guidelines for the Handling of Matters Within RTC’s Legal Division and the Engagement of Outside Counsel, which are meant to be used in conjunction with the July 8, 1994, policy and procedures. These guidelines describe eight general factors, including availability of staff resources, to be considered in determining whether particular matters should be handled by attorneys within the Legal Division (in-house) or referred to outside counsel. In addition, the guidelines recognize that other factors may be relevant to determining whether the use of RTC attorneys or the engagement of outside counsel will provide the most practicable, efficient, and cost-effective resolution of a matter. The August 26, 1994, guidelines also contain a listing of several categories of matters that “should generally be handled in-house unless the caseload and staffing considerations in a particular office mitigate to the contrary.” The guidelines caution that because workload and staffing levels will vary in each RTC office, senior legal management in each office will have to reassess, from time to time, the practicality of handling, or continuing to handle, certain types of matters in-house. The guidelines also direct the senior legal management in each office to “seek to identify regularly additional categories of matters appropriate for in-house handling,” and they require that senior legal management monitor compliance with the guidelines with respect to documenting the reasons for hiring outside counsel. Also, RTC has established a legal services contracting officer warrant program. This program is discussed under reform 7. 
Requirements of the Reform: This reform requires the Oversight Board to establish and maintain an audit committee whose duties include (1) monitoring RTC’s internal controls; (2) monitoring the audit findings and recommendations of RTC’s IG and the Comptroller General of the United States, and RTC’s response to the findings and recommendations; (3) maintaining a close working relationship with RTC’s IG and the Comptroller General; (4) regularly reporting any of its findings and recommendations to RTC and the Oversight Board; and (5) monitoring RTC’s financial operations and reporting any incipient problem identified to RTC and the Oversight Board. The Oversight Board established the audit committee on September 20, 1994. Three members have been appointed to the committee. On November 10, 1994, the Oversight Board adopted a charter for the audit committee that defined its duties and responsibilities. The committee has the following duties: monitor RTC’s internal controls; monitor the audit findings and recommendations of RTC’s IG and GAO, as well as RTC’s responses to the findings and recommendations; maintain a close working relationship with the IG and the Comptroller General; regularly report findings and recommendations to RTC and the Oversight Board; monitor RTC’s financial operations and report any incipient problems identified to RTC and the Oversight Board; and meet at least quarterly. Since the establishment of its charter, the audit committee has held two meetings, one in November 1994 and one in January 1995. At the November meeting, the chairman identified three areas for priority attention by the committee: (1) ensuring that RTC and the IG continue to have an active audit program; (2) reviewing transition issues, such as asset valuation, staffing, and reserves; and (3) evaluating RTC procedures as they are changed during RTC’s final year of operation. FDIC and RTC should work together to plan for the future of the professional liability program.
This planning needs to address how FDIC will assume responsibility for the RTC professional liability cases.

Analyze and address current and future operational and staffing needs of the professional liability program.

Keep professional liability attorneys informed of agencies’ plans and decisions concerning the professional liability program to help decrease the level of uncertainty surrounding the program.

Schedule periodic management reviews of the loan portfolio sales process to ensure that National Sales Center and field office staff are setting reserve prices based on the characteristics of the loan portfolios offered for sale.

Schedule periodic management reviews to ensure that bid packages contain accurate and complete information about the loan portfolios being sold.

Schedule periodic management reviews to ensure that bidding results are being provided to all investors as quickly as possible after the closing of each individual transaction without placing the transaction in jeopardy.

Schedule periodic management reviews to ensure that investors’ post-closing problems are responded to promptly.

Schedule periodic management reviews to ensure that loan portfolio sales data are collected, summarized, and analyzed consistently and comprehensively.

Schedule periodic management reviews to ensure that the loan portfolio sales database provides the information necessary to evaluate RTC progress in achieving program goals.

Change RTC’s SAMDA performance reviews by completing them more than once a year and, during those reviews, include specific steps focused on the SAMDA contractors’ efforts to oversee their property management contractors, or require the SAMDAs to regularly report on steps taken to oversee their property management contractors.

Reemphasize the importance of supervision and assessment of staff performance and ensure that the internal control supervision standard is followed.

Require that sufficient staff are assigned to manage and administer contracts and ensure management continuity throughout the full term of contracts.

Direct the Corporation staff to monitor implementation and progress of the corrective actions related to the weaknesses we identified in general controls over some of the Corporation’s computerized information systems, posting securitization-related wire receipts, and reconciliations of receiverships’ asset balances to detailed asset records.

Require SAMDA contract oversight managers to work with the SAMDA contractors to help them prepare, summarize, and reconcile their asset activity records before the final OCOS reviews.

Periodically review the subrogated claims receivable balances to identify situations in which actual recoveries exceed the recorded receivable balances prior to receipt of the final dividend. In these situations, we suggest that the Corporation immediately record the interest income for the excess recoveries.

Monitor the logs prepared by the field offices to ensure that they are submitted to the Corporate Accounting Unit in a timely manner and contain all the information needed for the reconciliation process for account 060109, Non-cash Recoveries on Subrogated Claims.

Temporarily reopen the general ledgers for the terminated receiverships and correct misclassifications.

Establish procedures to require that all general ledger adjustments identified during the monthly reconciliation process be forwarded to the Financial Reporting Unit to ensure that all adjustments are considered in preparing the financial statements.

Perform employment screening before hiring individuals and routinely do so for current employees, using reliable databases of individuals found responsible for institution failures.

Develop reliable databases that will effectively identify individuals found culpable in institution failures.
Share information systematically, enabling each (RTC and FDIC) to be aware of those individuals the other has found culpable in the failure of federally insured institutions.
Ensure that personnel guidance is clear and appropriate regarding employees and prospective employees for whom the Corporation has made culpability determinations.
Ensure that conservatorship employees who occupy positions with responsibilities for asset disposition—such as those performing loan workout functions—be included in the employment screening process.
Ensure that adequate management controls are maintained over SAMDA contracts, particularly in view of the widespread asset and subcontractor locations that exist now.
Use the results of these analyses as one of many factors to better manage assets and direct disposition efforts in order to increase net recoveries.
Establish specific time frames for each multifamily property to comply with occupancy requirements, although an exemption should be provided when the failure to comply is caused by the law that prohibits displacing existing tenants.
Ensure that complete information on the status of occupancy requirements is maintained.
Determine if stiffer penalties are warranted to encourage property owners to comply with occupancy requirements.
Ensure that all land use restriction agreements are accounted for, executed, and recorded.
The RTC/FDIC Transition Task Force should consider the issues identified in this report, especially the weaknesses in RTC’s compliance monitoring program for multifamily properties.
Ensure that all loan servicing contracts require loan servicers to submit monthly loan status updates of data needed for marketing purposes to the CLD contractor.
Ensure that information provided to investors on loan data diskettes or in imaged loan files is valid, complete, well documented, and in a format that meets investors’ needs.
Janet M. Chapman, Karl G. Neybert, Marshall S. Picow, Richard S. Schupbach
Pursuant to a legislative requirement, GAO reviewed the Resolution Trust Corporation's (RTC) efforts to implement management reforms required under the RTC Completion Act, focusing on: (1) how RTC and the Thrift Depositor Protection Oversight Board are implementing the reforms; and (2) RTC progress toward achieving full compliance with the requirements. GAO found that: (1) RTC has completed actions on three reforms, and has made progress towards implementing the remaining 17 reforms; (2) RTC has taken action on six reforms involving RTC general management functions; (3) RTC should monitor these reforms to ensure that appropriate future actions are taken when necessary; (4) in 1993, RTC created the Division of Minority and Women's Programs, appointed a Chief Financial Officer, and created a client responsiveness unit in RTC field offices; (5) monitoring of RTC resolution and disposition activities is needed to ensure full compliance; (6) the Oversight Board has established an audit committee to monitor RTC audit activities; and (7) planned actions to enhance RTC information systems and develop draft guidelines to improve specific RTC contracting procedures have not been completed.
Middle East Broadcasting Networks, Inc. (MBN), which includes the services Radio Sawa and Alhurra, is an independent nonprofit grantee overseen by the Broadcasting Board of Governors (BBG). The BBG is an independent federal agency responsible for overseeing all U.S. government-sponsored, nonmilitary, international broadcasting programs. The BBG also manages the operations of the International Broadcasting Bureau, the Voice of America (VOA), and the Office of Cuba Broadcasting, which are all federal entities. It also provides funding and oversight to three independent grantees: MBN, Radio Free Europe/Radio Liberty (RFE/RL), and Radio Free Asia (RFA) (see figure 1). In March 2002, due to concerns about the effectiveness of its outreach to Arabic speakers in the Middle East, the BBG replaced VOA’s Arabic radio service with Radio Sawa, a 24-hour, 7-day-a-week, Arabic-language radio station. In April 2003, Congress, at the request of the administration, provided $26 million in the fiscal year 2003 Emergency Wartime Supplemental Appropriations Act to establish a nonprofit corporation, the Middle East Television Network (MTN), as a grantee of the BBG to launch and operate Alhurra TV. Ten months later, on February 14, 2004, Alhurra, MTN’s Arabic-language satellite television station, was launched and initially started broadcasting 14 hours a day, expanding to 24 hours a day 2 months later. In November 2003, the fiscal year 2004 Emergency Supplemental Act for Defense and for Reconstruction of Iraq and Afghanistan included $40 million to establish a second 24-hour channel, Alhurra-Iraq. On April 27, 2004, Alhurra-Iraq was launched. In 2005, MTN was renamed the Middle East Broadcasting Networks, Inc. (MBN), and Radio Sawa was transferred to MBN. See figure 2 for a timeline of these events and figure 3 for details on the current organization of MBN. 
Congress appropriated more than $274 million to fund Radio Sawa, Alhurra, and Alhurra-Iraq from fiscal year 2002 through fiscal year 2006. Each year MBN’s funding level has increased to support additional 24-hour television streams and Radio Sawa’s 24-hour radio programming. MBN’s grant in fiscal year 2006 is $78.7 million to support Alhurra, Alhurra-Iraq, and Radio Sawa, as well as the launch of Alhurra-Europe. See figure 4 for a breakdown of funds by fiscal year. The Alhurra networks and Radio Sawa are BBG’s priority broadcasting services designed to support the BBG’s antiterror broadcasting initiatives in the Middle East and counter media campaigns used by terrorists by providing accurate reporting and analysis of the news and by explaining U.S. policies. Although MBN and its Alhurra broadcasting services postdate the BBG’s current 2002-2007 strategic plan, Radio Sawa, in particular, was singled out as an opportunity in the plan to target Arabic-speaking youth and provide them with news that is objective, comprehensive, fresh, and relevant and to provide a forum for reasoned discussion of “hot button” issues and U.S. policies. MBN’s current mission statement is to broadcast factual, timely, and relevant news and information about the Middle East, the United States, and the world to people of all ages in order to advance the long-term U.S. interests of promoting freedom and democracy and enhancing understanding in the Middle East. Radio Sawa and Alhurra aim to be among the sources that audiences turn to in the Middle East for news and information, to increase the standards of other broadcasters in the region, and to offer distinctive and provocative programming unavailable on other stations. MBN’s target audience includes 19 Arabic-speaking countries and territories in North Africa, the Near East, and the Gulf region, which are home to approximately 250 million people. 
In addition to its headquarters in Springfield, Virginia, MBN has several overseas offices, including a production center in Dubai that broadcasts Radio Sawa’s live newscasts for 8 hours each day and produces some opinion features for Radio Sawa and current affairs programming for Alhurra (see figure 5). Radio Sawa’s broadcasts are designed to reach a target audience of 15- to 29-year-olds in the Middle East with Western and Arab popular music, news broadcasts, and specialized programming. Radio Sawa broadcasts 24 hours of programming every day through a combination of FM, medium wave (AM), digital audio satellite, and Internet transmission resources. See figure 6 for a map of Radio Sawa’s regional reach. Radio Sawa has developed seven distinct programming streams, including (1) Iraq, (2) Jordan and the West Bank, (3) the Gulf, (4) Egypt and the Levant, (5) Morocco, (6) Sudan and Yemen, and (7) Lebanon. All of the streams generally feature the same major newscasts, current affairs, and policy features; however, the Iraq program differs slightly, and the streams all offer differentiated music programs. Radio Sawa’s streams broadcast between 31 and 35 hours of news each week. (See appendix II for more on Radio Sawa’s programming.) The Alhurra satellite television station is designed to reach a broad audience in the Middle East by providing news, current affairs, and entertainment programming 24 hours a day, 7 days a week. Alhurra-Iraq is designed to provide Iraqi citizens with daily newscasts and talk shows that specifically address issues in Iraq. Both Alhurra networks broadcast between 36 and 43 hours of news and news updates a week. Alhurra broadcasts on the Arabsat and Nilesat satellites, which currently allow it to cover the entire Middle East. Alhurra-Iraq also broadcasts through these satellites and a combination of terrestrial transmitters made available in Iraq. Alhurra also expects to start broadcasting to Europe on August 1, 2006. 
(See appendix II for more information on Alhurra and Alhurra-Iraq programming.) MBN faces a variety of challenges to broadcasting in the Middle East, including operating in a competitive satellite television broadcast market, operational and programming competitive disparities, and a lack of coverage for Radio Sawa in certain FM markets. MBN has conducted some planning efforts and, by using market research and internal assessments of its competitors, has undertaken or proposed some initiatives to address many of these challenges, such as increasing its hours of news coverage and current affairs programming for Alhurra and increasing the amount of local content Radio Sawa broadcasts. However, MBN has not developed a long-term strategic plan that fully addresses its operational and competitive challenges. MBN faces several significant competitive challenges. These include the competitive Middle East satellite television market; operational and programming competitive disparities, such as Alhurra’s lack of news bureaus compared with its competitors; and lack of coverage in certain FM radio markets. MBN operates in the competitive Middle Eastern satellite television market, which has over 140 channels. Pan-Arab satellite television stations, in particular the news stations Al Jazeera and Al Arabiya, are currently the primary competitors to Alhurra. According to the BBG’s research firm Intermedia, Al Jazeera is currently the top international broadcaster as a source of news and information for audiences in many countries throughout the Middle East. Moreover, Alhurra will face new competition from the BBC’s entry into the Middle Eastern satellite television market in 2007. BBC officials have indicated that the new station’s overall approach in the region will be multimedia in focus, taking advantage of the BBC’s more than 60 years’ experience of broadcasting on the radio to the region as well as its award-winning Arabic-language news Web site. 
For Radio Sawa, the primary competitive challenge comes from existing local radio stations in its broadcast range and the BBC World Service in Arabic, as well as from the generally increasing competitiveness of the Middle East radio market. Alhurra also faces operational and programmatic competitive disparities, since both Al Jazeera and Al Arabiya are estimated to receive significant, although unknown, levels of funding from their respective supporting Qatari and Saudi financiers--allowing them to develop large networks of correspondents and bureaus throughout the Middle East and other parts of the globe. Al Jazeera, in particular, has bureaus in over 30 locations across 6 continents, which enable it to respond to breaking news events on a timely basis. Alhurra and Radio Sawa, by comparison, only have overseas bureaus in Baghdad, Dubai, and Amman. In addition, the BBC has a large network of correspondents and bureaus around the globe and, unlike Alhurra, has a vast in-house library of desirable BBC-produced content, including documentaries and current affairs programming, which can be readily translated into Arabic. The BBC also has favorable licensing and co-production arrangements with many companies. One of Radio Sawa’s other primary challenges is its lack of broadcast coverage in certain countries in the Middle East region. For example, Radio Sawa does not have any broadcasting coverage in Tunisia, Libya, and Algeria. Moreover, it has faced difficulties expanding its transmission to include FM coverage in some countries, such as Egypt, Saudi Arabia, Syria, Yemen, and Oman. MBN is attempting to negotiate transmission agreements with several of these countries, but still faces significant challenges to finalizing agreements. Since its inception, MBN has conducted some planning exercises to address its competitive challenges. 
These have included developing a “2006 Goals and Strategies” overview document to guide operations for the current fiscal year, establishing a 2006 annual performance plan as part of the Office of Management and Budget’s Program Assessment Rating Tool process, and participating in the development of the BBG’s new long-term strategic plan covering fiscal years 2008-2012. In addition, MBN conducts ongoing assessments of its competitors and uses various types of market research to gain information about its audience and media usage patterns in the Middle East. It has used this information to make adjustments to its programming within its current budget, and also to develop proposals for obtaining additional funding for new efforts. BBG and MBN officials have explained that they use audience surveys, audience monitoring panels, focus groups, in-depth interviews, and Arab television and music station monitoring to inform MBN’s current efforts and planning, whenever possible. As a result of market research performed in August 2004, MBN officials identified television viewing patterns and made changes to Alhurra program schedules, such as by offering programming appealing to women (e.g., current affairs and health and fitness programming) during the daytime. Through a review of the current competition in the market, MBN officials decided that it was important to increase the number of debate programs they broadcast on Alhurra. As a result, MBN created a series of “town hall meetings” that allowed journalists and experts to discuss issues of regional interest with interaction from a live audience. As a result of audience monitoring panels, MBN officials made changes to Radio Sawa’s program schedule by adding new features on subjects such as social and cultural issues. MBN officials also learned of the importance of efforts to localize those features, and made changes to tailor programs to the interests of audiences of the various Radio Sawa streams. 
MBN has also developed program enhancement proposals for Alhurra and Radio Sawa as part of the BBG’s language review and budget request processes. Several proposals were included in the President’s fiscal year 2007 budget request. For example, the budget request includes a proposal to increase Alhurra’s newsroom hours to increase on-the-spot and breaking news coverage. In addition, in fiscal year 2005, the administration requested additional funds for providing satellite Alhurra broadcasts to Europe. According to planning documents, many of the Alhurra proposals were designed to reinforce one another with the goal of improving Alhurra’s credibility, as well as building audience size and increasing viewing time of those who already tune in. MBN also developed one proposal for enhancing Radio Sawa’s operations that calls for increasing the amount of localized news content offered on five regional streams, which officials say would allow the station to more effectively compete with local stations in its broadcast range. MBN’s president said that, given the increasing level of competitiveness in radio broadcasting in the Middle East and expressed audience interest in news about their home country, creating more localized content on Radio Sawa streams is important. Strategic planning is a good management practice for all organizations. Although MBN has conducted some planning exercises, it lacks a long-term strategic plan and a strategic approach that outlines (1) a shared vision of operations for Alhurra and Radio Sawa, (2) detailed implementation strategies to achieve measurable outcomes for its goals, and (3) the competitive challenges it faces and how it plans to address its key challenges to broadcasting in the Middle East. Strategic planning, including the development of a strategic plan, is a good management practice for all organizations. Additionally, risk assessment is an integral part of strategic planning. 
According to GAO guidance, organizations should make management decisions in the context of a strategic plan, with clearly articulated goals and objectives that identify resource issues and internal and external threats, or challenges, that could impede the organization from efficiently and effectively accomplishing its objectives. Additionally, Office of Management and Budget (OMB) guidance suggests that strategic plans contain, among other things, a statement of the organization’s long-term goals and objectives; define approaches or strategies to achieve goals and objectives; and identify the various resources needed and the key factors, risks, or challenges that could significantly affect the achievement of the strategic goals. MBN has yet to create its own long-term strategic plan. MBN’s president stated that funding uncertainties and other more pressing organizational needs--such as the development of financial and administrative policies and procedures--have delayed the development of MBN’s strategic plan and related planning policies. In addition, he commented that MBN did not emphasize planning in its early stages because it was focusing on making its networks broadcast-ready. BBG officials said another reason for the delays in planning is that the BBG and MBN are still learning about the market, especially for Alhurra, and are taking a close look at the results of audience surveys, focus groups, and in-depth interviews to determine the best direction for these initiatives. MBN has stated that, to date, it has primarily used the BBG strategic plan for organizational guidance. Nevertheless, BBG officials said the BBG also has the expectation that all broadcasting entities will develop their own strategic plans, particularly to guide funding decisions. 
In the absence of a strategic plan of its own, MBN lacks a comprehensive, strategic approach that fully outlines (1) a shared vision of operations for Alhurra and Radio Sawa, (2) detailed implementation strategies to achieve measurable outcomes related to its goals, and (3) the competitive challenges it faces and how it plans to address them. First, MBN does not have a comprehensive strategic vision for the integration of Radio Sawa and Alhurra operations in the organization. For example, although MBN’s most recent annual performance plan contains a goal to “integrate news operations for more effective television and radio news-gathering,” none of MBN’s current plans outline specific, shared objectives for Radio Sawa and Alhurra. MBN officials told us that several steps toward integration of Alhurra and Sawa have occurred to date, such as sharing financial and administrative support staff. However, Radio Sawa and overseas bureau staff we talked with said that cooperation between Alhurra and Radio Sawa is limited, the identities of the stations are separate, and the two stations work largely independently of one another. Radio Sawa staff noted several areas for further increasing cooperation, including more sharing of interviews, sound bites, field correspondents, Web site stories, and copy editors. One staff member in the Baghdad bureau said that Radio Sawa and Alhurra operations in Iraq are completely independent, including separate offices, and only the financial activities of both offices are supervised by the same person. In addition to gaining more efficiency in operations, a vision for further integration of Radio Sawa and Alhurra may help MBN more effectively identify opportunities to address its challenges from increasing competition in the Middle East. Second, MBN has not yet developed detailed implementation and resource strategies needed for successful implementation of its goals. 
For example, with regard to MBN’s initiative to localize content on Radio Sawa, we were not able to identify a plan directing what types of local news and features will be considered on the various streams, to what degree existing program schedules might be affected, and how required resources might be divided among the various streams. Additionally, although MBN’s most recent annual performance plan states a goal of “expanding overseas production of news coverage for radio and television,” neither that document nor any other plan MBN identified provides details or direction for the overseas production of news for Alhurra in existing overseas offices. MBN officials have stated that uncertainties in the future commitment of resources to Alhurra have affected MBN’s ability to, for example, plan for and use existing overseas offices for Alhurra news. Further, BBG officials have said MBN is still learning about the Middle Eastern media market. However, given MBN’s internal enhancement requests to the BBG to increase the number of news bureaus in the region, among others, it should clarify, for example, what implementation steps are necessary to maximize the use of existing overseas offices. Third, MBN has not yet comprehensively outlined its challenges or developed a strategic approach for how it plans to address its key challenges to broadcasting in the Middle East. While MBN is planning to expand its broadcast operations into Europe, it has not clearly identified how its broadcasts will meet competitive challenges in the Middle East. As an example, MBN has not indicated how it will address the implications of the upcoming BBC Arabic-language television initiative. The BBC could gain a significant audience that potentially would interfere with Alhurra’s market share, credibility, and use as a source of alternative information. 
MBN was initially limited in developing its internal control structure because it was focused on quickly starting up its broadcasting operations. In response to an external review of its financial operations by Grant Thornton LLP in May 2004, MBN strengthened several of its controls, after which it received an unqualified opinion on its Fiscal Year 2005 Single Audit. However, MBN has not fully implemented several of the Grant Thornton review’s key recommendations related to its control environment, including (1) establishing an internal control board to formally develop its controls and coordinate audits, (2) preparing an internal control plan, (3) conducting a risk assessment to address potential risks to its operation, and (4) developing a training program for its staff. Internal control refers to the policies and procedures that help ensure the proper management and application of an organization’s assets. Clear, strong controls can provide some assurance that management problems are unlikely to occur or will be addressed if they do occur. MBN’s internal control is governed by several OMB circulars cited in its grant agreement. The Comptroller General’s Standards for Internal Control in the Federal Government also provides guidance that is available to MBN. MBN faced some initial challenges in establishing its internal control structure. According to MBN documents and officials, MBN management initially focused on establishing broadcasting operations rather than the development of internal control policies and procedures, because the organization had only several months to plan the launch of its 24-hour a day Alhurra television network. As a result, MBN’s internal control lagged behind. The MBN chief financial officer (CFO) told us that problems encountered by MBN in hiring and retaining staff added to the delay in developing internal controls. 
Due to concern over the slow development of MBN’s internal control structure, the BBG commissioned a review by Grant Thornton LLP accountants and management consultants, which was completed in spring 2004, to assess MBN’s internal control and make recommendations for improvement. The report by Grant Thornton LLP cited numerous findings, such as inadequate financial policies and procedures, understaffing, and inadequate training, that impeded MBN’s ability to successfully mitigate risks. The report, however, also noted that MBN’s controls were improving. Grant Thornton’s May 2004 report on MBN’s system of internal control made recommendations that covered staffing, MBN’s financial system, training, administrative policies and procedures, developing a decision support structure, improving logs and records, and MBN’s control environment. MBN accepted the review’s recommendations and agreed to implement them, according to MBN officials, and the BBG concurred with these recommendations. Our analysis shows that MBN subsequently responded to many of the recommendations, including hiring additional financial department staff, developing financial and administrative policies and procedures, and completing an annual single audit (see table 1). For example, although MBN provided us with copies of its Fiscal Year 2003 and Fiscal Year 2004 Single Audits well past the deadlines for completing those audits, MBN’s Fiscal Year 2005 Single Audit was completed on time, provided an unqualified opinion, and showed marked improvement over previous years. Some control elements could be improved in order to better implement best management practices based on OMB circulars and GAO internal control standards. For example, although MBN is establishing an internal control board, the board has not met to establish protocols and outline its responsibilities. The organization also has not developed an internal control plan. 
Furthermore, MBN has not established a comprehensive process to analyze risks the organization faces from internal and external sources. Finally, MBN has provided some training on internal control but has not yet developed a regular structured training program for its staff. The Grant Thornton review recommended that MBN establish an internal control board of key managers and officers to determine the internal control risks facing MBN, work towards decreasing these risks, and oversee MBN’s efforts to employ strong controls. Moreover, according to GAO guidance, organizations should have an audit committee or senior management council–similar to an internal control board–that reviews the internal audit work and coordinates closely with external auditors. According to the MBN president, MBN is establishing an internal control board consisting of three members, including, as recommended, the MBN president, general counsel, and CFO. However, the board has not yet formally met to establish protocols and outline responsibilities. In addition, according to MBN’s general counsel, MBN has not appointed a member from the BBG, as was recommended, to serve on the board. MBN’s executive committee, which examines issues affecting MBN and reports back to the BBG, has provided some support on management and administrative issues, such as approving the construction and expansion of MBN’s new facility and providing guidance on hiring high-level staff. However, the committee has not fulfilled the role of an internal control board as previously described. Instead, MBN’s CFO has largely taken on the sole responsibility of establishing and overseeing MBN’s controls, reviewing audits, and coordinating with external auditors. The MBN president told us that they have not convened an internal control board because the organization is too new and, therefore, is focused on developing policies and procedures rather than mechanisms to review them. 
MBN is planning to develop an internal audit function, implemented by an external firm, to provide assurance to MBN management that the organization is operating appropriately. According to the Grant Thornton review, MBN should develop an internal control plan to ensure that effective controls are established and monitored regularly. Such a plan should identify the roles and responsibilities of all individuals whose work affects internal control, lay out specific control areas, cover risk assessment and mitigation planning, and include monitoring and remediation procedures. MBN officials told us in January 2006 that they were in the process of developing such a plan and have developed internal control guidelines, but as of the end of May 2006, they had not provided us with a finalized plan. The Grant Thornton review called on MBN to conduct a broad risk assessment, led by its internal control board, to evaluate and mitigate potential obstacles to efficiently and effectively achieving its operational objectives. According to Grant Thornton LLP, failure to conduct an MBN-wide risk assessment could result in the loss of resources and could decrease confidence of the grantor and of Congress, which could ultimately compromise MBN’s achievement of its mission. According to GAO guidance, risks should be identified during both short- and long-term forecasting and as part of strategic planning. Moreover, after conducting a risk assessment, organizations need to develop internal control activities to manage or mitigate the risks that have been identified. For example, Radio Free Asia identified avian influenza and signal jamming from China as two threats to its operations. As part of its risk assessment, it considered how to address and overcome these issues, such as by broadcasting from alternate locations. In February 2005, MBN prepared an initial risk assessment that identified a list of actions taken to address the issues raised in the Grant Thornton LLP report. 
However, the document did not identify the organization’s objectives or the risks it faces, nor did it analyze the possible effects of the risks or propose a strategy to mitigate them, as recommended by GAO guidance. MBN officials told us that they assess risk on an ongoing, biweekly basis. However, in taking a short-term approach to analyzing risk, MBN lacks a comprehensive basis from which to establish a strong internal control structure. Some risks identified by MBN include threats to the security of its staff and bureaus in the field, particularly in Iraq, and the risk of a terrorist attack on its facilities in the United States. There are some risks that MBN has not identified or addressed. For example, PricewaterhouseCoopers, MBN’s external auditor, noted that MBN has not adequately addressed its risks related to information security. Doing so would reduce the risk of security incidents and unauthorized system activity, according to the auditor. PricewaterhouseCoopers also found that MBN’s lack of a business continuity plan or an adequate disaster recovery plan could result in slower recovery in the case of such an event, as well as significant loss of revenue, inability to meet customer needs and third-party obligations, and potential noncompliance with legal requirements. The internal audit function planned by MBN may at some point take on the function of assessing risk, but this body is not yet operational. MBN has provided some training on internal controls but has not yet developed a regular structured training program for its staff of about 240, as recommended by the Grant Thornton LLP review and by GAO leading practices. MBN’s CFO and controller attended a seminar on grants management in October 2004 and subsequently shared the information with the 16 in-house financial staff. 
MBN also provided internal control compliance training to its managers in December 2004, and the organization regularly provides training to staff at its business manager meetings. However, other U.S. broadcasting entities, such as Radio Free Europe/Radio Liberty (RFE/RL) and Radio Free Asia (RFA), have more organized, ongoing training programs on internal control. The MBN CFO concurred that there is a great need for internal training. According to him, the underdeveloped training situation is due to a lack of resources, including a lack of funds specifically designated for training and limited time to plan or implement training. This lack of structured, recurrent internal control training can cause problems if staff are unfamiliar with an organization’s business processes and controls, and can lead to the inefficient or improper use of resources. MBN has established journalistic standards, as well as procedures to help ensure that the organization’s broadcasts comply with these standards. However, it has not fully developed some quality control measures, such as the use of listener and viewer feedback. Additionally, the BBG has not held regular comprehensive program reviews for MBN, thereby making it difficult for MBN to assure its audience, Congress, the administration, and the BBG that its controls are working and that it is broadcasting quality programming. The International Broadcasting Act of 1994 calls for U.S. international broadcasting to be conducted in accordance with the highest professional standards of broadcast journalism, including the production of news that is consistently reliable, authoritative, accurate, and objective. The act also calls for U.S. international broadcasting to present a balanced and comprehensive projection of U.S. thoughts and institutions, as well as clear and effective presentation of U.S. government policies and responsible discussion of those policies. 
MBN’s mission statement, which partly draws upon the principles and standards contained in the U.S. International Broadcasting Act of 1994, calls for MBN to broadcast factual, timely, and relevant news and information that promotes freedom and democracy. MBN has developed journalistic standards, including a code of ethics, as part of its effort to ensure that its news broadcasts are consistently accurate, authoritative, objective, balanced, and comprehensive. MBN’s general counsel said that the code was also established to ensure that MBN fully complies with its mission and the U.S. International Broadcasting Act of 1994. MBN’s journalistic standards were drafted by MBN management using input from professional journalistic organizations and another grantee. According to BBG officials, MBN’s standards appear to be as good as those of other U.S. international broadcasters. In our analysis, we found that the standards cover areas similar to the codes of other broadcasters--such as RFE/RL, RFA, Voice of America (VOA), and National Public Radio--focusing on accuracy, impartiality, establishing context, clearly distinguishing analysis from reporting, using a tone of moderation and respect, avoiding advocacy, and promoting ethical conduct. The standards also include guidelines for conducting interviews, as well as editing and production requirements. For example, according to MBN officials, MBN strives to present opposing views accurately and achieve a balance among the guests on its current affairs shows; when broadcasting about the war in Iraq, MBN tries to ensure that programs incorporate both pro-war and antiwar views. To help ensure that staff comply with MBN’s journalistic standards, Radio Sawa and Alhurra have established a number of pre- and postbroadcast procedures, which are roughly similar to those of other U.S. international broadcasting entities.
Examples of MBN editorial procedures include daily editorial meetings, the use of two or more sources to support a news item, checks by editors and producers to determine whether news stories are properly written and accurate, a headquarters-level review of all materials produced in MBN’s overseas offices before broadcast, and postbroadcast discussions. In addition, we observed that MBN employs an experienced journalist from the Arab world to review all of Alhurra’s and Alhurra-Iraq’s weekday newscasts for technical and stylistic errors, a control not implemented by other U.S. broadcasting entities. The journalist watches the newscasts just before they are aired, then provides MBN management with an evaluation of the newscast’s quality. MBN provided us with records of these evaluations, which assess whether the newscasts are presentable, balanced, and free of technical errors. In some cases, technical mistakes can be caught before the piece is aired. However, while MBN management follows up on critiques of individual journalists, they do not systematically review and assess the journalist’s evaluations. MBN officials told us that the organization places a high value on journalistic controls, particularly due to the volatility of certain areas in the Middle East and the impact news reports can have. In addition, MBN’s controls can serve as an assurance to its audiences and others that they are broadcasting quality programming. Since Radio Sawa and Alhurra were established, MBN has not had to retract a single story or apologize for any error, according to MBN officials. There are several areas in which MBN could more fully develop some quality control measures for its programming. These areas include using listener and viewer feedback to improve program quality, making better use of weekly compilations, and ensuring its style guide is distributed to all staff. Unlike other U.S. 
international broadcasters, MBN typically only partially utilizes the following measures:

- Although MBN collects feedback from its listeners and viewers, it does not rely extensively on this feedback as a program quality control.
- Although MBN produces weekly compilations that are distributed to interested parties summarizing what was broadcast on Radio Sawa and Alhurra that week, it does not use these compilations for any formal, long-term analysis of errors and programming.
- MBN has developed a style guide to provide critical guidance on the use of sensitive terms and to help staff avoid grammatical mistakes. However, Radio Sawa staff did not receive the guide until early 2006, and MBN has not distributed it to all of its overseas offices, inhibiting its use by some staff.

In addition, MBN does not provide regular training to help its journalists, producers, and other editorial staff maintain and increase their professional competence. MBN’s level of training for its editorial staff is also not on a par with the training offered by other grantees. For example, RFE/RL and RFA both have extensive training programs for their employees at their headquarters and in their bureaus in different countries, according to RFE/RL and RFA officials. While MBN provides some initial technical training to its journalists, an informal mentoring program, and sporadic training overseas, according to MBN officials, the organization does not have an ongoing training program to educate journalists throughout the organization about editorial and ethical issues they might encounter on the job. According to the MBN CFO, the organization does not have a well-established training program because the network is relatively new and lacks training resources. MBN officials have also noted that their most important controls are the editors themselves, and that MBN tries to hire experienced staff.
However, regardless of experience, staff can and do make mistakes, and MBN’s lack of regular training increases the risk that correspondents and editors will make mistakes. The BBG’s main mechanism to determine whether its broadcasting services comply with its mission and journalistic standards is a regular program review, which is designed to improve programming and ensure quality control. However, only one review has been conducted for Radio Sawa, and none have been conducted for Alhurra. The Radio Sawa review was more limited in scope than program reviews conducted by other entities. In addition, Radio Sawa’s program quality score is inconsistent with other BBG entities, and without a program review to develop a program quality score for Alhurra, the BBG will not be able to measure the contribution of these efforts to the goals of the organization, or be able to ensure that the quality of Alhurra’s broadcasts conforms to applicable standards. Finally, many MBN staff were not extensively involved in the Radio Sawa review. According to written guidance from the International Broadcasting Bureau, which coordinates and supports all VOA program reviews, program review is an annual process by which an institution judges itself and solicits the judgment of others to make improvements and fulfill its mission with regard to U.S. national interests. The process enables the broadcasting entity to better connect its mission to the market where it is broadcasting and assess whether its editorial procedures are functioning effectively, while also allowing the BBG to fulfill its requirements from OMB that it conduct regular program evaluations to capture a program’s impact over time. In addition, the Senate Committee on Foreign Relations Report on the Fiscal Year 2003 Foreign Relations Authorization Act called for significant resources to be dedicated to postbroadcast analysis of Radio Sawa programming to ensure that broadcasts are consistent with U.S. 
interests and values and with the standards in the U.S. International Broadcasting Act. Program review typically includes a study of the target area’s media environment, analyses by internal and external reviewers, background quantitative research, reports on marketing and transmission, and target area profiles compiled by the broadcaster itself. Reviewers rate programs based on criteria for content--such as accuracy, timeliness, objectivity, relevance, and quality of analysis and interviews--and for presentation, such as pace and liveliness, presentation style, sound quality, and host interaction. These inputs are then discussed at a meeting that includes the program review coordinators and the management and staff of the entity being reviewed. Following the main program review meeting, key participants develop an action plan, and 3 months later the group meets again to determine to what extent the action plan has been carried out. Although the BBG calls for program reviews to be conducted annually, MBN has not complied with this guidance. Neither the BBG nor MBN has a regular mechanism in place to systematically review MBN programs. In December 2004, the BBG convened a program review meeting for Radio Sawa, which had begun operating in March 2002. However, there has been no Radio Sawa review since then, while Alhurra, which has been operating for more than 2 years, has had no program review at all. The BBG is planning to initiate a program review of Alhurra by the end of this year, but has not set a firm date. BBG officials have stated that program quality should be sampled and assessed by both internal and external evaluators. This is designed to produce a balanced and robust review. In 2000, in response to our report that found a lack of consistency in how program quality scores were developed, the BBG stated that it intended to harmonize and standardize program reviews across broadcasting entities. BBG guidance now calls for all U.S. 
international broadcasting entities to be evaluated using the same standards, definitions, and scoring methods. However, when conducting the Radio Sawa program review, the BBG relied only on audience monitoring panels to assess program quality and did not utilize internal analysts or external control listeners, as is common practice among other U.S. international broadcasters. Using only audience monitoring panels gives the audience more weight in the review results and, in turn, more potential to influence the strategic direction of the organization. In addition, since the audience tends to be unfamiliar with a broadcasting service’s journalistic standards and editorial procedures, having input only from monitoring panels makes it more difficult for a service to provide reasonable assurance that its editorial procedures are working and that it is broadcasting quality programming. Also, external control listeners are specifically tasked with examining the programming in light of the service’s mission, something audience monitoring panels are not asked to do. As a result, Radio Sawa’s program review strongly emphasized the audience’s perspective and therefore provided a less thorough evaluation of Radio Sawa’s mission and standards. (For more information on Radio Sawa’s program review, see appendix III.) Furthermore, MBN’s program score for Radio Sawa is not comparable with other BBG broadcaster scores for program quality--the percentage of a station’s language services judged on both content and presentation criteria as being of good-or-better quality. The BBG says that to measure Radio Sawa’s program quality, it has developed standardized criteria applicable to different media and methods of delivery, while minimizing subjective judgments on content and presentation. 
Although the criteria used to measure Radio Sawa’s program quality are similar to those used by other broadcasting entities, the BBG did not use as many inputs when calculating the program quality scores for Radio Sawa, leading to a less robust result (see table 2). Without a consistent process for broadcaster program reviews, the BBG is limited in its ability to assess and compare broadcaster performance. In addition, without a program review to develop a program quality score for Alhurra, the BBG will not be able to measure the contribution of these efforts to the goals of the organization, or be able to ensure that the quality of Alhurra’s broadcasts conforms to applicable standards. Currently, the BBG is considering how it will conduct the future program reviews for Radio Sawa, but officials could not yet provide specifics about the approach BBG will use. MBN staff in the United States and overseas do not appear to have much knowledge of the Radio Sawa review or its follow-up. Program review can be a learning experience for staff, who are usually encouraged to sit in on their language service’s program review meeting, and the process can also bring people together who normally do not interact, which can help generate ideas and improvements. However, according to MBN management, most Radio Sawa staff did not attend the Radio Sawa program review. In addition, there was also not much awareness about the program review in MBN’s overseas offices. While Radio Sawa staff lacked awareness about their review, some staff who had worked at VOA in the past spoke positively of the program review process in general. Other broadcasters routinely involve staff in their program reviews. For example, RFA requires all staff from the service that is being reviewed to be present at the program review meeting. 
According to an RFA official, program review is one of the few times that all of the key players for a service are in one place, and it therefore presents a good opportunity for communication. In addition, a BBC official noted that program review can be a good way to expose staff to an organization’s values. Increasing staff involvement in program review could therefore increase chances for communication throughout all levels of the organization, as well as provide a forum for discussing potential programming improvements. The BBG has established several standard performance indicators and targets for MBN programs, including measures of audience size and credibility; however, it has not implemented some performance indicators fully, including a program quality measure. Additionally, we were unable to determine the accuracy of MBN’s audience size and program credibility estimates due to weaknesses in MBN’s methodology and documentation. Therefore, it is not clear whether the Radio Sawa and Alhurra performance targets have been met. The Government Performance and Results Act of 1993 (GPRA) requires that all government agencies establish performance indicators, or measures, that provide a meaningful reading of how well the organization is progressing towards its goals. The BBG has developed a standard set of performance indicators for its broadcasting entities, which it says are a best effort to measure its level of effectiveness now and where its performance is targeted to be in the future. The BBG established common indicators for its entities to allow it to better assess overall progress for the organization. Although the BBG has made progress in establishing these standard performance indicators, as well as targets for MBN’s programs, it has not formally established or implemented all of them for Radio Sawa and Alhurra. 
According to BBG, the three most important standard indicators for its entities, referred to as primary performance indicators, are (1) audience size, or the overall weekly audience of a station; (2) credibility, which represents the percentage of viewers in a target area that consider the station’s news programs somewhat or very reliable; and (3) program quality, or the percentage of a station’s programs judged on standard criteria as being of good-or-better quality. These three indicators are tied to the current BBG mission and strategic plan for U.S. international broadcasting. In addition to its standard primary performance indicators, the BBG has a number of “secondary” measures that provide management with additional information for gauging cost-effectiveness, marketing and promotion activities, and transmission efforts, including the number of transmitters and affiliates, cost per listener, signal strength (radio only), and awareness. The BBG has established and implemented two primary indicators--audience size and credibility--for both Radio Sawa and Alhurra. The BBG has also established performance targets for Radio Sawa and Alhurra for these two indicators. For example, MBN’s reported fiscal year 2005 audience size performance target for Radio Sawa was 18 million listeners and for Alhurra was 12.8 million viewers. However, the BBG has not yet established Alhurra’s program quality indicator, or consistently implemented this measure for Radio Sawa. Without a measure of program quality for Alhurra and Radio Sawa, the BBG will not be able to consistently assess MBN’s performance against that of other grantees, or fully assess MBN’s contribution to the overall goals of the BBG organization. The BBG has established all of its standard secondary performance indicators for MBN’s services except for audience awareness. See table 3 for a list of the performance indicators implemented for Alhurra and Radio Sawa.
We and others have noted that agencies’ performance indicators and data should provide a reliable means to assess progress. However, we were unable to determine the accuracy of MBN’s reported audience size and program credibility estimates due to weaknesses in MBN’s methodology and documentation. Therefore, it is not clear whether the Radio Sawa and Alhurra performance targets have been met. While BBG has taken several important steps to ensure the validity and reliability of its performance measurement approach, it has primarily used a methodology that cannot be reliably projected to the broader population. Although it is difficult to conduct probability sampling in many locations in the Middle East, the BBG has not taken steps to explain and increase the reliability of MBN’s performance information, such as by maintaining more detailed documentation to support its estimates, reporting significant data limitations, limiting the scope of its projections to areas actually covered by its surveys, and developing BBG policies and procedures for verifying performance data. We have previously reported that performance indicators should provide a reliable way to assess progress. In particular, agencies should implement quality control procedures to mitigate errors that can occur at various points in the collection, maintenance, processing, and reporting of performance data and can impact its reliability. In addition, agencies should select sampling methods that ensure representative samples, where possible. For example, probability surveys are designed to ensure each person in the population has a measurable chance of being selected for the survey, enabling the results to be reliably projected to the larger population with known levels of precision. Additionally, agency performance reporting should provide sources, disclose limitations, and discuss the implications of them. 
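The "known levels of precision" that probability sampling affords can be made concrete with the standard margin-of-error formula for an estimated proportion. The sketch below uses purely hypothetical figures, not data from any BBG survey, and assumes simple random sampling; it illustrates the general formula rather than the BBG's actual methodology.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p estimated from a simple
    random sample of size n, at roughly 95 percent confidence
    (z = 1.96). Complex survey designs require design-effect
    adjustments, which is one reason full documentation of the
    sampling plan matters."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a 30 percent weekly listening rate measured
# in a simple random sample of 1,500 adults.
p, n = 0.30, 1500
moe = margin_of_error(p, n)
print(f"Estimate: {p:.0%} +/- {moe:.1%}")  # roughly +/- 2.3 points
```

A nonprobability (judgment) sample supports no such calculation, which is why the absence of probability sampling in 12 of 14 surveys limits what can be said about the precision of the resulting estimates.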
Explaining the limitations of performance information--as well as actions taken to compensate for low-quality data--can provide context for understanding and assessing agencies’ performance and the costs and challenges that agencies face in gathering, processing, and analyzing needed data. The U.S. International Broadcasting Act of 1994 requires the BBG to use audience research to guide its decisions about its language services. For the past 5 years, the BBG has contracted with Intermedia to serve as the primary research contractor for the BBG and its broadcasting entities, including MBN. Intermedia works with subcontractors, partners, or both to gather audience research survey data from citizens in various locations overseas. This survey information is used by the BBG to develop estimates for MBN and other entities’ audience size and credibility performance indicators. For example, for fiscal year 2005, the BBG estimated that Radio Sawa and Alhurra had each achieved an audience size of about 21.6 million people. Table 4 shows the breakdown of individual survey data reported by the BBG. Although the BBG has taken several important steps to enhance the validity and reliability of its audience survey designs, several factors call into question the accuracy of the data used for BBG’s audience size and credibility performance indicators for Radio Sawa and Alhurra. First, we observed that, in 12 out of 14 cases, the BBG used nonprobability surveys, which cannot be reliably projected to the broader population, to develop its regional estimates for audience size and credibility. While it is difficult to conduct probability surveys in hostile environments, such as those in Iraq and Saudi Arabia, the BBG did not take certain steps that could have increased the accuracy of its estimates and explained their limitations, thereby increasing confidence in the data. Therefore, we are unable to determine whether MBN actually met its performance targets for fiscal year 2005.
BBG has taken several important steps to enhance the validity and reliability of its audience survey designs. For example, the BBG’s questionnaires are reviewed by multiple parties, and its contractors extensively pretest the questionnaires in the field, conduct pilot studies, and use throwaway surveys for training new subcontractors. In addition, the BBG’s contractors exercise quality control when collecting data and receiving it from the field, including by conducting preliminary electronic testing of the data, among other things. However, in 12 of 14 cases, we found the country-level estimates used for generating Radio Sawa and Alhurra’s fiscal year 2005 performance indicators of audience size and credibility were not based on probability survey results. Many of the surveys conducted used judgment sampling, a form of nonprobability sampling, instead. BBG officials told us that its use of nonprobability surveys for certain countries is due to either the cultural, political, or security situation in those countries, which limits the selection of individuals or the geographical areas that can be surveyed. In addition, the International Broadcasting Bureau director of research stated that in many developing countries, existing map and population data is not adequate to support pure probability-based sampling. The Conference of International Broadcasters’ Audience Research Services, called CIBAR, whose standards are specified as a source of guidance for BBG research contractors, requires that audience measurement use samples based on the principles of random probability and that other sampling methods should only be used in cases where, for reasons of practicality or cost, proper random samples cannot be used. (See appendix IV for more on the CIBAR standards.) 
We recognize that many agencies face challenges in collecting credible performance data and that, due to security risks and political considerations in many Middle Eastern countries, it is not always possible to expect BBG to use random samples. However, the BBG did not take certain steps that could have explained and increased the reliability of its estimates, such as fully documenting its research methods, measuring the level of uncertainty surrounding its estimates, disclosing significant limitations, limiting the scope of its projections to areas actually covered by its surveys, and developing and implementing procedures for verifying data. First, the BBG and its research contractors were unable to provide us with certain documentation commonly required by international broadcasting research standards. CIBAR requires that, in all measurement research, the sampling methods used and other technical aspects of the survey be both fully and accurately described in the project documentation and open to independent scrutiny. We asked the BBG and its contractors to provide us with detailed documentation--including clear information on sampling plans and related assumptions, response rates, and adjustments applied to the data to reflect the effects of the survey design--for all 14 of its Middle Eastern country surveys used to develop its fiscal year 2005 performance indicator estimates. For two cases, the BBG was unable to provide us with any survey documentation, and for all but one case, the BBG and its contractors were unable to provide us all the detailed information we requested. BBG officials acknowledged the lack of complete documentation in contractor technical reports, and said it was due in part to their failure to follow up with the contractors to obtain the details, as well as to the contractors’ general practice of not generating such detailed documentation.
Second, the BBG has not sufficiently measured the level of uncertainty surrounding MBN’s performance estimates. CIBAR requires that in all measurement research, technical aspects of the survey, including margins of error and confidence levels where appropriate, be both fully and accurately described in the project documentation. The BBG has not been able to measure sampling errors for its surveys, in part because it has not required contractors to document the information that is needed to calculate the sampling errors accurately. Moreover, officials said that it is not customary for their contractors to maintain this information. Therefore, the BBG only has a rough idea of what the margins of error might be for its surveys, further limiting confidence in the reliability of its current performance information. One research official told us that he believed that the overall margins of error for Radio Sawa and Alhurra’s audience size and credibility estimates are large, but said that currently the agency cannot accurately calculate them. Third, the BBG lacks transparency in reporting data sources and significant limitations affecting MBN’s audience size and credibility performance information. In reporting performance data, agencies should provide data sources, disclose limitations, and discuss the implications of them. CIBAR standards also recommend that proper care be exercised when reporting estimates, to ensure that the type of audience covered by the estimate is clearly stated and that, at all stages in the calculation and extrapolation process, sources, assumptions, and methods be fully documented and available for independent scrutiny. However, the BBG has not sufficiently explained the specific methods used for generating estimates for its performance indicators, such as the number and names of the countries surveyed, methods of sampling used, sources of the population data, and basic procedures used to create the estimates. 
Moreover, the BBG has reported only two limitations for MBN performance indicators to date: (1) that credibility ratings are highly dependent on volatile political factors; and (2) that, depending on political, social, and media conditions, measurement of audience size may either be easily attained or impossible. However, the BBG has not explained many other significant limitations that affect the reliability of MBN’s performance information, or the implications of those limitations. The largest identifiable limitations not reported are that many of the BBG surveys are not based on probability sampling, cover only part of the country, have very low response rates, or have high substitution rates; therefore, those results cannot be reliably projected to be representative of the larger population. In the case of the survey in Morocco, we calculated that the survey only covered 35 percent of the general population and had a 48 percent substitution rate, but the results were projected to represent a broader population. In addition, we calculated low response rates for a number of MBN’s surveys; in the case of the survey of Egypt, the response rate was about 19 percent. BBG research staff explained that their stakeholders to date, including the BBG, have not required such a level of detail in reporting, and specifically have not required margins of error. However, a discussion on data limitations in performance reporting can help decision makers determine their level of confidence in the agency’s ability to report on its performance goals and indicators and identify actions needed to improve its ability to measure performance. Fourth, we found that the BBG has not taken sufficient steps to avoid projections to areas outside the population surveyed.
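The response-rate and substitution-rate figures cited above are simple ratios. The sketch below uses hypothetical counts chosen only to reproduce rates like those cited in the text; it is not based on the actual survey records, and real research standards define several more refined variants of these rates.

```python
def response_rate(completed, eligible_contacted):
    """Share of eligible sampled individuals who completed the
    survey. Shown in its simplest form for illustration only."""
    return completed / eligible_contacted

def substitution_rate(substituted, completed):
    """Share of completed interviews obtained from substitute
    respondents rather than the originally sampled individuals."""
    return substituted / completed

# Hypothetical counts: 1,000 completions from 5,200 eligible
# contacts, with 480 of the completions coming from substitutes.
print(f"Response rate: {response_rate(1000, 5200):.0%}")         # 19%
print(f"Substitution rate: {substitution_rate(480, 1000):.0%}")  # 48%
```

Low response rates and high substitution rates both open the door to nonresponse bias, which is why standards call for them to be documented and disclosed alongside the estimates.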
For example, the elimination of projections to the rural population of Morocco, when only urban areas were surveyed, or to those governorates of Jordan not contained in the survey, would have resulted in estimates of audience size and credibility with fewer limitations for those countries. In addition, BBG project documentation does not clearly describe what steps have been taken to restrict the scope of MBN’s surveys to provinces or areas that have sufficient map or population data. Probability sampling projected to those areas only would result in more reliable estimates of audience size and credibility. Fifth, we found the lack of verification procedures inhibits assessments of BBG’s data quality. Data reliability is increased by the use of verification and validation procedures, such as checking performance data for significant errors by formal evaluation or audit. However, BBG has not fully implemented such procedures or formally documented policies and procedures governing its research. BBG officials said they have implemented some forms of verification, such as a research director review, at various stages of performance data analysis. However, it is not clear that the BBG always thoroughly verifies performance indicator estimates and calculations used for reporting. For example, we identified some errors in internal calculation spreadsheets for performance estimates, in official external reporting, and in an informal presentation on MBN performance. BBG officials have acknowledged that they need to develop policies and procedures and implement more rigorous verification of performance data. They said that, as a result of our review, they have taken some steps to expand their verification procedures and will document those and other research procedures in the BBG’s research contract and manual of operations. 
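Restricting a projection to the population actually covered by a survey, as recommended above, changes the arithmetic of an audience estimate in a straightforward way. The figures below are invented for illustration and are not BBG data.

```python
def audience_estimate(listening_rate, population):
    """Project a surveyed listening rate onto a population base."""
    return listening_rate * population

# Invented figures: a 25 percent weekly reach measured in urban
# areas only.
urban_pop = 10_000_000   # adults in the areas the survey covered
total_pop = 28_000_000   # adults nationwide, including rural areas

covered_only = audience_estimate(0.25, urban_pop)   # defensible
projected_all = audience_estimate(0.25, total_pop)  # overreaches

print(f"Covered-area estimate: {covered_only:,.0f}")   # 2,500,000
print(f"Nationwide projection: {projected_all:,.0f}")  # 7,000,000
```

Because rural listening habits may differ sharply from urban ones, the larger projection rests on an unsupported assumption; limiting the estimate to the covered population, or surveying the uncovered areas, avoids that problem.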
MBN was established to broadcast accurate and relevant news and information to the Middle East in order to advance freedom, democracy, and long-term U.S. interests in the region. MBN's programs are challenged by existing television competitors, such as Al Jazeera and Al Arabiya, and by planned initiatives such as the BBC's forthcoming Arabic-language television network; by numerous local radio stations across the region; and by limited opportunities for increasing coverage of radio transmission. It has attempted to address these challenges through some planning efforts, such as by developing proposals to increase its news time for Alhurra and expand its number of news bureaus in the Middle East. However, MBN has not developed a strategic plan or taken a detailed strategic approach to addressing certain issues, including identifying opportunities for additional gains that could be made from further integrating Radio Sawa and Alhurra or fully utilizing MBN's overseas offices. Developing a strategic plan that establishes specific objectives, provides an overall shared vision or framework for decision-making, and comprehensively addresses competitive challenges could enable MBN to identify efficiencies or opportunities to address its competitors more effectively and increase congressional confidence in its operations. Although MBN has developed a number of financial and administrative controls, it could take additional steps to ensure its system of internal control is fully implemented. MBN's planned internal control board needs to establish protocols to oversee and monitor its internal control structure and to ensure timely completion of MBN's financial audits. Additionally, MBN should develop an internal control plan, conduct a comprehensive risk assessment, and develop an organized training program for its staff. Further, MBN has developed journalistic standards for its broadcast operations and has put in place a number of editorial procedures. 
However, the network has not conducted annual program reviews called for in BBG guidance. These reviews are a key mechanism for improving programming and ensuring quality control. Finally, the BBG has established performance indicators and targets for MBN services related to measures of audience size and program credibility. For example, in fiscal year 2005, the BBG reported that Radio Sawa and Alhurra had achieved radio and television audiences of about 21.6 million and 21.5 million, respectively. However, limitations in the survey methods and documentation used for developing these estimates raise questions about the accuracy of MBN's performance estimates. In addition, the BBG has not put in place policies and procedures for verifying the accuracy of its performance information. These weaknesses in methodology and documentation inhibit an accurate assessment of whether the Radio Sawa and Alhurra performance targets have been met. The accuracy of the MBN estimates could be improved by more rigorous reliance on probability sampling, wherever possible, and avoidance of projections to areas not contained within the survey. At the same time, insistence on proper fieldwork documentation from contractors, full disclosure of survey methods and limitations, and greater transparency in the development of aggregate audience estimates would further enhance confidence in MBN performance estimates. To improve efforts to monitor performance and the efficiency and effectiveness of the broadcasting activities of the Middle East Broadcasting Networks, Inc., we recommend that the chairman of the BBG do the following: Require that MBN develop a long-term strategic plan, which incorporates a shared vision for Alhurra and Radio Sawa operations and details specific, measurable objectives and strategies for achieving the goals in the plan. 
Require that MBN implement the remaining recommendations from the Grant Thornton LLP report and require that its internal control board meet on a regular basis to coordinate MBN's single audits and oversee MBN's ongoing efforts to use sound internal control procedures. Develop a process for analyzing risk as part of strategic planning that identifies approaches to mitigate the potential obstacles to efficiently and effectively achieving MBN's operational objectives. Require MBN to develop a comprehensive training program covering both internal controls and editorial procedures to meet the continuing needs of all employees. Initiate a schedule of annual program reviews for Radio Sawa and Alhurra to regularly ensure that the quality of Alhurra's broadcasts conforms to applicable standards. Implement program quality performance indicators for MBN's broadcast services, consistent with other BBG entities, to assess and compare their performance and measure the contribution of these efforts to the goals of the overall organization. Require research contractors to improve the methods used in audience research to allow for probability sampling and document the sample selection so that survey sampling errors can be calculated, where possible. Identify and report significant methodological limitations and their implications for performance indicators, including, where applicable, sampling errors, margins of error, or confidence intervals. Develop, document, and report policies and procedures for verification and analysis of performance indicator estimates. The Broadcasting Board of Governors (BBG) provided written comments on a draft of this report. The BBG's comments, along with our response to specific points, are reprinted in appendix V. The BBG also provided technical comments, which we incorporated where appropriate. In general, the BBG concurred with all of our recommendations and said it looked forward to implementing them. 
The BBG said that MBN has made significant progress in the 2 years of its operations in establishing a sound journalistic organization with financial and administrative controls. However, the BBG raised a number of concerns about the report’s criticisms of the audience research conducted by the BBG and its contractors. Specifically, the BBG said that we did not fully understand the difficulty in surveying audiences in Middle Eastern countries and that the research practices used by the BBG and its contractors follow industry standards for commercial and media research. Our report examines the reliability of the BBG’s fiscal year 2005 performance information in order to determine whether or not MBN’s performance targets have been met. We acknowledge, in our report, that BBG has taken positive steps to enhance the validity and reliability of its audience survey designs. We also acknowledge that there are challenges to conducting audience research in the Middle East, and that there are tradeoffs between cost and data reliability when conducting research. In conducting our data reliability assessment, as referenced in the draft, we largely used the international audience research guidelines published by the Conference of International Broadcasters’ Audience Research Services (CIBAR), which are specified as a source of guidance for BBG research contractors. For example, CIBAR standards recommend that the proper care be exercised when reporting estimates to ensure that the type of audience covered by the estimate is clearly stated, and that, at all stages in the calculation and extrapolation process, sources, assumptions, and methods are fully documented and available for independent scrutiny. We were unable to determine the accuracy of MBN’s fiscal year 2005 audience size and program credibility estimates due to weaknesses in MBN’s methodology and documentation. 
As noted in the report, in several instances the BBG and its contractors departed from CIBAR research standards. In particular, the BBG did not take certain steps that could have explained and increased the reliability of its estimates, such as by fully documenting its research methods, measuring the level of uncertainty surrounding its estimates, disclosing significant limitations, limiting the scope of its projections to areas actually covered by its surveys, and developing and consistently implementing policies and procedures for verifying data. For these reasons, it is not clear whether the Radio Sawa and Alhurra performance targets have been met. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix VI. To accomplish our objectives, we reviewed documentation and spoke with officials from the Department of State; the Broadcasting Board of Governors (BBG) and its grantees, including the Middle East Broadcasting Networks, Inc. (MBN), Radio Free Europe/Radio Liberty (RFE/RL), and Radio Free Asia (RFA); the Voice of America (VOA); and the International Broadcasting Bureau. We also spoke with several outside experts, including representatives from National Public Radio, the InterMedia research organization, Zogby International, and foreign international broadcasters such as the British Broadcasting Corporation (BBC) and Deutsche Welle. In addition to our audit work in the Washington, D.C., area, we visited MBN's offices in Amman, Jordan, and Dubai, United Arab Emirates. 
We held telephone interviews with MBN's current affairs contractor in Beirut, Lebanon, and sent questions to the MBN office in Baghdad, Iraq. Because our work was focused on reviewing the BBG's MBN initiatives--Alhurra and Radio Sawa--we did not include RFE/RL Arabic-language media activities in Iraq in the scope of our work. To address our objective of assessing MBN's internal controls, we used criteria contained in MBN's grant agreement, OMB Circulars No. A-110 and A-133, GAO published standards on internal controls, and a report on MBN's internal controls completed by Grant Thornton LLP accountants and management consultants. We also met with other broadcasters, including RFE/RL, RFA, and National Public Radio, to obtain an understanding of their internal controls and to make comparisons with MBN. We held discussions with MBN officials and reviewed documentation provided by them to determine whether the organization was complying with appropriate criteria. We spoke with PriceWaterhouseCoopers officials responsible for the 2003, 2004, and 2005 single audits to obtain their perspectives on the status of problems cited in the Grant Thornton LLP report and to learn about any new issues arising from their audit work. We did not test MBN's controls, since doing so would have been beyond the scope of our review. To address our objective of assessing the procedures MBN has developed to ensure it complies with its journalistic standards, we spoke with other broadcasters, including RFE/RL, RFA, VOA, and National Public Radio, to compare their editorial standards and procedures with those of MBN. We also met with the BBG general counsel, as well as MBN producers, editors, and journalists. To better understand the program review process, we attended several VOA program reviews. 
We also held detailed discussions and obtained documentation on program review from the BBG officials who conducted Radio Sawa's review, the International Broadcasting Bureau office of performance review, and the research directors of RFA and RFE/RL. We did not test MBN's editorial procedures. To address our objective of determining the extent to which the BBG met or exceeded its fiscal year 2005 MBN performance targets, we reviewed the reliability of BBG audience research that was used to generate the estimates of credibility and audience size contained in the BBG's Fiscal Year 2007 Budget Request, which was the BBG's most up-to-date source of officially reported GPRA performance information for MBN for fiscal year 2005. We did not review the inputs used to generate performance information for fiscal years other than 2005, nor did we review the reliability of the data used to develop other MBN performance indicators reported in the 2007 Budget Request. To conduct our assessment of the reliability of BBG estimates of credibility and audience size for fiscal year 2005, we reviewed available documentation provided as of May 2005 against applicable standards and common professional research practices, including Conference of International Broadcasters' Audience Research Services (CIBAR) guidelines and the professional standards of the American Association for Public Opinion Research. We reviewed relevant surveys, including technical reports and printouts of the survey results, as well as spreadsheets used to calculate the performance indicator estimates. We also conducted a series of interviews and corresponded extensively with the BBG's coordinator for performance planning and research, the head of the International Broadcasting Bureau office of research, and officials from InterMedia to discuss the survey methods and how performance indicator estimates were generated. 
In addition, we requested information from the BBG's research subcontractors, including AC Nielsen and D3 Systems, to obtain details on the survey sampling plans and related assumptions, response rates, and adjustments applied to the data to reflect the effects of the survey design. As a result of our review, we determined that MBN's reported audience size and credibility estimates are not statistically reliable but rather are rough estimates of performance. Radio Sawa currently has seven programming streams, with the Iraq programming stream airing more news and features than the other streams. We reviewed one week of programming and found that the Iraq stream aired about 50 hours of news and features a week, while five other streams aired about 40 hours a week (see table 5). Alhurra's programming focuses on news and information, including hourly news updates, daily hour-long newscasts, and current affairs talk shows (see table 6). In addition, Alhurra broadcasts current affairs shows on subjects including health and fitness, entertainment, sports, and science and technology. Alhurra also airs documentaries on a diverse range of topics designed to appeal to a broad audience. In a sample week, Alhurra's acquired programming, which mainly includes documentaries, accounted for the largest number of hours aired, about 35 percent of the total weekly programming hours--or approximately 60 hours in a week. News and news updates amounted to 22 percent of weekly programming hours. In contrast, Alhurra-Iraq's talk shows accounted for the largest number of hours aired--about 46 hours, or 27 percent of total programming time (see table 7). The BBG initiated the Radio Sawa review because MBN did not have the in-house capacity to do so, according to a BBG official. Moreover, both the BBG and MBN wanted to ensure that the review was conducted by an entity other than MBN to make it more independent. 
To plan and conduct this review, BBG officials reported that they held preliminary discussions with the International Broadcasting Bureau's office of performance review, hired an outside expert to serve as a consultant to the project, and contracted with the InterMedia research organization to convene panels of regular listeners in Iraq, Morocco, Egypt, and Jordan. According to representatives of InterMedia, they recruited a large number of panelists because the BBG wanted more input than usual. Panelists listened to and evaluated 6 days' worth of programming, responded to a questionnaire that evaluated all of Radio Sawa's broadcast criteria, answered open-ended questions, provided detailed feedback, and made recommendations about the programming. Overall, the results of Radio Sawa's monitoring panels were generally positive, with an overall program quality score of 2.9 out of 4, which is considered "good" by the BBG. To follow up on the program review, MBN is attempting to implement action points drafted by the BBG after the review, such as reducing abrupt transitions between music and news and localizing Radio Sawa's streams. The BBG told us it placed great emphasis on the audience's response, noting that if there had been any problems, they would have heard about them from the listeners. The BBG's research contractor concurred that it is important to obtain the audience's perspective, since the audience can judge programming in the context of the alternatives available to them in the local media market. According to a BBG official, it was decided that information obtained from the audience monitoring panel would capture the issues that would normally be covered by the internal analysts in the content and production analyses. However, even a detailed, well-thought-out questionnaire does not replace the different perspectives internal and external control reviewers bring to the review, according to a grantee official. 
According to an International Broadcasting Bureau official, even though the three components of a program review--audience panels and internal and external reviewers--use nearly the same criteria, they provide different insights into the program's quality, direction, and context. According to several media experts, while audience panelists can evaluate news and information from their own perspective, they are less qualified to evaluate a service's mission or judge the extent to which a show complies with journalistic standards. In particular, the absence of an internal review meant that Radio Sawa's programming was not evaluated by people familiar with BBG and MBN standards and controls. Moreover, the external control listener's evaluation is intended to give the service a sense of what the panel's responses would be like if the environment were freer and more open, information that a monitoring panel cannot provide. The BBG told us that it did not believe an internal evaluation was necessary because the English translations of each of the panel reports enabled them to examine and assess MBN's programming. The BBG also told us that it did not use external control listeners in the review for several reasons: (1) Radio Sawa's focus on youth and popular music would make it difficult for a typical older control listener to evaluate Sawa's programs; (2) the BBG lacked resources to use control listeners for the multiple countries in Radio Sawa's review; (3) it would have been hard to find impartial listeners; and (4) the four in-country panels ensured enough diversity of opinion so that there was less need for a control reviewer. However, according to an International Broadcasting Bureau official, the VOA is able to successfully evaluate youth shows using its regular internal reviewers. Moreover, it may be possible to use younger control listeners, or to use the older ones selectively. For example, RFE/RL uses a control listener to evaluate the news of Radio Farda. 
In addition, the challenge of finding impartial control listeners is not unique to the Middle East, but is potentially present to some degree for every language service. The Conference of International Broadcasters' Audience Research Services (CIBAR) guidelines were developed by an international group of broadcasters--including the BBG's International Broadcasting Bureau--to encourage, among other things, the appropriate use of audience and market data for decision-making within international broadcasting organizations, the establishment and maintenance of standards in international audience research, and the efficient and responsible use and application within member organizations of all forms of audience feedback. By providing a context for international audience research and a clear statement of the minimum standards required, the guidelines attempt to address the challenges faced by international broadcasters, including the tension between the needs of proper audience measurement and survey practice and the conditions and expertise in many of the countries where the research is carried out. The third edition of the guidelines, published in 2001, covers the nature of international audience research measures; survey design, sampling, and fieldwork; and data management and reporting. Specific guidelines relevant to this report include the following: 4.1: Samples and method: The basis for audience measurement should be samples based on the principles of random probability. The principles of random selection should be applied at all stages of the sampling process, from initial sampling point to selection of individuals. Quota samples should be used only in cases where, for reasons of practicality, cost, or both, proper random samples cannot be used. When quota samples are used, this should be clearly labeled in the reporting and documentation of the data. 
It should be a requirement of all measurement research that the sampling methods used and other technical aspects of the survey, including margins of error and confidence levels where appropriate, be both fully and accurately described in the project documentation, and open to independent scrutiny. 4.2: Survey coverage: Where certain groups are disproportionately sampled and weighting techniques are applied at the analysis stage to correct for this, project documentation should give full details of the weights applied. 5.2: Data reporting: The research agency should provide the following information to a client: Background information--client for whom the study was conducted; purpose of the study; names of subcontractors and consultants performing any substantial part of the work. Intended and achieved sample--universe covered; statistics used (e.g., census data); size, nature, and geographical distribution of the sample; sampling method and weighting methods used; response rates and possible bias due to non-response. Data collection--method of collection; field staff; briefing and field quality control; method of recruiting respondents; and fieldwork dates. Presentation of results--relevant factual findings obtained; bases of percentages (weighted and unweighted); margins of error; and questionnaire and other documents used. Proper care should be exercised when reporting such estimates, to ensure that the type of audience covered by the estimate is clearly stated. At all stages in the calculation and extrapolation process, it is vital that sources, assumptions, and methods be fully documented and available for independent scrutiny. All reporting of worldwide and regional audiences should be accompanied by a technical appendix giving details of the sources, assumptions, and measures used. The following are GAO's comments on the Broadcasting Board of Governors' (BBG) letter dated July 20, 2006. 1. 
GAO acknowledges in the report that MBN has participated in long-range planning in coordination with the BBG's update of its 2008-2012 strategic plan, and that MBN has developed a 2006 annual performance plan as part of the OMB Program Assessment Rating Tool process. However, contrary to the BBG's comments, the fiscal year 2006 performance plan document for MBN that we obtained did not contain all of the elements of a stand-alone strategic plan. In addition, we were told by the former executive director of the BBG, who is currently the president of MBN, that the Radio Sawa strategic plan drafted in 2002 by the BBG--which the BBG stated in its comments contributed to its considerable long-term planning for MBN--was never adopted by the BBG. We were also told by MBN's former president that he had never received a copy of the draft Radio Sawa strategic plan. 2. The BBG stated that MBN's Single Audit evaluates the effectiveness of its internal control program. However, while MBN's Single Audits address the fair presentation of the organization's financial statements, its compliance with certain laws and regulations, and the organization's internal control as it relates to financial reporting, MBN's 2005 Single Audit as prepared by PriceWaterhouseCoopers does not--contrary to what the BBG stated in its comments--offer an opinion on the overall effectiveness of MBN's internal control. Thus, the unqualified opinion that MBN received on its fiscal year 2005 Single Audit does not provide a broad assessment or opinion of MBN's internal control system. For the purposes of our report, it was therefore necessary to consider other assessments of MBN's financial and administrative controls, including the Grant Thornton report. 3. MBN's grant agreement states that MBN should make every reasonable effort to achieve the purpose of the grant in accordance with OMB Circulars A-110, A-122, and A-133. 
Our report did not attempt to make an assessment of MBN's compliance with its grant agreement, but rather focused on the progress MBN has made in developing its system of internal control and the ways in which MBN can continue to strengthen this system. To that end, we drew upon the work of Grant Thornton LLP, which assessed the status of MBN's controls against relevant OMB Circulars and GAO guidance on internal control. 4. While the Grant Thornton recommendations are not obligatory, they are based on best management practices from OMB and GAO. According to officials, both the BBG and MBN accepted the results of the report, which provides detailed insight into MBN's financial operations, when it was completed in May 2004. Adopting the report's remaining recommendations would help MBN in its efforts to build a fully mature internal financial management operation. 5. The BBG stated that it believes its standards yield data of sufficient reliability to allow it to estimate MBN's audiences and credibility. However, the BBG offers no evidence for this. BBG's estimates for MBN are based on judgment and not statistics, and its current methods do not and cannot estimate the error in its estimates. As noted in the report, in several instances the BBG and its contractors departed from CIBAR research standards. For example, we analyzed 12 of the 14 country surveys conducted by BBG contractors for 2005. In the BBG's fiscal year 2007 budget request, the BBG reported the results of each country survey and an overall estimate for audience size of 21.5 million for Alhurra and 21.6 million for Radio Sawa. 
However, our analysis of the 12 surveys identified a number of methodological weaknesses, including BBG’s failure to fully document research methods, measure the level of uncertainty surrounding its estimates, disclose significant limitations, limit the scope of its projections to areas actually covered by its surveys, and develop and consistently implement policies and procedures for verifying data. These limitations were not reported in BBG’s audience survey estimates and prevent us from concluding that the estimates are accurate and reliable. 6. We acknowledge in the report that there are challenges to conducting audience research in the Middle East, and that there are tradeoffs between cost and data reliability when conducting research. 7. As we noted in the report, it is difficult to conduct probability sampling in many locations in the Middle East. But, it is not impossible. Organizations, including the United States Census Bureau, have collaborated on probability surveys in the Middle East, including in Saudi Arabia and Jordan. The major problem of the BBG estimates is their lack of transparency, and the lack of an explanation for the methodology behind BBG’s estimates. 8. In conducting our data reliability assessment, as referenced in the report, we largely used the international audience research guidelines published by CIBAR. The BBG participated in drafting these standards, which are specified as a source of guidance for BBG research contractors. CIBAR standards recommend that proper care be exercised when reporting estimates to ensure that the type of audience covered by the estimate is clearly stated, and that, at all stages in the calculation and extrapolation process, sources, assumptions, and methods are fully documented and available for independent scrutiny. As noted in the report, in several instances the BBG and its contractors departed from CIBAR research standards, leading to weaknesses in MBN’s methodology and documentation. 
As a result, we were unable to determine the accuracy of MBN's fiscal year 2005 audience size and program credibility estimates. 9. The BBG stated that it believes the level of methodological detail it has provided in public documents is comparable with that typically used by other research organizations for studies in the Islamic world; therefore, it believes it is consistent with industry practice. However, the Reports Consolidation Act of 2000 requires federal agencies, such as the BBG, to assess the completeness and reliability of the performance data in their performance reports and to discuss any material inadequacies in the completeness and reliability of their performance data, as well as actions to address the inadequacies. In performance reporting such as the BBG fiscal year 2007 budget request, the BBG reported the results of its research for MBN as 21.5 million viewers for Alhurra and 21.6 million for Radio Sawa. However, the BBG did not sufficiently explain the specific methods used for generating estimates for its performance indicators, such as the number and names of the countries surveyed (including the sizes of its samples), methods of sampling used, sources of the population data, and basic procedures used to create the estimates. Moreover, the BBG reported only two limitations for MBN performance indicators to date: (1) that credibility ratings are highly dependent on volatile political factors; and (2) that, depending on political, social, and media conditions, measurement of audience size may either be easily attained or impossible. Significant limitations not reported are that many of the BBG surveys are not based on probability sampling, cover only part of the country, have very low response rates, or have high substitution rates. Therefore, the BBG has not given readers a full basis for confidence in its performance data for MBN. 
In contrast, the Department of Transportation has a separate compendium available online that provides source and accuracy statements, which detail the methods used to collect performance data, sources of variation and bias in the data, and methods used to verify and validate the data. 10. The BBG cannot calculate the sampling error for the BBG estimates of audience size and credibility because the probability of selection is not known. Although increased sample size will generally decrease sampling error, it is impossible to accurately estimate the sampling error of the BBG surveys because they are nonprobability surveys. As the BBG has stated, its use of a simple random sampling formula when calculating sampling errors for its surveys underestimates the sampling error. This formula is not appropriate for the sample designs used by MBN. 11. Although our recommendation asks the BBG to require research contractors to improve the methods used in audience research to allow for probability sampling and document the sample selection so that survey sampling errors can be calculated, where possible, it is the responsibility of the BBG to explain and justify the need to conduct nonprobability samples. Diana Glod, Melissa Pickworth, Eve Weisberg, Dorian Herring, and Joe Carney made key contributions to this report. Chanetta Reed, Jay Smale, Karen O'Conor, and Jackie Nowicki provided technical assistance.
The Broadcasting Board of Governors' (BBG) broadcasting services Radio Sawa and the Alhurra satellite television network--collectively known as the Middle East Broadcasting Networks, Inc. (MBN)--currently aim to reach Arabic speakers in 19 countries and areas throughout the Middle East. Annual spending for current activities amounts to about $78 million. GAO reviewed MBN's (1) strategic planning to address competition in the Middle Eastern media market, (2) implementation of internal control, (3) procedures MBN has developed to ensure compliance with its journalistic standards, and (4) performance indicators and whether targets have been met. MBN faces a number of competitive challenges in carrying out its mission of broadcasting in the Middle Eastern media market and has taken some steps to address them. However, MBN lacks a comprehensive, long-term strategic plan. As MBN emerges from its start-up mode and faces future challenges, a long-term strategic plan will be important. While MBN has developed financial and administrative controls to manage and safeguard its financial resources, it could take additional steps to strengthen its system of internal control. For example, MBN has not (1) convened a meeting of its internal control board to formally develop its controls and coordinate audits, (2) completed an internal control plan, (3) completed a risk assessment to address potential risks to its operation, or (4) developed a comprehensive training program for its staff. MBN has procedures in place to help ensure its programming meets its journalistic standards. However, MBN lacks regular editorial training and has not fully implemented a comprehensive, regular program review process to determine whether its programming complies with those standards or with MBN's mission. While the BBG calls for its broadcasters to undergo an annual program review, Radio Sawa has only held one such review, and Alhurra has not completed one to date. 
The BBG has developed several performance indicators and targets for MBN's Radio Sawa and Alhurra services, including measures of audience size and program credibility. However, it is not clear whether the Radio Sawa and Alhurra performance targets have been met because of weaknesses in MBN's survey methodology and documentation. The BBG did not take certain steps that could have increased the reliability of its estimates and explained their limitations, such as fully documenting its research and estimation methods, measuring the level of uncertainty surrounding its estimates, disclosing significant limitations, and consistently implementing policies and procedures for verifying data.
The Workforce Investment Act of 1998 requires states and localities to bring together about 17 federally funded employment and training services into a single system—the one-stop system. Funded through four federal agencies, these programs, also known as the mandatory partner programs (or more simply, mandatory partners), are to provide services through a statewide network of one-stop career centers. (See table 1.) Three of these 17 programs, which were created and funded by Title I of WIA to provide services to adults, dislocated workers, and youth, replace those previously funded under the Job Training Partnership Act (JTPA). The Department of Labor distributes funds for these three programs to the states, and the states in turn distribute funds to designated local areas within the states based on formulas prescribed by WIA. WIA also established performance measures that states and localities must track in order to demonstrate the programs’ effectiveness. The performance measures primarily focus on entered employment rates, employment retention rates, earnings changes, and credential rates. WIA programs provide for three levels of services for adults and dislocated workers: core, intensive, and training. Core services include basic services such as job search and labor market information. These activities may be self-service or may require some staff assistance. Intensive services include such activities as comprehensive assessment and case management, which require greater staff involvement. Training services include such activities as occupational skills training or on-the-job training. WIA requires the establishment of workforce investment boards at the state level and in local areas. The state boards are responsible for a number of functions, including the development and improvement of the statewide workforce investment system and the designation of local areas. 
The state board assists in the preparation of the state plan and the annual report, both of which are submitted to the Secretary of Labor. The local workforce investment board sets policy for the local area, and its specific duties include developing a comprehensive 5-year local plan and selecting one-stop operators. WIA contains a number of provisions to ensure that individuals with disabilities are adequately served. The most important of these provisions is Section 188, which prohibits any program or activity funded or otherwise financially assisted in whole or in part under WIA from discriminating on the basis of disability as well as race, color, religion, sex, national origin, age, or political affiliation or belief. To help states and local areas implement the Section 188 provisions, the Department of Labor issued interim final regulations in November 1999. These regulations, which have the force of law, describe requirements for the recipients of financial assistance under WIA Title I, and for programs and activities operated by the one-stop partners as part of the one-stop system. The regulations also identify how recipients will be held accountable for ensuring nondiscrimination and equal opportunity for individuals with disabilities. The WIA Section 188 regulations contain provisions that prohibit recipients of WIA financial assistance from taking certain discriminatory actions. For example, recipients must not: provide significant assistance to a person or entity that discriminates in providing any aid, benefits, services, or training to registrants, applicants, or participants; make a selection for the site or location of a facility that has the effect of excluding individuals with disabilities from, denying them the benefits of, or otherwise subjecting them to discrimination under any WIA-funded program or activity; or impose or apply eligibility criteria that screen out or tend to screen out individuals with disabilities, unless such criteria are necessary for the provision of the aid, benefit, service, training, program, or activity being offered. 
Further, WIA Section 188 regulations contain provisions that oblige recipients to take certain positive actions to provide comprehensive access to WIA programs and services. For example, these regulations require some recipients of WIA financial assistance—those who are in facilities or parts of facilities that are constructed or altered on their behalf—to make those facilities architecturally accessible. In contrast, recipients of WIA financial assistance who are in unaltered existing facilities are not necessarily required to make those facilities architecturally accessible, but are subject to other requirements for accessibility, known as program access, which specify that a recipient must operate each service, program, or activity so that it, when viewed in its entirety, is readily accessible to and usable by individuals with disabilities. Recipients of WIA financial assistance do not have to make each of their existing facilities or every part of an existing facility accessible to and usable by individuals with disabilities, and can satisfy the accessibility requirements for existing facilities by redesigning equipment, reassigning services to accessible buildings, assigning aides to beneficiaries, and providing home visits, among other options. As part of providing comprehensive access, WIA Section 188 regulations require recipients of WIA financial assistance to take a number of additional actions when administering their programs or activities. 
Under these provisions, recipients must: take steps to ensure that communications with individuals with disabilities are as effective as communications with others, including providing appropriate auxiliary aids and services where necessary; provide reasonable accommodation to qualified individuals with disabilities who are applicants, registrants, or eligible applicants/registrants for, or participants in, employees of, or applicants for, employment with their programs and activities, unless providing the accommodation would cause undue hardship; make reasonable modifications in policies, practices, or procedures, unless making the modifications would fundamentally alter the nature of the service, program, or activity; provide the most integrated setting appropriate to the needs of qualified individuals with disabilities; and take appropriate steps, such as advertising and marketing, to ensure that they are providing universal access to their WIA financially assisted programs and activities. The regulations also require recipients of WIA financial assistance to establish an administrative structure so that they ensure compliance with WIA’s nondiscrimination and equal opportunity provisions. Each recipient, except small recipients and service providers, must designate an equal opportunity (EO) officer who is responsible for ensuring that the recipient complies with Section 188 regulations. EO officers’ responsibilities include: monitoring and investigating activities by recipients of WIA financial assistance to ensure that they do not violate WIA Section 188 regulations, reviewing written policies to ensure that those policies are nondiscriminatory, and developing and publishing the recipient’s procedures for processing discrimination complaints. Recipients of WIA financial assistance must also provide written notification that they do not discriminate on the basis of disability or on other prohibited bases. 
This notification must be placed prominently in the facility and distributed through other means. In addition, recipients of WIA financial assistance must collect and maintain data necessary to allow Labor to determine whether the recipient is complying with Section 188 of WIA and the implementing regulations. The Director of Labor’s Civil Rights Center (CRC) determines which data are necessary. Under WIA Section 188 regulations, the governor of each state is responsible for, among other things: oversight of all WIA financially assisted state programs, ensuring compliance with WIA Section 188 and its implementing regulations, and negotiating with recipients to secure voluntary compliance when noncompliance is found. Moreover, both the governor and the recipient of WIA financial assistance are liable for all violations of Section 188 unless the governor has, among other things, established, signed, and adhered to a Methods of Administration (MOA). The MOA must be in writing and describe how the state programs and recipients of WIA financial assistance have satisfied the requirements of certain regulatory provisions, including those regarding people with disabilities. In addition, the Director of CRC has oversight responsibilities under the Section 188 regulations, which include: reviewing the activities of a governor, including the adequacy of the MOA, and investigating and resolving complaints alleging violations of Section 188. As part of its oversight responsibility, CRC, with assistance from the Employment and Training Administration (ETA) and the Office of Disability Employment Policy (ODEP), issued a compliance checklist on July 25, 2003, to ensure nondiscrimination and equal opportunity to persons with disabilities participating in WIA programs and activities. 
This checklist, officially known as the WIA Section 188 Disability Checklist, identifies the regulations implementing Section 188 of WIA, including portions of the regulations implementing Section 504 of the Rehabilitation Act, and covers requirements applicable to local area grant recipients regarding the operation of their programs and activities. The checklist is based on the elements required by the MOA and includes lists of questions for each element of the MOA. For some of the elements, the questions are followed by examples of concrete actions that can be taken to ensure compliance with Section 188 requirements. The appendix to the checklist includes additional examples of policies, procedures, and other steps that local area grant recipients can take to ensure compliance with Section 188. Labor has awarded grants to facilitate comprehensive access to employment and training programs for persons with disabilities, and local areas and one-stop centers have also made numerous efforts, with varying degrees of progress, toward facilitating comprehensive access to their programs and services. Specifically, ETA has awarded over 100 grants to states and local entities for disability-related activities, such as enhancing comprehensive access to the one-stops. States and local areas have used these grants for a range of efforts, including assessing one-stop architectural accessibility, acquiring assistive technology devices, and increasing staff capacity to provide services to persons with disabilities. Between 2000 and 2004, ETA awarded state and local entities a total of approximately $65 million in competitive Work Incentive Grants to enhance one-stops’ capacity to provide programs and services to persons with disabilities, which included improving one-stop accessibility. ETA awarded 113 grants in four rounds between 2000 and 2004. (See table 2.) 
On the basis of its experience administering the first two rounds of grants, ETA has targeted its specific grant objectives—and, therefore, its resources—to meet the emerging needs that states, local areas, and one-stops have identified in providing programs and services to persons with disabilities. ETA’s objectives for the early rounds of grants were relatively broad, and as a result, grantees were permitted to use the funds to undertake a range of activities, including: assessing one-stops’ architectural accessibility; acquiring assistive technology devices; conducting outreach to the disability community; linking and coordinating with community disability-related agencies, such as community mental health agencies and independent living centers; training existing one-stop staff on disability issues; and making available staff who have the experience, knowledge, and skills necessary to address a broad range of disability-related issues. By the third round of grants, in 2003, ETA had begun to focus its priorities more narrowly—though not exclusively—on increasing the capacity of one-stop staff to provide services to persons with disabilities. According to the third round grant notice, previous grantees had found that building staff capacity was successful in improving overall service delivery in their one-stops. ETA officials said that although they believe that building staff capacity will enhance one-stops’ progress toward making their services available to persons with disabilities, they recognize that some one-stops may also still need to address other issues, such as meeting the architectural access requirements. In addition to targeting their grant objectives, ETA officials said they plan to change the process by which they award grants. ETA used a competitive process to award all four rounds of grants, and as a result, according to ETA officials, some states or local areas that needed grants may not have received them. 
ETA officials said they plan to use a different process in the future, which would allow them to target funding toward specific areas, such as states that did not receive grants in the first four rounds and/or states where they would like to intensify current grant activities. ETA, in conjunction with the Social Security Administration (SSA), which administers employment support programs for its disability beneficiaries, has provided approximately $24 million to fund a demonstration project focused on the establishment and training of one-stop Disability Program Navigators. The Navigators’ role is to address the needs of persons with disabilities seeking to use the one-stop system. Since July 2003, Navigator grants have been awarded in a total of 17 states. At the time of our review, this initiative had led to 221 Disability Program Navigators working in or with one-stops in those states. As designed by ETA and SSA, in collaboration with ODEP, Navigators are to provide expertise and serve as a resource to one-stops as well as persons with disabilities. ETA and SSA expected that Navigators would, in part, carry out many of the same types of accessibility-related activities that were funded under the initial Work Incentive Grants. The third and fourth rounds of the Work Incentive Grants have led to the hiring of staff who can perform functions similar to those of a Navigator. At the time of our review, 122 Navigator-like staff had been established through the Work Incentive Grants. Eleven of the sites we visited had either Disability Program Navigators or Work Incentive Grant Navigators. 
Some of the Navigators we interviewed told us they had the following job responsibilities: providing disability-related staff training; helping staff locate resources for specific persons with disabilities, such as accommodations or services in the community; developing relationships with disability-related service providers, such as vocational rehabilitation (VR) agencies and other community agencies; and helping to ensure the accessibility of the one-stop, such as by conducting accessibility assessments or developing accessibility plans. During our site visits, we found that local areas and one-stop centers had made various efforts and degrees of progress in facilitating comprehensive access to the one-stops’ programs and services. Specifically, we found the following: Architectural access. Our site visits showed that most local area and one-stop officials were working to implement architectural access standards, which are required by the WIA Section 188 regulations. Nearly all of the sites we visited had undergone at least one architectural accessibility assessment within the last few years, and the assessments were typically conducted by VR or other disability-related agencies. Our review of these assessments showed that there were often considerable differences in the degree of architectural access that the locations had achieved. For example, some of the sites had either no or few problems with regard to architectural access. Other locations had a number of access-related problems, including those related to parking, ramps, and doors, as well as restrooms and signage. Some officials at these locations told us they had made at least some changes to improve architectural access. For example, some changes included: adding or changing accessible parking spaces to meet requirements; installing signage or changing existing signage to meet requirements; building a new exterior ramp because the existing one did not meet architectural access requirements; and installing electric door openers. 
Auxiliary aids and services. Many of the one-stops we visited had acquired auxiliary aids and services, such as assistive technology and materials in alternate formats, which the WIA Section 188 regulations require that one-stops provide to persons with disabilities when necessary. Auxiliary aids and services include a range of devices, equipment, and services that provide effective communication for persons with various types of impairments. According to ETA, the auxiliary aids and services requirement covers any method of communication, including verbal, written, computer-based, or telephone communications. Assistive technology refers to products or equipment that can be used to help people with disabilities perform their major life functions. Some types of assistive technology can be used to make existing information technology, including computers and telephones, available to persons with disabilities. Alternate formats can, for instance, make written or visual materials available to persons with visual impairments or make oral information available to persons with hearing impairments. Table 3 describes selected types of auxiliary aids and services that were available in some of the one-stops we visited. At the time of our site visits, a few one-stops had either recently installed assistive technology for the first time or were still in the process of acquiring it. However, other sites had assistive technology, and some or all of the staff had already received training in how to use it. Some of these sites offered a range of devices, which could assist many types of impairments. Given the wide variety of devices available, some local areas and one-stops targeted their resources, at least initially, toward items that might be used frequently. 
For example, one local area—working with an agency that had assistive technology expertise—collected data on the types of impairments that were most prevalent among potential customers and then used these data to determine which devices to purchase first. In addition, a couple of officials said that their one-stops had some materials, such as basic orientation materials, routinely available in Braille or large- print formats. Some officials told us that they did not have any of their materials routinely available in alternate formats, although they would provide these to customers upon request. In some cases, officials said that they could rapidly provide customers with certain types of alternate formats, such as Braille, large print, computer diskette, or compact disk, through the use of their assistive technology or computers. Reasonable accommodations. Some officials and staff we interviewed said they try to make reasonable accommodations for persons with disabilities. Reasonable accommodations, which are required by the WIA Section 188 regulations, enable persons with disabilities to receive aid, benefits, services, or training equal to that provided to persons without disabilities. For example, a number of officials and staff mentioned that although they did not have a qualified American Sign Language interpreter on-site at their one-stops, they have obtained an interpreter upon a customer’s request. However, during our site visits, we also found that local area and one-stops’ policies and procedures for providing reasonable accommodations varied. For example, officials from a few local areas and one-stops said they referred to their state workforce agency’s or their local government’s policies for guidance on this issue. A few officials said that they had developed their own local accommodation policies or procedures, or planned to do so. 
For example, one local area developed written policies and procedures that provided information on how customers should request an accommodation, which staff could assist in providing a reasonable accommodation, and which staff were responsible for determining if the one-stop is able to provide the accommodation. In addition, some officials told us that when they have received accommodation requests, they have not maintained records on the types of accommodations requested or whether the one-stop provided these accommodations. However, in at least one of the local areas we visited, the local equal opportunity officer—who addressed all accommodation requests—said that he maintained records on this information. Integrated settings. During our site visits, we found variation in viewpoints regarding the practice of automatically referring persons with disabilities to VR for services. Even though agencies such as VR could provide services to persons with disabilities, the WIA Section 188 regulations require that one-stops allow persons with disabilities the opportunity to receive services in the most integrated setting appropriate to meet their needs. An integrated setting is one that enables persons with disabilities to interact with persons without disabilities. Although a referral to VR may be appropriate for some individuals, automatically referring all persons with disabilities to VR does not allow for the opportunity to receive services along with persons without disabilities. Moreover, an automatic referral to VR does not provide customers with an individualized assessment of their abilities and needs. Some local area and one-stop officials we interviewed acknowledged that automatic referrals to VR did occur in the past. However, a number of officials and staff understood that this practice is not appropriate, or said that it is not currently occurring in their one-stops. 
Some of these officials and staff said that services for persons with disabilities are determined on a case-by-case basis and that unless these individuals want or indicate that they need VR services, they are not referred to VR. Some WIA officials, as well as a few VR officials and others who have provided staff training on disability issues, explained that one-stop staff have been trained not to automatically refer persons with disabilities to VR. For example, staff were trained not to stereotype persons with disabilities or assume that they need VR services, or were trained to provide these customers with a choice regarding which services they use. However, during our site visits, officials in two local areas told us that they currently found it preferable or necessary to automatically refer persons with disabilities to VR. Officials from one local area stated that while a disability-related agency advised them that one-stop staff should not be automatically referring persons with disabilities to VR, they took exception to this guidance. The local area officials explained that it would be irresponsible of them not to fully utilize the expertise of the only mandatory disability partner in the WIA system. Officials from another local area said that although their long-term goal is to train one-stop staff to work directly with persons with disabilities, they believe that their one-stop staff are currently referring these customers to VR. Additionally, some WIA, VR, and disability-related agency officials also expressed concerns that trying to meet performance standards could provide an incentive for one-stops to automatically refer persons with disabilities to VR, only serve those with the least severe disabilities, or not serve them at all. 
Some officials explained that it is sometimes more difficult for persons with disabilities, particularly those with more severe disabilities, to find and retain jobs, and that it is often more costly for the one-stop to serve these individuals. Marketing and outreach. Some officials and staff we interviewed cited a variety of reasons why marketing the one-stops’ services and conducting outreach to persons with disabilities, which are activities required by the WIA Section 188 regulations, may be important. One of the reasons cited was that many individuals in the community, including those with disabilities, were still not aware of the types of programs and services that one-stops offer. For example, a one-stop official said that one-stops are often thought of as an employment service, without recognition that they can offer participants education, referrals to disability-related agencies for services, and other assistance. Some WIA officials and disability-related agency representatives also said that even when the disability community knows what the one-stops offer, the one-stops often have to overcome the belief that one-stops do not want to, or are not capable of, providing services to persons with disabilities. For example, the disability community may believe that the one-stops do not have assistive technology or provide other assistance to persons with disabilities. Additionally, some officials also stated that they believe that persons with disabilities are still more likely to seek services from disability-related organizations than from one-stops. Some of the officials from local areas and one-stops that had engaged in marketing and outreach efforts said they had used one or more community-based disability organizations in their efforts. 
For example, local areas or one-stops sometimes approached independent living centers, agencies that serve individuals with specific types of disabilities, or other organizations to inform them about the one-stops’ services and their accessible technology. Some officials also said they used brochures, television or radio ads, billboards, or other means to market their services to persons with disabilities. Other local area and one-stop officials told us about the specialized techniques they used, such as holding a yearly job fair for persons with disabilities, which provides attendees with information about one-stop services. Officials in a few local areas and one-stops, however, stated that they were hesitant to market their services to persons with disabilities. For example, one local area official was not confident about the ability of some one-stop staff to handle disability issues and, as a result, did not want to market what the one-stops in the area could not provide. An official in another local area expressed a similar viewpoint with regard to the lack of marketing around an assistive technology device that had not been used. The official stated that the local area had not advertised the device because he did not believe the one-stops in that area were fully capable of providing services to persons with disabilities. Staff training. Although the WIA Section 188 regulations do not specifically require that one-stop staff, other than the equal opportunity officer and his or her staff, receive training on disability, the WIA Section 188 Checklist includes training as one example of how one-stops can ensure compliance with WIA’s comprehensive access requirements. One-stop staff in the majority of the local areas we visited had received some disability-related information and training, but the range of topics covered varied across sites. 
For example, officials in at least one local area told us that they were still focusing on providing staff with disability awareness training, while officials, staff, and staff training providers in other locations described a wider range of training topics, such as: disability awareness or sensitivity training; types of services that VR provides, and the agency’s eligibility rules and criteria; types of disability-related agencies in the community, as well as who they serve, the types of services they offer, and their contact information; how to identify certain disabilities, including hidden disabilities such as mental illness or learning disabilities; and WIA Section 188 training. We also found that a few local areas and one-stops created comprehensive training programs or targeted their training to identified staff needs. For example, one local area created an extensive disability training program that provides online and in-class training on a range of relevant disability-related issues and discusses these issues in the context of particular disabilities. This training program has been made available on a statewide basis. Also, in one state, staff at the three one-stops we visited had undergone, or were scheduled to undergo, an assessment of their training needs. These assessments were then going to be used to develop training plans for each of these one-stops. Some officials and staff stated that the available disability-related staff training was beneficial and provided positive outcomes. For example, some officials and staff said that the available training made staff more comfortable interacting with, and providing services to, persons with disabilities and helped them learn about the range of disability-related services that VR and other agencies in the community offer. However, other officials and staff expressed some concerns about the available training. 
For example, a few of these officials and staff said that they would like training on specific disability-related topics to be available, and in at least one case, local area and one-stop officials had concerns about how well their limited training prepared staff for providing services to persons with disabilities. Additionally, some of the officials, staff, and staff training providers we interviewed said that their training efforts were affected by high staff turnover and the prospect of staff forgetting the information learned in training if it is not used very often. Some officials, staff, and staff training providers said that offering ongoing training was important for these reasons or that they would like ongoing training to be available in their one-stops. One-stops, VR, and other disability-related agencies in the community have formed various relationships to provide services to persons with disabilities. From our site visits, we found that the structure of the one-stops’ relationships with VR varied, particularly in terms of whether co-location was occurring. While most of the one-stops we visited had VR staff on-site at least part of the time, four of the sites we visited had no on-site VR staff. Table 4 shows the co-location status of VR staff at the one-stops we visited. Officials from the sites at which full- or part-time co-location of VR staff was taking place said that co-location was beneficial for a variety of reasons. For example, some WIA and VR officials said that co-location itself helped the one-stop staff provide faster and less fragmented services to persons with disabilities because, when the one-stop staff made referrals to VR, they did not have to send customers off-site. A few officials also stated that co-location facilitated information sharing and helped build relationships between the staff in the two agencies. 
The reasons for VR staff not being on-site also varied, and included a lack of space in the one-stop, the inability of VR to break its lease at an existing local office, and lack of an interface between the one-stops’ and VR’s computer systems. The one-stops we visited also varied in terms of the extent to which they formed relationships with disability-related service providers other than VR. Although VR has extensive expertise in providing services to persons with disabilities, other disability agencies in the community also have expertise and resources that can benefit one-stops. At the time of our site visits, a few local areas and one-stops were relying primarily on VR and had not formed working relationships with any other disability agencies. However, other local areas and one-stops we visited had formed relationships with one or more disability-related organizations in the community, such as independent living centers, mental health agencies, and cognitive/developmental disability agencies. In at least one instance, a local area formed relationships with agencies that focus on particular impairments. This local area conducted a needs analysis and found that relationships with organizations that provide services to persons with psychiatric impairments, learning disabilities, and substance abuse issues were lacking. As a result, the local area conducted outreach to these types of organizations in order to initiate relationships with them. Officials from local areas and their one-stops, as well as those from VR and community disability agencies, cited a range of benefits to being able to refer their customers to one another for services, when it was appropriate to do so. For example, some WIA and VR officials said the one-stop’s relationship with VR allowed the two agencies to combine their resources to maximize the services they can provide to their customers. 
For example, for co-enrolled customers, one agency might pay for school tuition while another pays for books. Some local area and one-stop officials also said that referring customers to VR and other community disability agencies is beneficial because those agencies have the ability and funding to provide certain services that the one-stops cannot. In addition, officials in some local areas and one-stops said that VR and other community agencies’ willingness to conduct staff training, provide one-stop accessibility assessments, or participate in one-stop access committees was beneficial. VR and community disability agencies also cited a number of benefits to referring their customers to the one-stops, including access to the one-stops’ career resource centers’ computers and telephones, their workshop or training classes (such as those for computer skills, interview skills, and résumé-writing), and a range of job listings and employer connections broader than their own. VR officials also cited other benefits. For example, when a VR customer is faced with delayed services because VR is waiting for documents substantiating the customer’s disability, the one-stops can provide other services in the interim. Additionally, VR officials told us they find it useful to refer individuals who did not qualify for services through VR, whether because of limited funding or other reasons, to the one-stop for services. Labor has taken several actions to ensure that persons with disabilities have comprehensive access to one-stops, including training, monitoring, and enforcement activities, but these efforts may not be sufficient. For example, Labor has not only funded grants, it has also provided training in ways to facilitate comprehensive accessibility in the one-stop centers. 
Specifically, within Labor, ETA and ODEP, along with SSA, provided Disability Program Navigator training in November 2003 in which successful approaches to ensuring comprehensive access to one-stops were discussed. Additional Disability Program Navigator training was provided in November 2004. Further, CRC, with assistance from ETA and ODEP, has provided written guidance and assistance to one-stops on accommodations and other ways to improve comprehensive access for persons with disabilities. Also within Labor, CRC conducts national equal opportunity training annually. Its August 2004 training included topics such as new EO officer orientation, implementing an MOA, ensuring compliance with WIA Section 188, testing and assessment tools for improving services to persons with disabilities, and train-the-trainer EO training. In addition to providing training, CRC is the entity responsible for interpreting, monitoring, and enforcing WIA Section 188 regulations regarding programs receiving financial assistance from Labor, including the applicable comprehensive access and administrative regulatory requirements for one-stop centers. One key method Labor uses to ensure compliance with these regulations has been to require that each state’s governor establish and sign an MOA, which describes and contains supporting documentation of the policies, procedures, and systems that each state has established to ensure compliance. By signing the MOA and submitting it to CRC, the governor agrees to adhere to its provisions. CRC provides guidance on preparing the MOA, reviews the adequacy of each state MOA submitted, and approves those MOAs that meet its standards. Currently, all governors have submitted MOAs that have been approved by CRC. 
After initial approval, states are to notify CRC of any updates to their MOAs, and every 2 years Labor requires states to review them and the manner in which they have been implemented, and determine whether their MOAs continue to be effective in ensuring compliance with the requirements of WIA Section 188 and its implementing regulations. In addition, CRC monitors states’ compliance with the nondiscrimination, comprehensive access, and administrative regulatory requirements by conducting on-site technical assistance compliance reviews at selected locations. To facilitate the review process, CRC conducts a 2- to 3-day training session for state, local workforce investment area, and one-stop center staff. In 2003, CRC completed its first phase of on-site training, technical assistance, and compliance reviews in two large metropolitan areas in two states, Miami/Dade County, Florida, and New York, New York. According to Labor’s 2003 Annual Report, CRC focuses its reviews on large metropolitan areas so as to maximize the use of its resources. The annual report notes that the large labor markets in these areas provide the opportunity for gaining a representative picture of the degree of compliance with nondiscrimination and equal opportunity laws and regulations. In both metropolitan areas it reviewed, CRC identified instances of noncompliance, including the existence of barriers limiting services to persons with disabilities. At one of the two metropolitan areas, CRC found significant differences between the disability-related requirements in WIA Section 188 and its implementing regulations and the policies, procedures, and systems that were actually being used. For example, CRC found that the local area had developed a service delivery system in which customers with disabilities were routinely being served by programs or activities that were separate from those used to serve customers without disabilities. 
Officials at the local area told CRC that such a service delivery system had developed in part because there was a general sentiment among disability-related service providers that many of their customers did not feel comfortable in the one-stops. The WIA Section 188 regulations, however, require that services to qualified persons with disabilities be provided in the most integrated settings appropriate to the needs of those customers. Therefore, as noted in CRC’s review, a one-stop center generally should not refer customers with disabilities to a separate program or activity until after it has conducted an individualized assessment of a customer’s needs and determined that the channels used to serve customers without disabilities cannot provide equally effective aid, benefits, services, or training to persons with disabilities. In addition, the ultimate decision whether to accept the referral to a separate program or activity must be left up to the customer with a disability. If the customer declines to accept the referral, the one-stop must serve the customer with a disability through the same programs or activities used to serve all other customers. In addition, CRC found that the EO officer at the local area in that metropolitan area had not been provided with sufficient staff, other resources, or adequate support from top management to carry out his duties. As a result, staff at the local workforce investment area and one-stops had little understanding of their disability-related or other obligations under WIA Section 188 regulations. At the other metropolitan area reviewed, CRC found that some of the policies, procedures, and systems in the state’s approved MOA had not been fully implemented. For instance, the local workforce investment area had developed an intake eligibility form for use by the one-stops that included questions concerning whether the customer had a disability and, if so, whether it was a substantial barrier to employment. 
Frontline staff at the one-stop centers told CRC that all customers were welcome to use self-service and core services. However, CRC found that customers who indicated on the intake form that they had a disability could not receive intensive or training services unless they provided the one-stop with documentation to support their disability, even when disability was not an eligibility criterion to receive such services. CRC found that the use of the intake form, combined with the requirement that customers provide documentation of their disability, unnecessarily screened out people with disabilities from receiving intensive and training services, even though Labor’s WIA Section 188 regulations require that the one-stops not deny any qualified person with a disability the opportunity to participate in, or benefit from, a WIA-funded program or activity because of that person’s disability. On the basis of its findings, CRC required the state entities responsible for WIA in the states in which the two metropolitan areas were located to provide it with written responses describing the corrective actions they planned to take. In addition, in May 2004, the CRC Director requested that all states complete, for themselves and their largest local area, a self-assessment tool to assess compliance with the equal opportunity and nondiscrimination laws and regulations. The self-assessment tool, which provides a structured approach for monitoring compliance, was adapted from the WIA Section 188 Disability Checklist. For each state and its largest local workforce investment area, the self-assessment tool asks whether each measure of compliance has been met. For all unmet measures, the self-assessment tool asks for a written explanation of how and when the measure will be met. At the time of our review, CRC was in the process of developing a plan to analyze the qualitative responses it would receive from the states. 
CRC anticipates using the information provided by these self-assessments and from its on-site reviews to identify exemplary practices as well as areas needing improvement. In addition to the two on-site reviews CRC conducted in 2003, CRC is in the process of conducting two additional reviews in two large metropolitan areas in two other states, which it plans to complete during fiscal year 2005. To date, the monitoring and enforcement efforts that have been or are being conducted account for less than 2 percent of the total number of local areas and one-stops nationwide. Moreover, the CRC Director said that she had not yet determined whether CRC would conduct additional on-site reviews. Limited staff and competing work priorities may hinder CRC’s ability to conduct additional reviews. The Director noted that CRC has experienced an erosion in the number of staff since 1998, and she did not foresee any change to this trend in the future. The 44 professional and administrative staff that CRC currently has are responsible not only for all issues involving discrimination in one-stops and other Labor-funded programs but also for all discrimination issues involving the more than 17,000 employees at Labor. Moreover, the Director explained that these staff are also responsible for addressing other workload priorities, such as improving access to programs and activities for persons who are limited in their English proficiency. Information about the employment outcomes of persons with disabilities is limited by the extent to which disability data are collected and the overall methods used for collecting data for WIA’s performance measures. The three WIA-funded programs—Adult, Dislocated Worker, and Youth—have performance measures established under WIA that states must track and report in order to demonstrate the effectiveness of the programs. 
These performance measures gauge program results in such areas as job placement, employment retention, earnings changes, and skill attainment. In addition to providing information about all participants in the three WIA-funded programs, Labor also publishes outcome information about certain subpopulations, including veterans, older individuals, and persons with disabilities. The information Labor publishes on the employment outcomes of persons with disabilities, however, is limited for several reasons. One reason is that the information is limited to the subpopulation of persons with disabilities who disclose their disability status, and therefore the employment outcomes may be misleading for the total population of persons with disabilities receiving services through WIA. The WIA Section 188 regulations require one-stops to collect, maintain, and report job seekers’ demographic data—including disability status—to ensure that discrimination is not occurring. Labor has issued guidance stating that one-stops must inquire about disability status from job seekers upon registration for services. Such inquiries must be asked of all job seekers, but an individual’s decision to disclose his or her disability status must be completely voluntary. Even if an individual declines to indicate his or her disability status, the one-stop must still provide services to the individual. Further, the collection of information on employment outcomes, including the information on persons with disabilities, is limited to those persons who are registered for WIA services. Current law does not require job seekers who receive services that are self-service and informational in nature to be included in the performance measures. 
Labor’s guidance instructs states to register and report on adults and dislocated workers who receive core services that require significant staff assistance designed to help with job seeking or acquiring occupational skills, but states have flexibility in deciding what constitutes significant staff assistance. We have previously reported that one-stop customers who participate in self-directed services and receive only limited staff assistance are estimated to constitute the largest proportion of job seekers under WIA. But because they are not registered for services, they are excluded from the employment outcome data published by Labor. In that report, we also noted that Labor said that it is developing a new reporting system that would enable states to report activity and outcomes for all WIA participants. According to Labor, tracking all one-stop job seekers will enable officials to obtain information about who is served, what services are provided, which partner programs provided the services, and what outcomes are achieved. Finally, the performance measurement system developed under WIA may have a negative effect on the economic outcomes of some people with disabilities because the performance levels may provide a disincentive to serve certain clients, including those with disabilities. Under WIA, performance levels are tied to incentives and sanctions so that states can be financially rewarded if they meet them or penalized if they do not. As such, local areas may be reluctant to provide WIA-funded services to job seekers, including persons with disabilities, who may be less likely than others to find employment or experience an increase in earnings when they are placed in jobs. To address this issue, we recently recommended that the Secretary of Labor develop an adjustment model or other systematic method to account for different populations and local economic conditions when negotiating performance levels. 
In commenting on our recommendation, Labor agreed with the importance of taking economic conditions and characteristics of the population into account when setting performance expectations and had commissioned a study of adjustment models that could better take these differences into account. The WIA one-stop system’s ability to provide comprehensive access to its programs, services, and activities can affect whether, and how, individuals with disabilities participate in the American workforce. Although Labor has developed specific regulations requiring that people with disabilities have equal opportunity to participate in and benefit from the programs and services offered in the WIA one-stop system, its efforts to date may not be sufficient to ensure that result. Five years after Labor issued regulations implementing the nondiscrimination and equal opportunity provisions of WIA Section 188, the agency has yet to develop and implement a long-term plan for ensuring that the one-stop system complies with the comprehensive access requirements for persons with disabilities. Although CRC, ETA, and ODEP have worked together on some comprehensive access projects, they have not developed an overall plan to conduct the activities necessary to ensure comprehensive access to one-stops for all Americans. To improve comprehensive access for persons with disabilities to the one-stop system, we recommend that Labor develop and implement a long-term plan for ensuring that the one-stop system complies with the comprehensive access requirements for people with disabilities. Moreover, in this era of constrained resources, Labor should utilize the expertise of CRC, ETA, and ODEP staff in developing such a plan. We provided a draft of this report to the Departments of Labor and Education for their review and comments. Education did not have comments on our report. 
Labor generally agreed with our recommendation and said that even more could be done to ensure comprehensive access within the one-stop system. Specifically, ETA has pledged to work with ODEP and CRC to develop and implement a long-term plan for addressing comprehensive access in the one-stop system. ETA also suggested that the development of such a long-term plan should include all of the participating agencies and programs. Moreover, ODEP stated that the comprehensive plan should also address nonspecialized disability supports and services, such as transportation. ODEP and CRC also provided us with some general comments on our report. ODEP noted that, in addition to the WIG and Navigator grants, Labor supports other efforts to facilitate the inclusion of people with disabilities in the one-stop system. Although our report focuses on those grants that are most directly related to facilitating comprehensive access in the one-stop system, we have added examples of some of the types of grants that ODEP has awarded to support employment-related initiatives for people with disabilities. In addition, CRC asked us to clarify our use of the term comprehensive access. CRC expressed some concern that we had included administrative requirements in the use of the term comprehensive access. CRC believed that administrative requirements should not be included as they are not specifically disability-related. We have modified the language in our report to clarify that the administrative requirements are not included in the term comprehensive access. ETA, ODEP, and CRC also provided us with technical comments and clarifications, which we have incorporated as appropriate. Copies of their comments appear in appendix I. We are sending copies of this report to the Secretary of Labor, the Secretary of Education, relevant congressional committees, and others who are interested. Copies will also be made available to others upon request. 
The report will be available on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix II. William E. Hutchinson and Caterina Pisciotta made significant contributions to all phases of this report. In addition, Jessica Botsford and Richard Burkard provided legal assistance, and Amy Buck assisted in report development. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003. Workforce Investment Act: Exemplary One-Stops Devised Strategies to Strengthen Services, but Challenges Remain for Reauthorization. GAO-03-884T. Washington, D.C.: June 18, 2003. Workforce Training: Employed Worker Programs Focus on Business Needs, but Revised Performance Measures Could Improve Access for Some Workers. GAO-03-353. Washington, D.C.: February 14, 2003. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Voters with Disabilities: Access to Polling Places and Alternative Voting Methods. GAO-02-107. Washington, D.C.: October 15, 2001. 
Workforce Investment Act: Better Guidance Needed to Address Concerns over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Workforce Investment Act: New Requirements Create Need for More Guidance. GAO-02-94T. Washington, D.C.: October 4, 2001.
The Workforce Investment Act (WIA) of 1998 includes provisions intended to ensure that people with disabilities have equal opportunity to participate in and benefit from the programs and activities offered through one-stop career centers (one-stops). But little is known, and questions have been raised, about how well this system is working for persons with disabilities. This report examines (1) what the Department of Labor (Labor), states, and the one-stops have done to facilitate comprehensive access to the WIA one-stop system; (2) the various relationships that the one-stops have established with disability-related agencies to provide services to persons with disabilities; (3) what Labor has done to ensure that the one-stops are meeting the comprehensive access requirements, and the factors that have affected efforts to ensure compliance; and (4) what is known about the employment outcomes of persons with disabilities who use the one-stop system. Labor has awarded grants to facilitate comprehensive access, which is defined in this report as providing people with disabilities the equal opportunity to participate in and benefit from the programs, activities, and/or employment offered by the WIA one-stop system. States and local areas have used these grants for a range of efforts, including increasing staff capacity to provide services to persons with disabilities. During our site visits to 18 local areas and one-stops, we found that officials at most sites were working to implement architectural access requirements. Moreover, local areas and one-stops varied in the degree to which they had addressed other areas of comprehensive access. For example, a few sites had only begun to acquire assistive technology devices; other sites had assistive technology and had trained some or all of their staff in how to use it. One-stops have established various relationships to provide services to persons with disabilities. 
The structure of the one-stops' relationships with state vocational rehabilitation (VR) programs varied, as did the extent to which they have formed relationships with disability-related service providers other than VR. A few local areas and one-stops primarily formed relationships with VR, while others had also formed relationships with community-based disability organizations. Although Labor has taken several actions to ensure comprehensive access to one-stops, these efforts may not be sufficient. Labor's Employment and Training Administration (ETA), Civil Rights Center (CRC), and Office of Disability Employment Policy (ODEP) have issued guidance and assistance on the regulatory requirements. CRC also has conducted on-site reviews at local areas and one-stops in two large metropolitan areas in two states. In both areas, CRC identified instances of noncompliance with these requirements. Reviews in two other states will be completed during fiscal year 2005, but Labor has not developed a long-range plan for how it will carry out its oversight and enforcement responsibilities beyond 2005. To date, CRC's monitoring and enforcement efforts account for less than 2 percent of the total number of local areas and one-stops nationwide. The CRC Director stated that she had not yet determined whether CRC would conduct additional on-site reviews. The information that Labor publishes on employment outcomes for people with disabilities is limited for a variety of reasons. Disclosure of disability status is voluntary; thus, the information about employment outcomes may be misleading. The collection of information on the employment outcomes of WIA participants is limited to those who are registered for services, and one-stops are not required to register customers who participate in self-service or informational activities. 
The performance measurement system may result in customers being denied services because local areas may be reluctant to provide WIA-funded services to job seekers who may be less likely to find employment.
Private banking has been broadly defined as financial and related services provided to wealthy clients. Such products and services may include deposit-taking, lending, mutual funds investing, personal trust and estate administration, funds transfer services, and establishing payable through accounts or offshore trusts. For purposes of this review, we defined offshore private banking as including (1) private banking activities carried out by domestic and foreign banks operating in the United States that involve financial secrecy jurisdictions, including the establishment of accounts for offshore entities, such as private investment companies (PICs) and offshore trusts; and (2) private banking activities conducted by foreign branches of U.S. banks located in these jurisdictions. Offshore entities that maintain private banking accounts provide customers with a high degree of confidentiality and anonymity while offering such other benefits as tax advantages, limited legal liability, and ease of transfer. Sometimes documentation identifying the beneficial owners of offshore entities and their U.S. private banking accounts is maintained in the offshore jurisdiction rather than in the United States. Although banking regulators believe that offshore private banking activities are generally used for legitimate reasons, there is some concern that they may also serve to camouflage money laundering and other illegal acts. The government’s reliance on financial institutions as the first line of defense against money laundering activities has increased with the adoption of enhanced suspicious activity reporting rules for banks issued jointly by Treasury’s Financial Crimes Enforcement Network and the federal depository institution regulators. 
The revised rules, which became effective April 1, 1996, require a bank to file a suspicious activity report pertaining to money laundering when a transaction at or above $5,000 (1) involves funds derived from illegal activities or efforts to disguise the nature of such funds, (2) is intended to evade the Bank Secrecy Act (BSA) requirements, or (3) is not a normally expected transaction for a particular customer and appears to have no lawful business purpose. Federal banking regulators consider “know your customer” (KYC) policies one of the most important components of an institution’s measures for understanding with whom it is doing business, recognizing unusual transactions, and detecting illegal or suspicious activities. These policies are intended to enable the institution to identify account owners and to recognize the kinds of transactions that a particular customer is likely to engage in. Although such policies are not currently required by regulation or statute, federal banking regulators expect institutions to incorporate KYC policies in their operations, and they have developed examination procedures for determining whether institutions have implemented such policies and related procedures. Federal banking regulators are currently in the process of developing a joint regulation and accompanying guidance intended to formally require banks to establish KYC policies. The Federal Reserve, the Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corporation are responsible for reviewing banks’ anti-money-laundering efforts, including their KYC policies and procedures. The Federal Reserve and OCC have primary responsibility for examining and supervising the overseas branches of U.S. 
banks to ascertain the adequacy of their anti-money-laundering efforts. During the past 2 years, the Federal Reserve and OCC have focused attention on banks’ private banking activities in an attempt to ensure that they are not used for money laundering and are not a potential source of reputational or legal risk to banks. In 1996, the Federal Reserve Bank of New York (FRBNY) undertook a focused review of banks’ private banking activities in its district that included coverage of related offshore activities as well as a review of banks’ anti-money-laundering programs and KYC policies. This initiative reflected a heightened supervisory interest in the area arising from the growing market for private banking, banks’ increased reliance on private banking as a source of income, and a related increase in competition. Because of the concentration of private banking activities in the New York district and its focused efforts in the area, FRBNY assumed a key role within the Federal Reserve for the oversight of private banking activities. FRBNY, on behalf of the Federal Reserve Board, recently issued a paper on sound practices for private banking activities. At the time of our review, the Federal Reserve Board was also in the process of issuing a private banking examination manual and coordinating training for Federal Reserve examiners in the area of private banking. To determine how regulators oversee offshore private banking activities, we reviewed BSA examination manuals and other agency documents pertaining to the oversight of private banking and offshore banking activities. We also reviewed information on examination methodology in examination reports and, in a few cases, supporting workpapers. We spoke with FRBNY examiners; Federal Reserve Bank of Atlanta examiners; and OCC examiners in California, New York, and North Carolina to discuss specific monitoring practices related to banks engaged in offshore private banking activities. 
To identify deficiencies related to offshore private banking activities and corrective actions taken by banks, we reviewed 35 examination reports for 21 banks included in FRBNY’s private banking initiative. We also reviewed 21 OCC examination reports for 6 banks identified as actively involved in offshore private banking activities. The banks reviewed do not represent all banks that may be involved in offshore private banking activities. They are a subset of banks with a significant level of offshore assets in certain jurisdictions identified to be particularly susceptible to money laundering. (See app. I for more information on the methodology we used to identify banks actively involved in offshore private banking.) For the most part, examinations reviewed were conducted during 1996 and 1997. We interviewed FRBNY and OCC examiners to determine the extent to which general private banking deficiencies identified during examinations applied to the banks’ offshore private banking activities or to obtain their perspectives on the adequacy of corrective actions taken by banks. In addition, we followed up with selected banks to obtain an update on the status of corrective actions that were planned or in process during the last examination. To identify barriers associated with overseeing offshore private banking activities, we interviewed federal banking regulators and officials from the Financial Action Task Force (FATF), the Caribbean Financial Action Task Force (CFATF), the Basle Committee on Banking Supervision, and the Offshore Group of Banking Supervisors. We also interviewed officials of the Central Bank of the Bahamas, Cayman Islands Monetary Authority, and International Monetary Fund. In addition, we reviewed reports issued by FATF, CFATF, and the Offshore Group of Banking Supervisors. We also conducted literature searches on the laws of nine offshore jurisdictions selected for review and on their KYC policies and policies for reporting suspicious activity. 
The nine jurisdictions are the Bahamas, Bahrain, Cayman Islands, Channel Islands, Hong Kong, Luxembourg, Panama, Singapore, and Switzerland. (See app. I for more information on how we selected the nine offshore jurisdictions for review.) The information on foreign laws and policies in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. To obtain industry views about regulatory access to beneficial owner documentation, we conducted a survey of 15 banks examined by FRBNY during its recent private banking initiative. We inquired about actions the banks had taken or planned to take to comply with FRBNY’s request for access to beneficial owner documentation, the impact of this request on the banks’ private banking business, and the potential impact on them if regulatory access to beneficial owner documentation became a requirement. Our work was done primarily in New York, NY; San Francisco, CA; and Washington, D.C., between December 1997 and April 1998 in accordance with generally accepted government auditing standards. Federal banking regulators may review banks’ efforts to prevent or detect money laundering in their offshore private banking activities during overall compliance or BSA examinations; safety and soundness examinations; or, more recently, during targeted examinations of their private banking activities. Regulatory oversight of banks’ anti-money-laundering efforts during these examinations reflects an attempt to assess the commitment of senior bank management to combatting money laundering while focusing on bank programs for complying with BSA, corporate KYC policies, and internal controls. In the course of these examinations, examiners are to ensure that banks’ anti-money-laundering programs identify high-risk activities, businesses, and transactions associated with foreign countries viewed to be particularly susceptible to money laundering. 
OCC’s BSA Manual, for example, cites transactions involving private banking and those involving offshore secrecy jurisdictions as warranting particular attention during examinations. The Federal Reserve also identifies private banking activities, including the establishment of offshore shell companies, as warranting supervisory attention; and it provides specific guidance to banks on sound practices for documenting and exercising due diligence in their conduct of such private banking activities. During examinations, examiners are also tasked with ensuring that banks’ compliance programs and KYC policies extend to their private banking activities, including those that involve offshore jurisdictions. Recognizing that offshore entities, such as PICs, that maintain U.S. private banking accounts tend to obscure account holders’ true identities, examiners are to look for specific KYC procedures that enable banks to identify and profile the beneficial owners of these offshore entities and their private banking accounts. In the course of examinations, examiners may test the adequacy of beneficial owner documentation maintained in the United States. However, with the recent exception of FRBNY, we found no evidence that examiners have attempted to examine the documentation that banks maintain in offshore secrecy jurisdictions. Examiners we contacted expressed varying views about accessing such documentation for examination purposes. Some examiners said that they do not see a need to examine documents maintained offshore if they are confident about the bank’s commitment to combatting money laundering and to exercising due diligence when establishing offshore entities through private banking accounts. A few examiners expressed reservations about their ability to compel banks, without the leverage of a KYC regulation, to obtain documents maintained in offshore jurisdictions. 
Others were uncertain about whether a request to export documents from certain offshore jurisdictions could violate their secrecy laws. During examinations conducted under its private banking initiative, FRBNY took a different supervisory approach that involved examiners seeking to review beneficial owner documentation regardless of where it was maintained. Because this was the Federal Reserve’s first focused review of private banking activities, verifying whether banks had the ability to identify and profile the beneficial owners of offshore entities that maintained U.S. private banking accounts was viewed as particularly important, according to officials. A senior FRBNY examiner explained that seeking out beneficial owner documentation was also a way to encourage banks to develop or improve their systems for maintaining appropriately detailed information on the beneficial owners of offshore entities that maintain U.S. accounts. Offshore branches are extensions of U.S. banks and are subject to supervision by host countries as well as U.S. regulators. However, they are generally not subject to the BSA and, therefore, U.S. banking regulators do not attempt to determine whether offshore branches are in compliance with this U.S. anti-money-laundering law. Instead, U.S. banking regulators attempt to identify the branches’ anti-money-laundering efforts and to determine whether the banks’ corporate KYC policies are being applied to activities, such as private banking activities, that these U.S. offshore branches may engage in. Although examiners are able to review the written policies and procedures being used in these branches, they must rely primarily on the banks’ internal audit functions to verify that the procedures are actually being implemented in offshore branches where U.S. regulators may be precluded from conducting on-site examinations or have restricted access to individual customer information. 
They may also rely on external audits, but they are apparently less prone to do so, because external audits tend to focus on financial rather than KYC issues. In our review of 56 examinations, we noted only 1 instance in which examiners relied on the work of an external auditor for a review of KYC procedures at a bank’s offshore branches. Regardless of whether examiners rely on internal or external audits, officials explained they can bring any significant or recurring problems identified in an offshore branch’s anti-money-laundering efforts to the attention of the bank’s board of directors for corrective action. Our review of 1996 and 1997 examinations conducted under the FRBNY’s private banking initiative found that the most prevalent deficiency related to offshore private banking activities was a lack of documentation on the beneficial owners of PICs and other offshore entities that maintained U.S. accounts. Our review of FRBNY and OCC examinations and discussions with examiners indicated that some deficiencies they identified that were related to private banking in general, such as inadequate client profiles and weak management information systems, also pertained to offshore private banking activities. We found that banks had started to take corrective actions to address the deficiencies, but improvements were still needed. We found that 9 of the 21 banks whose FRBNY examinations we reviewed were identified by examiners as lacking information on the beneficial owners of PICs and other offshore entities that maintained U.S. accounts. FRBNY identified this deficiency at seven foreign banks and two domestic banks. Although there is no current regulation mandating that banks retain information on the beneficial owners of these offshore entities in the United States, maintaining such information in clients’ U.S. files or having the ability to bring it on-shore in a reasonable amount of time promotes sound private banking practices, according to the Federal Reserve. 
We also found in our review of FRBNY and OCC examinations that examiners identified two U.S. banks with inadequate KYC policies at their offshore locations. Examiners found that one of the banks had insufficient KYC documentation and had not fully implemented a transaction monitoring process at its Switzerland branch. At the other bank, examiners noted inconsistencies between the local KYC policy in its Switzerland branch and the bank’s corporate policy. For example, the examiners noted that the local KYC policy did not address requirements for obtaining references or maintaining a documentation tracking system. FRBNY and OCC examination reports and our discussions with examiners indicated that some deficiencies relating to private banking in general, such as inadequate client profiles, were also applicable to banks’ offshore private banking activities. Examiners found that client profiles contained little or no documentation on the client’s background, source of wealth, expected account activity, and client contacts and visits by bank representatives. Regulators specify that adequate client profiles are a key component of a sound KYC policy because they enable the bank to more effectively monitor for unusual or suspicious transactions. Another general private banking deficiency pertaining to offshore private banking activities identified by examiners was weak management information systems. Examiners found that some banks’ management information systems did not track client activity or aggregate related client accounts. Regulatory KYC guidelines emphasize the importance of a sound management information system that can enable banks to track clients’ account activity and identify unusual or suspicious activity. FRBNY’s private banking initiative established guidelines to be used during examinations to monitor banks’ progress in implementing corrective actions, and OCC’s BSA examination guidelines also provide for the monitoring of corrective actions. 
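The account-tracking and aggregation functions that regulators expect of a sound management information system can be sketched in a few lines. This is purely an illustration of the concept described above, not any bank's actual system; the client data and the simple "twice the expected level" flagging rule are our own assumptions:

```python
# Hedged sketch of the monitoring functions regulators describe:
# aggregate related client accounts and flag activity that departs
# from the expected level recorded in the client profile.
# All data and the 2x flagging rule are illustrative assumptions.
from collections import defaultdict

# Hypothetical accounts: (beneficial owner, monthly activity in dollars).
accounts = [
    ("Client X", 40_000), ("Client X", 75_000),  # two related accounts
    ("Client Y", 20_000),
]

# Expected monthly activity drawn from each client's profile (hypothetical).
expected = {"Client X": 50_000, "Client Y": 25_000}

# Aggregate related accounts by beneficial owner.
totals = defaultdict(int)
for owner, amount in accounts:
    totals[owner] += amount

# Flag owners whose aggregated activity exceeds twice the expected level.
flagged = sorted(o for o, t in totals.items() if t > 2 * expected[o])
print(flagged)  # Client X's combined activity (115,000) exceeds 100,000
```

The point of the aggregation step is the one examiners raised: an account reviewed in isolation (each of Client X's accounts alone) stays under the threshold, while the combined activity does not.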
Our review of FRBNY’s and OCC’s 1996 and 1997 examinations and our discussions with examiners and bank officials indicated that banks had started to take corrective actions to address deficiencies related to offshore private banking activities, but further improvements were needed. We noted that most banks were in the process of resolving the problem of a lack of documentation on the beneficial owners of PICs and other offshore entities that maintained U.S. accounts. Seven of the nine banks that did not have information on the beneficial owners of these offshore entities in their clients’ U.S. files were attempting to resolve the problem, with most either asking clients to sign confidentiality waivers or reconstructing information on the beneficial owners from documentation already in their U.S. offices. Of the remaining two banks, one provided examiners with the identity of the beneficial owners of several PICs that maintained accounts with the bank. The other bank, which offered services to offshore mutual funds, provided examiners with documentation certifying that the administrator of these funds had applied KYC policies to the shareholders (i.e., beneficial owners) of the funds. The two banks with inadequate KYC policies at their offshore locations were at different stages of correcting the deficiency. One of the banks had made changes to its KYC policies for its Switzerland branch to make them consistent with its corporate policies. The other bank had developed a corporate KYC policy and dedicated resources towards bringing its KYC policies at the Switzerland branch into compliance with its corporate policy, but both regulators and bank officials we spoke with indicated that greater progress was needed. Regulators told us that they were going to continue monitoring the situation. Most of the banks with inadequate client profiles were making progress on improving these profiles, but some shortfalls remained. 
Some of the banks were developing strategies to improve the documentation on their client profiles. For example, a few banks prioritized the process for updating their client base by focusing first on high-risk accounts such as those associated with PICs. We found that despite these efforts, regulators noted that some banks’ client profiles were still inadequate, and other banks were not updating their clients’ profiles in a timely manner. One bank official we spoke with explained that updating thousands of client profiles was much more time intensive than the bank had initially anticipated. We noted that most of the banks identified by examiners as having weak management information systems were either reviewing their systems or in the process of installing software to monitor unusual or suspicious transactions. Some bank officials and examiners indicated that regardless of what changes were being made to their systems, banks would continue to be unable to aggregate international accounts because some secrecy laws prohibit them from doing so unless clients sign confidentiality waivers. Bank secrecy laws of offshore jurisdictions represent significant barriers to U.S. regulators’ efforts to oversee offshore private banking activities. These secrecy laws, which are intended to preserve the privacy of individual bank customers, restrict U.S. regulators from accessing information on customers and their accounts and often prohibit regulators from conducting on-site examinations at U.S. bank branches in offshore jurisdictions. In some offshore jurisdictions, a bank employee found to have violated secrecy laws may be subject to criminal penalties, including imprisonment. Our review of nine offshore jurisdictions found some limitations that hindered U.S. and other foreign banking regulators’ access to bank information. 
Secrecy laws to protect the privacy of individual accounts were in effect in all nine jurisdictions, and five of them impose criminal penalties on bank employees found to be in violation of the law (see table 1). None of the nine jurisdictions typically provide foreign regulators with access to individual bank account information, and only two (Hong Kong and Singapore) have allowed U.S. regulators to conduct on-site examinations of banking institutions in their jurisdictions. Examinations in Singapore were limited to a review of bank policies and general operations. The jurisdiction did not allow examiners to access individual bank account or customer information. Another jurisdiction, the Cayman Islands, has not permitted foreign regulators to conduct on-site examinations of bank branches located within its borders in the past, but a Cayman Islands official told us that U.S. and other foreign regulators would be allowed into the Cayman Islands to assess the safety and soundness of branches of banks under the regulators’ supervision. The official emphasized, however, that foreign regulators would continue to be prohibited from looking at documents or files containing individual customer information. Seven of the nine jurisdictions reviewed provide for an exception to their secrecy laws when criminal investigations are involved. In such cases, officials of offshore jurisdictions explained that they have established judicial processes in their jurisdictions through which U.S. and other foreign law enforcement officials may obtain access to individual bank account or customer information. U.S. banking regulators are attempting to work around barriers related to offshore secrecy laws, but they remain hampered by limitations associated with these efforts. For example, in jurisdictions where they have been precluded from conducting on-site examinations, U.S. 
regulators rely primarily on banks’ internal audits to determine how well KYC policies and procedures are being applied to offshore branches of U.S. banks. In our review of examination reports, however, we found several instances in which examiners noted that the bank’s internal audit inadequately covered KYC issues pertaining to its private banking activities. At one major bank, we also observed that recurring deficiencies in KYC documentation, monitoring, and training identified by internal audits of the bank’s key private banking offshore branch were allowed to go unattended for several years. An examiner explained that this particular bank, which was undergoing major changes in its private banking operations, was in the process of correcting weaknesses identified by regulators, including branch management’s lack of responsiveness to identified internal audit deficiencies. Another difficulty impeding regulators’ attempts to rely on internal audits for overseeing offshore branches stems from U.S. regulators’ inability to review banks’ internal audit workpapers in some offshore jurisdictions that require the retention of such workpapers in the jurisdiction. Examiners explained that without access to supporting audit workpapers, it is difficult to verify that audit programs were followed and to assess the general quality of internal audits of offshore branches. One examiner added that without direct access to either bank documents or internal audit workpapers, it is difficult to explain to bank management the basis for regulatory concerns about particular activities conducted in their offshore branches. Other, more recent attempts by U.S. regulators to work around barriers related to offshore secrecy laws also have encountered limitations. For example, FRBNY’s previously discussed recent efforts to review beneficial owner documents represented an attempt to oversee private banking accounts maintained by banks operating in the United States for offshore entities. 
These efforts could not cover similar accounts or other private banking activities conducted on behalf of customers who deal directly with offshore branches of U.S. banks that are considered to be outside the purview of U.S. regulators. Finally, during 1998, U.S. regulators visited U.S. bank branches located in Hong Kong and Uruguay, also viewed as a financial secrecy jurisdiction. Although examiners were given full access to information requested in Hong Kong, this was not the case in Uruguay. An examiner explained that although U.S. examiners were not given complete access to account documentation in Uruguay, they were able to review the branches’ local KYC policies and related quality assurance reviews to help determine the extent of their anti-money-laundering efforts. We found that all nine offshore jurisdictions selected for review were engaged in some type of anti-money-laundering activities. Their activities ranged from participating in international task forces aimed at combatting money laundering to requiring their financial institutions to report suspicious activities. The efforts of individual jurisdictions may contribute to the international fight against money laundering. However, it remains uncertain what impact these efforts may have on how the offshore jurisdictions’ own banking sectors operate or on the extent to which their secrecy laws will continue to represent barriers to U.S. and other foreign regulators. All nine of the offshore jurisdictions reviewed are members of either the Basle Committee on Banking Supervision or the Offshore Group of Banking Supervisors (see table 2). Both of these international supervisory groups place special emphasis on the on-site monitoring of banks to ensure, for example, that they have effective KYC policies. Seven of the nine offshore jurisdictions reviewed are also members of either FATF or CFATF, international task forces created to develop and promote anti-money-laundering policies. 
Both of these task forces have agreed on recommendations that establish a basic framework for anti-money-laundering efforts in individual countries, including standard measures intended to increase the due diligence of financial institutions. For example, one of the recommendations adopted by the two task forces advocates that financial institutions be required to report suspicious activity to competent authorities. Membership in such organizations implies that the jurisdiction intends to work towards the organization’s principles and recommendations, including those related to financial institutions, such as establishing KYC policies and policies to report suspicious transactions. Membership, however, does not necessarily mean that these principles and recommendations are being adequately followed by the financial institutions or monitored by the jurisdiction’s government authorities. We found that eight of the nine offshore jurisdictions selected for review required banks to report suspicious transactions to their supervisory authorities (see table 3). However, according to CFATF officials, only a few of its members have an established authority that is capable of monitoring and acting on such reports. We also noted that eight of the nine offshore jurisdictions had established some form of KYC policies or guidelines for banks operating in their jurisdictions, but the extent to which such policies are actually being implemented and enforced in these jurisdictions has yet to be determined. Mutual evaluations periodically conducted by FATF or CFATF represent one indication of how well the organizations’ recommendations are being addressed by individual jurisdictions. All seven offshore jurisdictions that are members of FATF or CFATF have been assessed through a mutual evaluation (see table 3). Four jurisdictions were evaluated by FATF and three by CFATF. 
According to summaries of these evaluations, the Cayman Islands, Luxembourg, and Switzerland were viewed as having adequately addressed applicable recommendations, but Hong Kong and Singapore were noted as still in the process of implementing recommendations. The summary for Hong Kong identified some gaps in the jurisdiction’s legislative framework for combatting money laundering. The summary for Singapore indicated that although it had begun to address most of the recommendations, the extent to which they would be implemented was still uncertain. The mutual evaluations for Panama and the Bahamas had not been formally summarized at the time of our review. Officials from 15 banks we surveyed expressed a number of concerns over FRBNY’s request that they provide its examiners with documentation on the beneficial owners of PICs and other offshore entities that maintain U.S. accounts. One of their most prevalent concerns related to perceived inconsistencies within and among regulators regarding requests for access to beneficial owner documentation, which was of concern to 10 of the 15 bank officials. Some officials observed that only banks supervised by FRBNY were asked to provide access to this documentation, but banks supervised by the Federal Reserve in Atlanta and OCC were not. Officials from 9 of the 15 banks also expressed concerns over compromising their clients’ confidentiality. They indicated that providing FRBNY with access to documentation on the beneficial owners of PICs and other offshore entities that maintain U.S. accounts would likely displease their clients, who typically regard confidentiality as a valuable means of ensuring that their banking information is inaccessible to their home governments or to litigants filing lawsuits. Officials from 6 of the 15 banks were also concerned that if they complied with FRBNY’s request, their banks could be held liable for breaching confidentiality in the offshore jurisdictions. 
Another concern, which was expressed by officials from 7 of the 15 banks, involved a potential loss of business because of the “uneven playing field.” They believed beneficial owner documentation requirements created an additional burden for banks compared to other financial institutions, including securities broker/dealers, that are engaged in private banking activities, such as managing and maintaining accounts for PICs. Although these firms are engaged in private banking activities similar to those offered by banks, they are not yet subject to regulations requiring the reporting of suspicious transactions. Bank officials also expressed concerns over the effect pending KYC regulations might have on regulatory access to beneficial owner documentation. We sought the views of bank officials on two possible approaches to regulatory access to such documentation. The first approach would be for banks to routinely retain records in the United States on the beneficial owners of offshore entities that maintain U.S. private banking accounts. The second approach would be for banks to bring records on the beneficial owners of these offshore entities into the United States only if requested during an examination. We found that both approaches caused a similar level of concern, with some bank officials stating that the bank would need to make the same changes to how it maintains documentation on the beneficial owners of offshore entities in either case. Bank officials believed that for the most part, under both approaches, their banks would be at a competitive disadvantage with other financial institutions (e.g., securities broker/dealers, foreign banks) not subject to the same requirement. To a great extent, bank officials also said that either of these approaches would cause them to lose the business of foreign clients. See appendix II for the banks’ views on these two approaches. 
In spite of their concerns, officials from 11 of the 15 banks surveyed indicated that their banks had changed the way they maintain documentation on the beneficial owners of offshore entities that have U.S. accounts. Officials from the remaining four banks indicated that their banks already had such documentation in their U.S. files as a matter of bank policy. We found that 6 of the 11 banks that changed the way they maintain beneficial owner documentation were in the process of obtaining confidentiality waivers from their clients who were the beneficial owners of PICs and other offshore entities. Officials from the remaining five banks indicated that their banks could reconstruct information on the beneficial owners of these offshore entities from information they already maintain in their U.S. files. We found that of the 11 banks that changed the way they maintain documentation on the beneficial owners of offshore entities that have U.S. accounts, 9 were unable to provide us with specific information on the impact these changes have had on their private banking business. Officials from five of the nine banks indicated that it was too early to determine the impact because they had only recently begun this process. Officials from two banks were able to provide us with some preliminary information. In 1 case, the bank requested confidentiality waivers from 16 clients and reported that 15 of the clients agreed to sign waivers. The single client who refused to sign a waiver reportedly closed his account. In another case, the bank asked 116 clients to sign confidentiality waivers. In this case, 31 of the 116 clients, or 27 percent, did not sign the waivers. Twenty-six of these 31 clients transferred their accounts to the bank’s offshore affiliates; the other 5 clients closed their accounts, according to the bank officials we surveyed. The Federal Reserve and OCC provided written comments on a draft of this report. (See apps. III and IV). 
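The confidentiality-waiver figures reported above can be cross-checked with a short computation. This is only an arithmetic check of the numbers cited in the report; the variable names are ours:

```python
# Cross-check of the reported waiver-response figures
# (numbers are those cited in the report; names are illustrative).
clients_asked = 116
declined = 31
transferred_offshore = 26
closed_accounts = 5

# Share of clients who did not sign waivers, rounded as in the report.
pct_declined = round(100 * declined / clients_asked)
print(pct_declined)  # 27, i.e., the 27 percent reported

# The declined group splits into transfers to offshore affiliates
# and closed accounts.
assert transferred_offshore + closed_accounts == declined
```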
Both agencies generally agreed with our analysis and observations on the oversight of private banking activities involving offshore jurisdictions. We also obtained oral comments of a technical nature from the Federal Reserve and OCC that have been incorporated in the report where appropriate. As agreed with your office, unless you announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Ranking Minority Member of your Subcommittee and to the Chairmen and Ranking Minority Members of other interested congressional committees, the Chairman of the Federal Reserve Board, the Comptroller of the Currency, and the Chairman of the Federal Deposit Insurance Corporation. We will also make copies available to others on request. Major contributors to this report are listed in appendix V. Please call me on (202) 512-8678 if you or your staff have any questions about the report. We found in our prior work on private banking that there was no comprehensive database on the extent of private banking activities, let alone offshore private banking activities, by banks or other financial institutions operating in the United States. We also found that the most recent information identified on private banking in the United States was a general overview of the area, which did not consistently identify the providers that were engaged in international, specifically offshore, activities. Given this constraint, we attempted to identify banks that were actively involved in offshore private banking activities by first identifying banks with large amounts of assets in selected offshore jurisdictions, then determining through input from regulators if these banks were engaged in offshore private banking activities involving these jurisdictions. Our methodology is described in greater detail below. 
We identified 16 banks that were actively involved in offshore private banking activities. We supplemented this group of banks with information from the Federal Reserve Bank of New York (FRBNY) on banks in its district involved in private banking activities. This FRBNY information helped us identify an additional nine banks actively involved in offshore private banking activities. In total, we identified 25 banks for our review. It should be noted that these banks do not represent all banks that may be involved in offshore private banking activities, only a subset of banks with a significant level of offshore assets in certain jurisdictions identified to be particularly susceptible to money laundering. We applied the following steps to identify banks actively involved in offshore private banking activities. Step 1: Identified offshore jurisdictions that represent areas particularly susceptible to money laundering. We identified 17 offshore jurisdictions that were viewed as financial secrecy havens and particularly susceptible to money laundering. We identified these jurisdictions using information from the Internal Revenue Service, the Department of State, and the Economist Intelligence Unit of the United Kingdom. Step 2: Identified which of the 17 offshore jurisdictions had a “significant” amount of assets managed or controlled by banks operating in the United States. We identified nine jurisdictions—the Bahamas, Bahrain, the Cayman Islands, the Channel Islands, Hong Kong, Luxembourg, Panama, Singapore, and Switzerland—that had a significant amount of assets managed or controlled by banks operating in the United States. We identified these jurisdictions on the basis of a minimum threshold of $1 billion in total U.S. bank branch or subsidiary assets. Our source of asset information was a report generated by the Federal Reserve on foreign branches and subsidiaries of U.S. banks. 
Step 3: Identified banks with a significant amount of assets in one or more of the nine offshore jurisdictions identified in step 2. We used two thresholds, one for domestic banks and the other for foreign banks, to determine which banks had a significant amount of assets in any of the nine offshore jurisdictions selected for review. For domestic banks we identified 29 banks that met a minimum threshold of $1 billion. For the foreign banks we identified nine banks that met a minimum threshold of $10 billion. Our key sources of information were reports generated by the Federal Reserve on foreign branches and subsidiaries of U.S. banks and on non-U.S. branches that are managed or controlled by a U.S. branch or agency of a foreign (non-U.S.) bank. Step 4: Determined if banks identified in step 3 were engaged in offshore private banking activities involving any of the nine offshore jurisdictions. We asked banking regulators to verify whether the banks identified in step 3 were actively involved in offshore private banking activities. They identified 16 banks as actively involved in offshore private banking activities. Ten of these banks were supervised by the Federal Reserve, and the remaining 6 were supervised by the Office of the Comptroller of the Currency. As part of our survey of 15 banks that had been examined by FRBNY during its private banking initiative, we sought the views of bank officials on two approaches that were being considered by the Federal Reserve to regulatory access to documentation on the beneficial owners of PICs and other offshore entities that maintain U.S. accounts. The first approach would be for banks to routinely retain records in the United States on the beneficial owners of offshore entities that maintain U.S. private banking accounts. The second approach would be for banks to bring records on the beneficial owners of these offshore entities into the United States only if requested during an examination. 
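The four-step screening methodology described above amounts to a threshold filter over per-bank offshore asset data. The sketch below is a minimal illustration of that filter; all bank records in it are hypothetical, and only the dollar thresholds ($1 billion for jurisdictions and domestic banks, $10 billion for foreign banks) and the nine jurisdictions come from the report:

```python
# Illustrative sketch of the report's bank-screening steps.
# Bank records are hypothetical; only the thresholds and the
# nine selected jurisdictions come from the report.
BILLION = 1_000_000_000

banks = [
    {"name": "Bank A", "domestic": True,
     "offshore_assets": {"Cayman Islands": 3 * BILLION}},
    {"name": "Bank B", "domestic": False,
     "offshore_assets": {"Switzerland": 12 * BILLION}},
    {"name": "Bank C", "domestic": True,
     "offshore_assets": {"Bahamas": int(0.4 * BILLION)}},
]

# Step 2 output: jurisdictions that cleared the $1 billion screen.
selected_jurisdictions = {
    "Bahamas", "Bahrain", "Cayman Islands", "Channel Islands",
    "Hong Kong", "Luxembourg", "Panama", "Singapore", "Switzerland",
}

def significant(bank):
    """Step 3: apply the domestic ($1B) or foreign ($10B) threshold
    to the bank's assets in the selected jurisdictions."""
    total = sum(v for j, v in bank["offshore_assets"].items()
                if j in selected_jurisdictions)
    threshold = BILLION if bank["domestic"] else 10 * BILLION
    return total >= threshold

candidates = [b["name"] for b in banks if significant(b)]
print(candidates)  # Bank A and Bank B pass; Bank C falls below $1B
# Step 4 (regulator verification that the bank actually engages in
# offshore private banking) is a manual check and is not modeled here.
```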
We found that bank officials had a similar level of concern with both approaches, with some officials stating that the bank would need to make the same changes to how it maintains documentation on the beneficial owners of these offshore entities under either approach. Below are the questions from our survey that we used to solicit the views of bank officials on the two approaches to regulatory access to beneficial owner documentation. The tables show the number of bank officials who responded in a given category. Bank officials did not consistently provide their input on all of the categories; therefore, the responses in each row do not always add up to 15, the total number of banks surveyed.

“If your bank were to routinely maintain records on the beneficial owners of offshore accounts in the United States for regulatory oversight purposes, how likely or unlikely would the following occur? (Please check one box in each row.)”

- Change the way you do business (e.g., ask clients to sign confidentiality waivers up-front)
- Lose business to other banks and/or financial institutions (e.g., brokerage houses) that do not have this requirement
- Lose business primarily of foreign clients who value their confidentiality
- Transfer accounts of foreign clients to offshore affiliates
- Change your approach to the private banking business (e.g., reduce the size or eliminate the bank’s private banking business in the United States)
- Other changes: Trust company in offshore location would have to counsel its clients to direct their assets elsewhere (i.e., outside of the United States)
- Other changes: Increase in civil litigation against clients because information will be readily available

“Alternatively, if your bank were to bring records on the beneficial owners of offshore accounts (e.g., PICs, trusts, offshore mutual funds) into the United States only upon request during an examination, how likely or unlikely would the following occur?
(Please check one box in each row.)”

- Change the way you do business (e.g., ask clients to sign confidentiality waivers up-front)
- Lose business to other banks and/or financial institutions (e.g., brokerage houses) that do not have this requirement
- Lose business primarily of foreign clients who value their confidentiality
- Transfer accounts of foreign clients to offshore affiliates
- Change your approach to the private banking business (e.g., reduce the size or eliminate the bank’s private banking business in the United States)
- Other changes: Trust company in offshore location would have to counsel its clients to direct their assets elsewhere (i.e., outside of the United States)
- Other changes: Increase in civil litigation against clients because information will be readily available

Kane A. Wong, Assistant Director
Evelyn E. Aquino, Evaluator-in-Charge
José R. Peña, Senior Evaluator
Gerhard Brostrom, Communications Analyst
Pursuant to a congressional request, GAO reviewed U.S. regulatory oversight of private banking activities involving offshore jurisdictions, focusing on: (1) regulatory oversight procedures to ensure that offshore private banking activities are covered by banks' anti-money-laundering efforts; (2) deficiencies identified by banking regulators regarding offshore private banking activities and corrective actions taken by banks; (3) barriers hindering regulatory oversight of offshore private banking activities and efforts to overcome them; and (4) banking industry views regarding regulatory access to documentation pertaining to offshore private banking activities. GAO noted that: (1) federal banking regulators may review banks' efforts to prevent or detect money laundering in their offshore private banking activities during compliance or Bank Secrecy Act examinations; safety and soundness examinations; or during targeted examinations of their private banking activities; (2) to guard against offshore entities that maintain U.S. private banking accounts from being used for money laundering or other illicit purposes, examiners are to look for "know your customer" procedures that enable banks to identify and profile the beneficial owners of private banking accounts; (3) GAO's review of bank examination reports prepared under the Federal Reserve Bank of New York's (FRBNY) private banking initiative showed that the most common deficiency relating to offshore private banking was a lack of documentation on the beneficial owners of private investment companies (PIC) and other offshore entities that maintain U.S. 
accounts; (4) FRBNY and Office of the Comptroller of the Currency examiners noted other deficiencies during their respective examinations; (5) the bank examinations GAO reviewed, along with discussions with examiners and bank officials, indicated that most banks had started to take corrective actions to address deficiencies related to offshore private banking activities, but improvements were needed; (6) the nine offshore jurisdictions GAO identified for review have secrecy laws that protect the privacy of individual account owners, and five of them impose criminal sanctions for breaches of privacy; (7) moreover, federal banking regulators' attempts to work around restrictions associated with these secrecy laws are sometimes hampered; (8) GAO also found that all nine offshore jurisdictions selected for review were engaged in some type of anti-money-laundering activities; (9) although the efforts of individual jurisdictions may contribute to the international fight against money laundering, it is too early to ascertain their impact on money laundering or the extent to which the offshore jurisdictions' secrecy laws will continue to represent barriers to U.S. and other foreign regulators; (10) GAO surveyed officials from 15 banks that were asked by FRBNY to provide documentation on the beneficial owners of PICs and other offshore entities that maintained U.S. accounts; (11) GAO found that the officials had a number of concerns; and (12) in spite of these concerns, most officials indicated that their banks changed how they maintain documentation on offshore private banking activities in response to FRBNY's request for beneficial owner documentation.
Almost 537 million general-purpose credit cards were in circulation in the United States as of the end of 2012. Since at least the early 1990s, some credit card issuers have offered college affinity cards, which are governed by contracts (agreements) between the issuer and an organization such as a college, university, or alumni association. The cards typically bear the organization’s logo. In return, the issuer makes payments to the organization based on factors such as the number of cards issued or the amount charged to the cards. College affinity cards can be an effective way for issuers to market credit cards because college alumni often have an attachment to their schools. In addition, some credit card issuers, including banks and credit unions, offer college student credit cards—cards that are specifically labeled and intended for college students. As of November 2013, 6 of the 10 largest credit card issuers offered these cards, such as Citibank’s Citi Dividend Card for College Students and Bank of America’s BankAmericard Cash Rewards for Students. Issuers use these cards to help build a base of customers who may continue using the issuers’ credit cards after graduation. The terms and conditions of college student credit cards may be somewhat different from those of other credit cards—for example, they may have lower initial credit limits. The Credit Card Accountability Responsibility and Disclosure Act of 2009 (CARD Act) requires that card issuers and creditors not offer a student any tangible item to induce the student to apply for or participate in a credit card on or near the campus of an institution of higher education or at an event sponsored by or related to an institution of higher education. The act also requires credit card issuers to submit to CFPB each year the terms and conditions of any college affinity credit card agreement between the issuer and an institution of higher education or an affiliated organization in effect at any time during the preceding calendar year.
In addition to a copy of any college credit card agreement to which the issuer was a party, issuers also must submit summary information for each agreement, such as the number of cardholders with accounts open at year-end (regardless of when the account was opened) and the payments made by the issuer to the institution or organization during the year. CFPB must submit to Congress, and make available to the public, an annual report that contains the information submitted by the card issuers. CFPB's most recent annual report covered the 2012 calendar year. Since the CARD Act was passed in 2009, the numbers of college affinity card agreements and cardholders have decreased, according to data from the Federal Reserve and CFPB. From 2009 through 2012, the number of card agreements declined from 1,045 to 617 (41 percent). Similarly, the total number of cardholders for college affinity cards declined by 40 percent (see table 1). In 2012, 43 percent of the 617 college affinity card agreements were with alumni associations and 28 percent were with a college, university, or other institution of higher education (see fig. 1). Among these organization types, the greatest decline in the number of card agreements since 2009 was for institutions of higher education (see fig. 2). In contrast, the largest decline in the number of overall cardholders occurred within alumni associations, while “other” organizations (a category that includes fraternities, sororities, and professional or trade organizations) had the largest decrease in the number of new cardholders (see table 2). In 2012, 23 credit card issuers offered college affinity cards. The largest issuer—FIA Card Services, N.A., a subsidiary of Bank of America—had 412 of the 617 reported agreements (67 percent of the market). However, as seen in figure 3, the company’s market share has dropped since 2009.
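The percentage figures for these declines and market shares follow directly from the reported counts; for example:

```python
# Verifying the percentage figures from the counts reported in the
# Federal Reserve/CFPB data cited in the text.
agreements_2009 = 1045
agreements_2012 = 617

decline = (agreements_2009 - agreements_2012) / agreements_2009
print(round(decline * 100))   # -> 41 (percent decline in agreements, 2009-2012)

# FIA Card Services held 412 of the 617 agreements in effect in 2012.
fia_share = 412 / agreements_2012
print(round(fia_share * 100)) # -> 67 (percent of the 2012 market)
```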
Most affinity card issuers had a small number of card agreements—for example, the majority of issuers had one or two affinity card agreements (see table 3). The payments that card issuers made to institutions with which they had affinity card agreements have decreased since 2009, consistent with the decline in the number of agreements and cardholders. As shown in table 4, affinity card issuers made payments of $50.4 million to participating institutions in 2012. The median payment in 2012 was about $5,000, while the average (mean) was about $82,000, with alumni associations receiving the highest average payment. Total payments declined by 40 percent between 2009 and 2012. The largest decline in payments was to alumni associations, while institutions of higher education had the largest decline in affinity card agreements over this period. However, during that period, the average payment to institutions of higher education increased by about $13,200, while the average payment to alumni associations decreased by about $13,700. The University of Southern California, through its agreement with FIA Card Services, received about $1.5 million, the largest payment to an institution of higher education in 2012. Among all the organizations, the Penn State Alumni Association received the largest payment in 2012—about $2.7 million, from FIA Card Services. In contrast, 22 percent of the agreements did not result in any payments to organizations in 2012. College affinity card agreements serve as contracts between the card issuer and the participating organization. Using a data collection instrument, we reviewed 39 agreements filed with CFPB, which represented about 38 percent of all cardholders covered by college affinity card agreements in 2011. The agreements typically covered such things as the card’s target market, marketing practices, and payments to the participating organization. As shown in figure 4, the length of time that the card agreements had been in effect varied.
The oldest agreement was originally signed in January 1991 and the most recent in December 2011. Many of the agreements had been amended or received addendums since they were first adopted, which in some cases extended the terms of the original agreement. In addition to credit cards, 30 of the 39 agreements included other financial products, such as deposit and checking accounts, automobile and home loans, and investment accounts. Some agreements included exclusivity provisions that restricted the organization from offering its members these products except in conjunction with the current affinity card issuer. The agreements identified which potential cardholders the issuer could solicit and how. Thirty-seven of the 39 reviewed agreements identified specific target customers for the college affinity card. Most often, issuers targeted alumni for the cards, but two-thirds of the agreements also allowed the issuers to solicit undergraduate students (see table 5). Many of the agreements identified multiple target populations for card solicitations. All but two of the 39 reviewed agreements included provisions requiring the organization to provide a list of its members to the issuer for marketing purposes. However, two-thirds of the agreements included mechanisms allowing the organizations to exclude members who requested that they not receive third-party solicitations. Nine of the reviewed agreements also included restrictions on soliciting student members, generally by restricting their inclusion on the provided lists. The agreements allowed card issuers to solicit potential cardholders through a variety of methods (see table 6). More than 80 percent of the reviewed agreements allowed issuers to use telemarketing, website links (such as from the alumni association’s website), direct mail, and print advertisements (such as in sport programs or member magazines). All of the reviewed agreements allowed the issuers to use more than one of the different methods that we tracked.
All of the reviewed agreements included provisions allowing the card issuers to use the trademark or logo of the institution of higher education or organization. In some instances, the issuer could put these trademarks on gifts for individuals who completed applications or on other items. All but two of the agreements included provisions for obtaining prior approval of marketing materials from the organization or institution (to help ensure that the card issuer used the trademark or logo appropriately). All but one of the 39 reviewed agreements contained information about the payment arrangement between the issuer and the affiliated organization or institution of higher education. As shown in table 7, issuers most frequently provided payments to the organization or institution based on the number of new and open cards and the amount of money charged to the cards. Many included bonus payments for accounts that the organization or institution originated (as opposed to ones originated through the issuer), and three included bonus payments if the number of cardholders exceeded a threshold. Many of the agreements also included a guaranteed payment to the organization or institution that was not based on the number of cardholders or amount charged. The reviewed agreements sometimes contained payments based on other related products or included broader financial support to the organization or institution. For example, some payments under the agreements were based on balances of certificates of deposit or loans provided. In some instances, the reviewed agreements included support for scholarships or building renovations. About one-quarter of the 39 reviewed agreements included explicit consumer or cardholder protections or service standards. The consumer protections included restrictions on how often and how issuers could solicit group members, as well as restrictions on the sharing of the member database or student information with third parties. 
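The payment arrangements described above typically combine per-account payments for new and open cards, a share of the amount charged, and sometimes a guaranteed payment that applies regardless of card activity. A minimal sketch of how such a royalty might be computed; every rate, count, and the function name here is hypothetical and not taken from any actual agreement:

```python
def annual_payment(new_accounts, open_accounts, charge_volume,
                   per_new=50.0, per_open=3.0, charge_rate=0.005,
                   guaranteed_minimum=0.0):
    """Hypothetical royalty formula for a college affinity card agreement.

    All rates are illustrative; actual agreements vary and may also include
    bonuses for organization-originated accounts or cardholder thresholds.
    """
    earned = (new_accounts * per_new
              + open_accounts * per_open
              + charge_volume * charge_rate)
    # Some agreements guarantee a payment not tied to cardholder activity.
    return max(earned, guaranteed_minimum)

# Example: 200 new accounts, 5,000 open accounts, $4 million charged,
# with a $30,000 guaranteed minimum.
print(annual_payment(200, 5_000, 4_000_000, guaranteed_minimum=30_000.0))
# -> 45000.0
```

Under this sketch, the guaranteed minimum only binds when activity-based earnings fall below it, which is consistent with the report's finding that some agreements paid organizations even when 22 percent of agreements overall generated no payments.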
Two of the reviewed agreements also included metrics to assess servicing standards. For example, one agreement with an alumni association specified how quickly the issuer would answer and resolve calls and also required that the credit card terms and features (such as fees and annual percentage rates) be “best in class” when compared with a set of identified peer institutions. Additionally, most of the reviewed agreements included provisions allowing the issuer to make periodic adjustments to the card program and its terms and features. According to available data and representatives of card issuers and affiliated organizations, marketing of college affinity cards and college student credit cards directly to students appears to have declined. As of 2013, college affinity cards were not being marketed directly to students, according to representatives of issuers and affiliated organizations with whom we spoke. Four large issuers of affinity cards, representing 91 percent of the market (as measured by 2012 cardholders), said they did not actively market these cards to students— that is, they did not market on campus or specifically target students through direct mail, e-mail, print or broadcast media, or their other marketing venues. Representatives of five affiliated organizations with affinity credit card agreements corroborated these statements; they told us that the card issuers no longer marketed their affinity cards to students, focusing instead on alumni. The issuers noted that it was still possible that some students applied for college affinity cards because they would see the same marketing as the general public—such as advertising at bank branches, sporting events, or on issuer websites. Graduate students also may receive card solicitations from their undergraduate schools or alumni organizations. 
Officials from three affiliated organizations estimated that the percentage of their current cardholders who still were students was less than 3 percent and that these percentages had been declining. Before the enactment of the CARD Act in 2009, it was not uncommon for college affinity cards to be marketed to students. Representatives of four organizations with college affinity cards told us that at one time their cards were targeted to students, and, as discussed earlier, card agreements often specified students as a target market and required sharing student contact information for marketing purposes. However, many of the new agreements—and amendments to existing agreements—we reviewed that were put in place after 2009 expressly limit or restrict the marketing of college affinity cards to students. Views diverged on the extent to which the CARD Act was responsible for the decline in marketing of college affinity cards to students. One card issuer told us the act had little influence because the company had begun reducing marketing cards to students before the statute was enacted. A second issuer said it did not market to students because it sought more affluent customers, but it acknowledged the CARD Act also played a role by making it more difficult and less efficient to market to students—for example, placing restrictions on making prescreened credit offers to those under 21. Representatives of three organizations with college affinity cards told us they believed the CARD Act played a significant role in the decline of card marketing to students. Institutions of higher education also may have influenced this decline—for example, representatives of one college told us that undergraduate students were not included in its program or targeted for marketing, largely because the college did not want to be seen as pushing credit cards on its students. 
Marketing of college affinity cards overall—not just to students—has declined in recent years as many large issuers have diminished their presence in the marketplace. As discussed earlier, the number of agreements and cardholders declined by 41 percent and 40 percent, respectively, from 2009 through 2012. Three of the four affinity card issuers told us they were not actively seeking additional agreements. Specifically, one issuer said it was exiting the marketplace as existing agreements expired, one said it was evaluating the performance of its existing portfolio before deciding a future direction, and one said it was evaluating each agreement as it expired and did not regard its college affinity card business as strategically important. The fourth issuer noted that while it was seeking new agreements, it had ended many of its existing agreements because it did not see the program being sustainable over the long term. Although some card agreements provide payments to the affiliated organizations based on the number of card accounts, organizations told us they generally played a limited role in marketing the cards to their members. Three of the five organizations had sent e-mails to their members promoting the cards. One of these organizations told us it would like to do additional marketing of its own but that the issuer had been reluctant to permit this. Two organizations were concerned that participation in marketing could affect the tax status of their payments under the agreements. Active marketing of college student credit cards appears to have declined in recent years. We spoke with five issuers of these cards, which represented 39 percent of all general-purpose credit cards in circulation as of December 2012. All the issuers told us that as of 2013, they did not rely on active marketing to students to solicit potential cardholders. Active marketing includes methods such as direct mail, telemarketing, or e-mail. 
Instead, interested students could learn about the cards through issuer or third-party websites and bank branch offices. Representatives of affiliated organizations with whom we spoke confirmed they had observed a reduction in the marketing of credit cards to students in recent years. For example, they noted that issuers no longer conducted on-campus solicitations at sporting events and other university functions, as they had in the past. Two organizations told us they believed the decline in marketing of college student credit cards began by the early or mid-2000s, while three others said it began around 2009, when the CARD Act was enacted. According to annual surveys of college students conducted by Student Monitor, a market research firm specializing in the college student market, the number of students obtaining a credit card in response to a solicitation through direct mail or on campus has dropped significantly in recent years. The proportion of students reporting that they obtained a credit card as a result of a direct mail solicitation declined from 36 percent in 2000 to 6 percent in 2013 (see fig. 5). In 2013, students reported receiving significantly fewer mail (1.6) and e-mail solicitations (1.4) in a typical month than respondents in 2007 (5.6 and 9.1, respectively). Two issuers told us the decline in direct mail resulted in part from restrictions in the CARD Act on prescreened credit offers to those under 21. In 2013, fewer than 1 percent of students obtained their credit card as a result of an on-campus display or a company representative on campus (a practice known as tabling), as compared with 15 percent and 6 percent, respectively, in 2000. Two issuers told us they still used on-campus marketing but that they focused on their other financial products, such as checking accounts, and no longer accepted credit card applications at on-campus events.
According to Student Monitor, more students have been acquiring their credit cards by initiating contact with the card issuer. For example, in 2013, 48 percent of students receiving a credit card applied in person at a bank (often the one with which they already had a deposit account), compared with 14 percent in 2000. Twenty-four percent received a card by initiating contact through the Internet or by telephone, compared with approximately 8 percent in 2000. Data are not available to definitively determine the effect that affinity cards and college student cards have had on student credit card debt. The effect of affinity cards may be limited because, as seen earlier, fewer students appear to hold these cards. The effect of college student cards is difficult to determine because the available data cover credit cards in general rather than college student cards in particular. However, students’ overall use of credit cards appears to have declined in recent years. Publicly available data do not allow a clear determination of the impact of college affinity and college student cards on student credit card debt. One limiting factor is that card issuers do not always ask on applications or know which of their cardholders are students. Additionally, while some data exist about the age of credit cardholders, age is not a reliable proxy for student status, especially as the age of college students has increased in recent years. Multiple studies have examined the factors influencing credit card use among students, but none that we identified focused specifically on the impact of college affinity or college student credit cards on student debt. For example, a 2012 study that examined the effect of the CARD Act reviewed affinity card agreements and surveyed college students on their use of credit cards, but it did not seek to determine the impact of particular types of credit cards. 
Similarly, surveys on student credit card use by Sallie Mae (a financial services company specializing in education) and Student Monitor, discussed later in this report, asked respondents about general credit card use but not specifically about affinity or college student credit cards. The effect of affinity cards on student credit card debt may be limited because fewer students appear to hold these cards, which generally have not been marketed specifically to college students since at least 2009. Representatives of two organizations with affinity cards estimated that 1 percent or less of their current cardholders were students, while a third organization estimated that 3 percent were students. The proportion of affinity cards held by students appears to have declined since the cards’ introduction. For example, one organization estimated that in the past, up to 15 percent of its cardholders were students, but that virtually none were at present. However, the exact prevalence of students holding affinity credit cards is not known. While the CARD Act requires issuers to submit information on college affinity card programs, including the number of cardholders, to CFPB, issuers are not required to report on the number of student cardholders. Three affinity card issuers with whom we spoke said that they could not identify which cardholders were students, or they considered cardholder information proprietary and therefore declined to share the information. We did not identify data that would allow a determination of the effect of college student credit cards in particular—as distinct from credit cards in general—on student credit card debt. Representatives of CFPB, researchers, and organizations that have studied credit card use told us that they were not aware of research or data sets specific to college student credit cards. 
While banks file quarterly reports with regulators that contain information on the banks’ credit card portfolios, these reports do not differentiate by type of card. Similarly, The Nilson Report, an industry trade journal that reports on credit cards, has not issued a report specific to student credit cards in more than 10 years. Four issuers of college student credit cards told us they were unable to share specific information on these cards or the student holders of their other credit cards because the information was not available or they considered such information proprietary. Even if comprehensive data on college student credit cards existed, the data’s value for understanding student credit card debt would be limited because many cardholders could continue to use their student cards after they ceased being students. While data specific to college affinity and college student credit cards are limited, available evidence suggests college students’ use of credit cards overall has declined in recent years. Annual surveys of college students conducted by Sallie Mae and Student Monitor represent two primary sources of information on student credit card use. The two studies suggest that the number of students owning credit cards declined in recent years. Student Monitor found that the proportion of college students holding credit cards declined from 53 percent in 2004 to 33 percent in 2013. Sallie Mae found that the proportion of students owning credit cards decreased from 49 percent in 2010 (the first year it began collecting this information) to 29 percent in 2013 (see fig. 6). In the Student Monitor study, 72 percent of students who had a credit card in their own name in 2013 owned a single card, 21 percent had two credit cards, and 8 percent had three or more credit cards. Student Monitor, Financial Services – Spring 2013 (Ridgewood, N.J.: June 2013). Overall credit card ownership includes cards the students own and those for which they have permission to use (typically, parents’ cards).
The number of students who had a credit card in their name similarly declined—from 46 percent in 2004 to 26 percent in 2013. See Student Monitor, 2013. Forty-five percent of all students with a credit card in their name charged $100 or less each month in 2013, according to Student Monitor. Fifty-nine percent used their card fewer than six times a month, including 8 percent who indicated that they did not usually use their credit card each month. On average, students charged $171 monthly, a decrease of 8 percent from 2012. Sallie Mae reported that students’ median reported balance for all cardholders was $179 in 2013, as compared with $289 in 2011. The survey also found that 2 percent of all students in 2013 with a credit card had a combined outstanding balance of more than $4,000, while 29 percent had a zero balance. Student Monitor found that the 28 percent of respondents who carried a balance had a median outstanding balance of $136. Credit limits for credit cards owned by students usually are lower than those for the general population and have been decreasing. In 2010, the median credit limit for all bank-type general credit cards was $15,000. In contrast, Student Monitor found that more than 60 percent of credit cards owned by students had credit limits of $500 or less, and 80 percent were $1,000 or less. In 2000, 27 percent of respondents had credit limits of $1,000 or less, and 11 percent had limits of at least $5,000. However, because students may have multiple credit cards, their total credit card debt can be higher than the credit limit of any one card. Several studies provide information on college students’ payment patterns for credit cards:

Payment amount. Student Monitor found that 72 percent of students reported paying their outstanding charges in full each month in 2013. Sallie Mae found that 52 percent of student respondents in 2013 paid in full each month in the previous year, and that 10 percent of students typically made only the minimum payment.
Parental responsibility. The Student Monitor and Sallie Mae studies found that the college student, rather than the parent, was most often responsible for making credit card payments (79 percent in the Student Monitor study and 92 percent in the Sallie Mae study). Late payment fees. One quarter of students in the 2013 Student Monitor study reported paying a late payment fee at least once since acquiring a credit card, with almost half of that group incurring more than one late fee. Delinquent payment. Cardholders under 21 were more likely to experience minor delinquencies (30 or 60 days past due) than older cardholders, according to the Federal Reserve Bank of Richmond study. At the same time, young cardholders were substantially less likely to experience serious delinquency (90 days past due and longer). The study also found that cardholders who got their credit cards earlier in life were less likely to experience a serious default later in life. We provided a draft of this report to CFPB and the Federal Reserve. We incorporated technical comments from these agencies as appropriate. We are sending copies of this report to the appropriate congressional committees, CFPB, the Federal Reserve, and other interested parties. In addition, the report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. This report examines (1) the trends associated with and characteristics of college affinity card agreements, (2) the extent of marketing for college affinity and college student credit cards, and (3) what is known about the effect of the use of affinity cards and student credit cards on student credit card debt. 
We use “college affinity card” to refer to a credit card issued in conjunction with an agreement between a credit card issuer and an institution of higher education or an affiliated organization (such as an alumni association or foundation). We use “college student credit card” to refer to a credit card established for and targeted to college students. To identify the trends and general characteristics associated with college affinity card agreements, we reviewed the 2010 and 2011 Report to the Congress on College Credit Card Agreements, which the Board of Governors of the Federal Reserve System (Federal Reserve) issued, and the 2012 and 2013 College Credit Card Agreements: Annual Report to Congress, which the Bureau of Consumer Financial Protection (also known as the Consumer Financial Protection Bureau or CFPB) issued. These reports provide summary information, such as the total number of agreements in effect and number of accounts open at the end of the year. To determine the characteristics of the agreements, we first reviewed and analyzed affinity card agreements between issuers and schools or affiliated organizations from calendar years 2009-2012. We downloaded information for all those agreements in effect during those years from public databases managed by the Federal Reserve and CFPB. We analyzed the agreements to identify general trends and characteristics. We assessed these data by interviewing Federal Reserve and CFPB staff knowledgeable about the data and checking the data for illogical values or obvious errors. We found the data to be sufficiently reliable for describing the general characteristics and trends of the affinity card marketplace. Because we were most interested in current agreements, we focused our analysis on the agreements from 2011, the most recent year for which information was available. Twenty providers issued these agreements. 
For a more thorough analysis, we selected a nonprobability sample of 39 agreements from 574 agreements identified as being in effect as of January 1, 2012. We determined the sample by applying three criteria to the agreements. First, we included the 25 largest agreements overall, as measured by the number of cardholders. Second, we included the largest agreement from each issuer that provided affinity credit cards. Third, we included the five largest agreements with institutions of higher education, as measured by the number of cardholders under those agreements. These numbers do not add to 39 because agreements could meet criteria for inclusion under more than one category. We selected these criteria because we wanted to capture a large proportion of affinity cardholders as well as any potential variation among issuers or organizational type. We included the five largest institutions of higher education because we anticipated that those agreements could be more likely to include students as cardholders, a topic of specific interest. See table 8 for the list of reviewed agreements. Collectively, the agreements included in our sample covered about 38 percent of all cardholders in 2011. Twenty-six of the reviewed agreements were with alumni associations, 8 were with institutions of higher education, 3 were with foundations, and 2 were with other organizations. We reviewed these agreements and collected information using a data collection instrument (DCI) to gather characteristics such as their effective date, duration, allowed marketing practices and target populations, payments to the organization, consumer protections, and service standards. Findings from this limited review of 39 agreements cannot be generalized to the overall population of agreements in 2011. We developed the DCI after reviewing some of the 2011 agreements, focusing on items such as the scope, consumer protections, marketing practices, payments, terms, and fees. 
We converted the DCI to a pdf format for direct data entry. Three team members entered information on two agreements each using the DCI and discussed their experiences. We revised some questions for clarity and deleted others to avoid duplication. This version of the DCI was reviewed by a GAO survey specialist and an expert who had surveyed students regarding their credit card use and had conducted a similar review of credit card agreements, and we incorporated minor changes. We further clarified that the review of agreements would focus on the most recent full agreement or amendment and those items that were still in effect as of January 1, 2012. Two team members then entered information about the agreements into the DCI. We verified our coding by comparing the original coder’s DCI responses with those of the second coder. For each comparison set, we compared the coding for 59 data elements and found discrepancies in fewer than 10 percent of the entries. We determined this rate was sufficient to ensure a base level of reliability in the information collected. While the results of our review of the 39 agreements cannot be projected nationwide, they provide context and information related to the contents of the 39 agreements. To address the second and third objectives, we reviewed documents and interviewed representatives of credit card issuers, organizations and schools with affinity card agreements, and federal agencies, as well as academics and other individuals who have studied credit cards and their use by students. We reviewed studies—such as those by Sallie Mae (a financial services company specializing in education), Student Monitor (a market research firm that specializes in the college student market), and U.S. Public Interest Research Group (a consumer advocacy organization)—on student credit card use. 
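The double-coding check described above—two coders independently entering 59 data elements per agreement, with discrepancies tolerated in fewer than 10 percent of entries—can be illustrated in a few lines. This is a hypothetical sketch, not GAO's actual tooling; the function name and sample data are invented for illustration.

```python
# Illustrative sketch (hypothetical, not GAO's actual tooling): computing the
# inter-coder discrepancy rate for a double-coded data collection instrument.
# Each coder's entries are a dict mapping data-element names to coded values.

def discrepancy_rate(coder_a: dict, coder_b: dict) -> float:
    """Share of commonly coded data elements where the two coders' entries differ."""
    elements = coder_a.keys() & coder_b.keys()
    if not elements:
        return 0.0
    differing = sum(1 for e in elements if coder_a[e] != coder_b[e])
    return differing / len(elements)

# Hypothetical example: 59 data elements, with 5 in disagreement,
# yielding a rate of about 8.5 percent—under a 10 percent threshold.
a = {f"element_{i}": "yes" for i in range(59)}
b = dict(a)
for i in range(5):
    b[f"element_{i}"] = "no"
print(f"{discrepancy_rate(a, b):.1%}")  # 8.5%
```

A check like this only measures agreement between coders; resolving each discrepancy (as a review team would) is a separate, manual step.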
We identified these studies through the Econ Lit database and general Internet searches using terms such as “college credit cards.” We focused on several years of Sallie Mae’s How America Pays for College studies and on Student Monitor’s Financial Services – Spring 2013 report, as well as recent reports by CFPB and the Federal Reserve Bank of Richmond. We assessed the quality of the survey data by interviewing Student Monitor and Sallie Mae officials knowledgeable about the data and checking the data for illogical values or obvious errors. We found the data to be of sufficient quality and reliability for providing general information on student credit card use. According to Student Monitor, the estimates from the 2013 study had a 2.4 percent margin of error at the 95 percent confidence level. Because surveys are based on self-reporting of payment behaviors and estimated credit card debt levels, they may be prone to biases and not accurately represent actual behaviors and debt levels. The surveys were not designed to verify that information. Some researchers maintain that respondents sometimes underreport the quantity or level of characteristics that could be considered unflattering, such as the amount of outstanding credit card debt. We also reviewed marketing materials issuers used to market the cards. Lastly, we reviewed provisions of the Credit Card Accountability Responsibility and Disclosure Act of 2009 related to affinity credit cards and credit card use by those under 21. We interviewed representatives of four issuers of affinity credit cards—Bank of America (FIA Services), Capital One, Chase, and U.S. Bank—that had 91 percent of such cardholders in 2012. We also interviewed representatives of the five largest general credit card issuers, measured by 2011 portfolio size, as reported by The Nilson Report. We discussed 
with these issuers—American Express, Bank of America (FIA Services), Chase, Citibank, and Wells Fargo—any student credit cards currently or previously issued. To get a broader perspective on the use of these cards by financial institutions, we also interviewed representatives of three industry trade groups—the American Bankers Association, the Consumer Bankers Association, and the Credit Union National Association. We also interviewed representatives of six organizations and schools to discuss their affinity credit card relationships—the Association of Former Students of Texas A&M University, Boston University Alumni Association, Georgia Tech Alumni Association, Golden Key International Honour Society, Penn State Alumni Association, and Washington University. These six organizations were chosen because they had among the largest number of affinity cards (as determined by number of cardholders), had cards from the three affinity card issuers with the most agreements, and included one organization (Washington University) that had previously chosen to end its affinity agreement. To get a broader perspective, we also interviewed representatives of the National Association of College and University Business Officers. In addition, we interviewed representatives of CFPB, the Department of Education, the Federal Reserve, and the Office of the Comptroller of the Currency to discuss their oversight of affinity and student credit cards and trends they have observed in the industry. We talked with two academics who have studied and written about student credit card use, as well as representatives of Sallie Mae, Student Monitor, and the U.S. Public Interest Research Group. The Nilson Report is a twice-monthly trade journal that provides information on companies, products, and services from the payments industry. We conducted this performance audit from December 2012 to February 2014 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making key contributions to this report were Jason Bromberg, Assistant Director; Amy Anderson; Kevin Averyt; Daniel Newman; Christopher H. Schmitt; and Michelle St. Pierre. In addition, key support was provided by Bethany M. Benitez; Kenneth Bombara; William Chatlos; Barbara Roesmann; and Jena Sinkfield.
Institutions of higher education, alumni groups, and other affiliated organizations may enter into agreements with credit card issuers for “college affinity cards,” in which issuers use the institution's name or logo in exchange for payments. Separately, some credit card issuers offer “college student credit cards,” which are expressly targeted to students. Partly in response to concerns about card issuer practices and rising student credit card debt, Congress passed the Credit Card Accountability Responsibility and Disclosure Act of 2009. The act includes consumer protections and disclosure requirements specifically for consumers under the age of 21, including limits on on-campus credit card marketing and requirements for public disclosure of affinity card agreements. The act mandates that GAO review these agreements and assess their effect on student credit card debt. This report examines (1) trends associated with and characteristics of college affinity card agreements, (2) the extent of marketing for college affinity cards and college student credit cards, and (3) what is known about the effect of use of these cards on student credit card debt. GAO analyzed data from the Federal Reserve and CFPB, including a sample of 39 affinity agreements filed by the issuers. GAO also analyzed data on student credit card use or indebtedness, and interviewed officials from federal agencies, credit card issuers, and affiliated organizations. Trends associated with college affinity card agreements include fewer agreements and cardholders and declining payments, according to data GAO analyzed from the Board of Governors of the Federal Reserve System (Federal Reserve) and the Bureau of Consumer Financial Protection (CFPB). The number of affinity card agreements declined from 1,045 in 2009 to 617 in 2012 (41 percent). 
More than 70 percent of the agreements in 2012 were with institutions of higher education or alumni organizations, and one issuer—FIA Card Services, a subsidiary of Bank of America—had 67 percent of all agreements. Affinity card issuers paid $50.4 million to all organizations in 2012, 40 percent less than in 2009. In most cases, payments were based on numbers of cardholders and the amount spent on the cards. The card agreements covered contractual obligations related to such things as marketing practices, target populations, use of the organization's logo or trademark, terms of payment, and, in some cases, service standards. Student-focused marketing of affinity and student cards on campus appears to have declined. Four large affinity card issuers GAO interviewed (representing 91 percent of cardholders) said that they primarily targeted alumni and no longer marketed affinity cards directly to students. In interviews with GAO, institutions of higher education and affiliated organizations agreed that affinity card marketing directly to students had ceased. In addition, five of the nine largest overall credit card issuers that also issue college student credit cards told GAO they no longer actively marketed these cards (such as through direct mail, e-mail, or on-campus activity), but rather relied upon websites and bank branches. Representatives of five institutions with large affinity card agreements told GAO that they generally noticed a decline in on-campus credit card marketing in recent years. Consistent with these observations, available data show a decline in card solicitations to students in recent years. For example, a survey of students in 2013 by Student Monitor, a research firm, found that 6 percent of students reported obtaining a credit card as a result of a direct mail solicitation, compared with 36 percent in 2000. 
Data are not available to definitively determine the effect that affinity cards and college student credit cards have had on student credit card debt. For affinity cards, the effect may be limited because fewer students appear to hold such cards. For college student credit cards, the effect is difficult to determine because data are available for credit cards in general but not for student credit cards in particular. However, students' overall use of credit cards appears to have declined in recent years. For example, Student Monitor reported 33 percent of students owned credit cards in 2013 versus 53 percent in 2004, a trend corroborated by several other studies that GAO identified. But Student Monitor found that students with credit cards in their names increasingly obtained the cards before starting college. In addition, it found that in 2013, students charged an average of $171 monthly on their cards, 80 percent of the cards had a credit limit of $1,000 or less, and 72 percent of students said they paid their outstanding charges in full each month. Student Monitor also reported that one quarter of students in 2013 paid a late payment fee at least once since they acquired the credit card, with almost half of those paying more than once. GAO makes no recommendations in this report.
USDA relies on telecommunications systems and services to help it administer federal programs and serve millions of constituents. From telephone calls to video conference meetings to providing nationwide customer access to information, USDA reports that it spends about $219 million annually for a wide array of telecommunications technology. Voice and data communications, provided by the federal government’s FTS 2000 program, and hundreds of commercial carrier networks help the department’s 31 departmental offices and agencies and thousands of field offices carry out USDA’s broad missions and serve customer needs. In 1995 and 1996, we reported that USDA was not cost-effectively managing and planning its substantial telecommunications investments and was wasting millions of dollars each year as a result. Specifically, we found that USDA was paying for unnecessary or unused telecommunications equipment and services because of breakdowns in management controls. For example, we found that USDA had been paying tens of thousands of dollars annually for leased telecommunications equipment, such as rotary telephones and outdated computer modems, that it no longer even had. USDA was wasting as much as $5 million to $10 million annually because the department had not acted on opportunities to consolidate and optimize its FTS 2000 telecommunications services. USDA agencies were spending hundreds of millions of dollars developing redundant networks that perpetuate long-standing information sharing problems because the department was not adequately planning departmentwide telecommunications in support of USDA’s information sharing goals. USDA had hundreds of cases of telephone abuse because the department lacked adequate controls over the millions of dollars it spends each year on commercial telephone services. 
Many of these cases involved inappropriate collect calls made from individuals in 18 correctional institutions, accepted and paid for by USDA, and then possibly transferred to other USDA long-distance lines. We made numerous recommendations in our reports to help USDA correct these problems. Given the seriousness of these management weaknesses and the waste we found, we also recommended in 1995 that the Secretary of Agriculture report the department’s management of telecommunications as a material internal control weakness under the Federal Managers’ Financial Integrity Act (FMFIA). Under federal law, government agencies are required to properly and cost-effectively manage all information technology investments, including telecommunications. To do this, agencies must have processes and practices established that ensure sound planning and information technology decision-making, and cost-effective management and use of information technology investments. To further strengthen executive leadership in the management of information technology, the Congress enacted the Clinger-Cohen Act of 1996, which created a chief information officer (CIO) position in federal agencies and emphasized the need for instituting sound management practices to maximize the return on information technology investments. In August 1996, the Secretary of Agriculture established a CIO position and in August 1997 designated the Deputy Assistant Secretary for Administration as USDA’s first CIO. The CIO, who reports to the Secretary, is responsible for providing the leadership and oversight necessary to ensure the effective design, acquisition, maintenance, use, and disposal of all information technology by USDA agencies, which include telecommunications, and for monitoring the performance of USDA’s information technology programs and activities. 
To address our objective, we reviewed agency documentation and interviewed USDA officials to identify the department’s actions to address our recommendations to (1) establish sound telecommunications management practices, (2) consolidate and optimize FTS 2000 telecommunications services for savings, (3) plan networks in support of information and resource sharing needs, and (4) correct telephone abuse and fraud. To assess the adequacy of these corrective actions, we reviewed plans, studies, activity reports, and other documentation at USDA headquarters, USDA’s National Finance Center (NFC), and agency offices and discussed the status and progress of actions taken with USDA officials. We also reviewed studies as well as vendor billing information for FTS 2000 and commercial services to evaluate the results of USDA’s corrective actions. Appendix I provides further details on our objective, scope, and methodology. We conducted our review from August 1997 through April 1998 in accordance with generally accepted government auditing standards. We provided a draft copy of this report to USDA for comment. USDA’s comments are discussed in the report and are included in full in appendix II. In 1995, we reported that USDA lacked sound management practices over its large annual telecommunications investments and was not cost-effectively managing these investments. Because of this, the department wasted millions of dollars each year paying for unnecessary or unused telecommunications services and equipment, and services billed but never provided. We therefore recommended that USDA should report its management of telecommunications resources as a material internal control weakness under the Federal Managers’ Financial Integrity Act (FMFIA) and take immediate and necessary steps to ensure that all telecommunications resources are properly managed and costs are effectively controlled. 
USDA agreed that it has to do a significantly better job managing its telecommunications investments. It reported telecommunications management as a material management control weakness in its fiscal year 1996 and fiscal year 1997 FMFIA reports, and began improvement initiatives to reengineer telecommunications management, audit telephone invoices, establish telecommunications inventories, and strengthen departmentwide policy. By implementing improvements such as reengineering telecommunications management, the department reported in November 1997 that its telecommunications costs could be reduced as much as $30 million annually. However, to date, USDA has not fully implemented the revised and improved management practices. As a result, it has neither achieved significant savings nor substantially strengthened telecommunications management. Under the Federal Managers’ Financial Integrity Act of 1982 (31 U.S.C. 3512), agencies must establish internal controls to reasonably ensure that agency assets are effectively controlled and accounted for. Agencies must also annually report material weaknesses in these controls to the President and the Congress and describe plans and schedules for correcting these weaknesses. Given the lack of sound management practices over telecommunications and the serious management weaknesses we found at USDA, we recommended in our 1995 report that the Secretary of Agriculture report the department’s management of telecommunications as a material internal control weakness under FMFIA. We also recommended that this weakness should remain outstanding until USDA institutes effective management controls. In response to our recommendations, USDA reported its overall management of telecommunications as a material management control weakness in its fiscal year 1996 FMFIA report. 
Specifically, the report generally discussed corrective actions planned or underway to address (1) inadequate telecommunications management and network planning, (2) opportunities to consolidate and optimize telecommunications services for savings, and (3) telephone abuse. In USDA’s FMFIA report for fiscal year 1997, the department continued to report telecommunications management and network planning and the management of telecommunications services as material weaknesses, stating that estimated completion dates to resolve these weaknesses had been delayed. Specifically, the report states that USDA extended the expected completion date 1 year for resolving its telecommunications management and network planning weaknesses, from fiscal year 1998 to fiscal year 1999, and 2 years for addressing opportunities to consolidate and optimize telecommunications services for savings, from fiscal year 1998 to fiscal year 2000. A departmental task force that assessed USDA’s telecommunications management concluded: “The processes of planning, acquiring, ordering, billing, invoicing, inventory control, payments, and management of telecommunications services and equipment [are] chaotic at best and totally out of control at the very least. These processes are disparately performed across agencies and even within agencies. The capability to plan, review, and capitalize on USDA telecommunications investments is far beyond the reach of any USDA manager to make rational decisions based on hard inventory and billing facts. Agency managers who are responsible for telecommunications services have neither the information they need to manage these resources nor the billing/invoice information to ensure that USDA is receiving the services it ordered and for which it is being billed. The systems/processes are outdated and broken.” The task force recommended a series of critical and essential actions to begin to address these problems. 
It identified business process reengineering of telecommunications management activities across the department as the most critical action for fundamentally improving the processes and systems supporting telecommunications management. The activity included, among other things, redesigning approaches for obtaining and reviewing billing information through electronic data interchange (EDI) and creating management processes that (1) reduce payments made for services not received and equipment not owned, (2) promote increased resource sharing between agencies, and (3) provide accurate and timely reports to agency managers for monitoring the cost-effective use of all telecommunications resources. Later in February 1996, the Deputy Assistant Secretary for Administration and the acting CFO accepted the task force’s recommendation to complete a telecommunications reengineering study within 6 months, and pilot test and implement reengineered telecommunications management processes throughout the department within 24 months. USDA has reported that it expects to correct its most serious management weaknesses through this effort and, at the same time, save up to $30 million annually by streamlining administration of telephone bills and validating agency payments made to telephone companies to eliminate unnecessary charges for services, lines, and features that are not in use. However, USDA did not complete its reengineering study until August 1997 and does not expect to have its reengineered telecommunications management processes fully implemented before September 1999, at the earliest, which is 3-1/2 years after USDA accepted the task force’s recommendations. Much of this delay occurred because USDA’s reengineering effort, although critical, lacked effective direction and oversight. For example, it took USDA nearly 4 months (from February 1996 to June 1996) to form a project team for the reengineering study. 
Project officials said further delays resulted from the lack of clear direction over project activities. This was because management responsibility for the work on the study was split among the Deputy Assistant Secretary for Administration and acting CFO and an executive review board made up of program and management officials. Concurrent with the reengineering effort, USDA began additional initiatives to address other management improvement and cost-savings recommendations we made. For example, because USDA agencies do not generally review commercial telephone bills to verify charges, we reported that the department was paying tens of thousands of dollars for leased telecommunications equipment and other services it had not used for years. We therefore recommended that USDA review commercial telephone bills for accounts over 3 years old to identify instances where the department may be paying for services that are no longer being used. Following the Secretary’s direction, in May 1996, USDA’s acting CFO and NFC began a one-time audit of all commercial telephone invoices. To do this, copies of all billing invoices paid to telephone companies for a 1-month period in 1996 were sent by NFC to USDA agencies for verification. The audit involved the review of over 25,000 paper invoices. Agencies and offices were asked to identify duplicate services, unnecessary services, and services billed but not received. As of March 1998, the audit was about 90 percent complete and had identified about $470,000 in annual savings. USDA expects to recoup the overall cost of this audit from the savings achieved during the first year. Opportunities to save millions more were also identified when it was disclosed that USDA agencies were paying tens of thousands of dollars each month for thousands of unused FTS 2000 e-mail boxes. As a result, more than half of USDA’s 15,953 FTS 2000 e-mail accounts were disconnected, reducing USDA’s telecommunications costs by about $3.3 million. 
In one case, for example, we were told that the Secretary’s office found it had been paying monthly storage charges for an FTS 2000 e-mail box for a former Secretary who had left the department in 1993. Efforts to identify and eliminate additional unused e-mail accounts are continuing. In 1995, we reported that USDA and its agencies lacked basic information describing what telecommunications equipment and services USDA uses and what it pays for these resources because telecommunications inventories had not been established by the department. As we pointed out in our report, inventories are fundamental to sound telecommunications management and are necessary, among other things, to identify telecommunications resources that are outdated or no longer used and ensure that agencies pay for only those services that they use. Consequently, we recommended that the department take immediate steps to ensure that departmentwide telecommunications inventories were established and properly maintained. In response to our report, the CIO’s office began work with a contractor to help the department establish telecommunications inventories. As part of this effort, the contractor (1) prepared a plan for conducting inventories departmentwide and (2) initiated a pilot project to conduct a physical inventory of telecommunications equipment at six sites for two USDA agencies in the Washington, D.C., area. At just these six sites, the contractor found the USDA offices were being billed more than $200,000 annually for inactive lines, active lines not in use, and lines that could not be identified. However, the department did not implement the contractor’s plan and did not act to ensure that all unneeded or unused services were eliminated. Although the contractor’s plan was not implemented, USDA has taken other actions to begin collecting inventory information. 
Specifically, in connection with efforts now underway to test USDA’s reengineered telecommunications management processes and address Year 2000 readiness, the CIO’s office told USDA agencies to have their telecommunications inventories completed by July 1998. Until USDA establishes inventories and fully tests and implements improved telecommunications management processes departmentwide, USDA cannot ensure that unnecessary or unused services have been discontinued. In 1995, we also recommended that USDA establish and implement procedures necessary to ensure that all unneeded telecommunications services are terminated at offices that close or relocate. Since passage of the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994, USDA has closed or relocated about 1,300 field offices and plans to close or relocate hundreds more in the next few years. Effective procedures are essential to precluding payments for services at offices after they have been closed or relocated. While the CIO’s office revised the department’s telecommunications policy in March 1996 to require USDA agencies and offices to ensure the termination of telecommunications services at offices that close or relocate, CIO officials said that they did not monitor agencies’ compliance with this policy. Accordingly, USDA does not know whether the policy has been implemented throughout the department. Telecommunications managers at two USDA agencies we spoke with said that they had not reviewed billing records to ensure that telecommunications services were terminated for all of their offices that had closed or relocated. In fact, USDA has identified cases in which the department continued to incur service charges at agency offices that had closed or relocated. For example, one USDA agency told us that the department continued to pay a total of about $90,000 for vendor-provided services for an office in Florida that had been closed since 1984.
After identifying this case, the agency telecommunications manager terminated the service in October 1997 and sought reimbursement from the vendor for some of these charges. In 1995, we also reported that USDA was missing millions of dollars in savings because the department had not consolidated and optimized FTS 2000 telecommunications services where there were opportunities to do so. Such savings opportunities existed because, over the years, hundreds of field office sites across the department had obtained and continued to use separate, and often redundant, telecommunications services at office sites where multiple USDA agencies are located within the same building or geographic area. Therefore, we recommended that USDA identify and act on opportunities to consolidate and optimize FTS 2000 telecommunications services and preclude departmental agencies and offices from obtaining and using redundant services. USDA agreed with our recommendation and began a departmentwide initiative, called Initiative 6, that used a network analysis tool to identify instances in which USDA agencies and offices located in the same building could consolidate and optimize FTS 2000 telecommunications services for savings. By November 1995, USDA agencies and offices had been provided with 775 specific opportunities to eliminate FTS 2000 redundant services. USDA eliminated about $3.2 million in redundant FTS 2000 services under this effort but took no action on nearly half of the Initiative 6 cost-savings opportunities and terminated the initiative. The CIO’s office later reactivated Initiative 6 after we began our review and, once again, identified additional opportunities for savings. However, CIO officials told us that they were not actively following up on these because new priorities, such as the need for USDA agencies to ensure Year 2000 compliance, were consuming most of the agencies’ information technology staff resources.
USDA also began tracking agency purchases of FTS 2000 services. As part of the department’s moratorium on information technology investments, established by the Deputy Secretary in November 1996, the CIO’s office began reviewing individual agency requests for new telecommunications services and equipment to help ensure that opportunities to share resources among agencies and offices are considered before telecommunications services are acquired. The CIO’s office also created a new centralized management structure for ordering FTS 2000 telecommunications services to help eliminate agency purchases of redundant services. Under these new procedures, which are still being implemented, USDA has reduced the number of individuals throughout the department who are authorized to purchase new telecommunications services and equipment by about 77 percent from 332 to 75 and has required agencies to forecast their planned telecommunications purchases in advance to identify opportunities for savings. In September 1995, we reported that USDA had hundreds of stovepipe networks and systems, built by its agencies, that hinder information sharing. This situation evolved over time because USDA allowed its agencies to build their own separate stovepipe networks. Even though the department had often acknowledged that it had a pressing need to overcome this problem, we found that USDA agencies continued to spend hundreds of millions of dollars to develop redundant networks that could not interoperate and could not share information. We recommended in 1995 that USDA determine the interagency information sharing requirements necessary to effectively carry out the department’s crosscutting programs and plan networks in support of information and resource sharing needs. Despite some initial efforts to develop a draft information systems technology architecture, USDA has not yet identified business data needs and information sharing requirements for the department. 
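The reported cut in authorized purchasers can be checked with simple arithmetic. A minimal sketch follows; the 332 and 75 figures come from the report, and the helper function name is ours, not anything USDA or GAO used:

```python
def percent_reduction(before: int, after: int) -> float:
    """Return the percentage decrease from `before` to `after`."""
    return (before - after) / before * 100

# USDA reduced authorized telecommunications purchasers from 332 to 75.
reduction = percent_reduction(332, 75)
print(f"{reduction:.1f}%")  # prints "77.4%", consistent with "about 77 percent"
```

The same helper applies to any of the report's before/after counts, since each reduction claim is just (before - after) / before.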
The Clinger-Cohen Act of 1996 requires agency CIOs to develop, maintain, and facilitate integrated information systems architectures for evolving or maintaining existing information technology and acquiring new information technology to achieve the agency’s strategic goals and information resources management goals. An effective systems architecture should be derived by systematically and thoroughly analyzing and defining agencies’ target operating environments, including business functions, information needs and flows across functions, and system characteristics required to support these information needs and flows. However, according to a contractor’s January 1998 assessment, USDA’s initial architecture does not identify many of the kinds and/or types of data used in the department and does not provide a clear foundation for a seamless flow of information and interoperability among all agency systems that produce, use, and exchange information. According to the CIO’s office, work is still underway to capture data on information flows and needs, and this work will not be completed until September 1999. Concurrent with this ongoing work to identify data requirements, the CIO’s office has begun evaluating USDA’s current network structure. As a first step, the CIO’s office used a contractor’s network design tool to identify or map, for the first time, the department’s existing data networks so that redundancies may be eliminated and economies may be gained. Project officials said that when this work is complete in June 1998, the CIO will begin considering design alternatives for migrating to a departmentwide enterprise network that is intended to satisfy the connectivity needs of USDA information technology systems, processes, and users. However, USDA does not expect to complete its work identifying business data and information sharing requirements by that time.
USDA officials stated that while the department does not now and may never fully understand its business requirements, it can nonetheless design its new departmentwide enterprise network. By moving forward on an enterprise network without completing an architecture that defines USDA’s business data and information sharing requirements, USDA runs the risk of investing in a network that may not fully support its strategic business/program and operational needs. As we have reported in the past, agencies have experienced significant problems and cost increases by trying to design and build information and network systems without a systems architecture that defines business needs. For example, we found that incompatibilities among air traffic control systems cost the Federal Aviation Administration (FAA) $38 million to fix because it began building these systems without completing a systems architecture that defined requirements and standards governing information and data structures and communications. In another case, after the Internal Revenue Service (IRS) spent $3 billion attempting to modernize its tax systems without adequately defining its business needs in a systems architecture, it was unable to demonstrate benefits commensurate with these costs and had to restructure its modernization effort. In April 1996, we reported that USDA lacked adequate controls for ensuring that its telephones were properly used. As a result, the department, which spends tens of millions of dollars each year on commercial telecommunications services, had experienced hundreds of cases of telephone abuse in the Washington, D.C., area and was at risk of further abuse and fraud. We recommended that USDA determine its risk of and vulnerability to telephone fraud, waste, and abuse departmentwide and develop and expeditiously implement an appropriate plan with cost-effective controls to mitigate these risks. 
In the interim, we recommended that the department identify and implement cost-effective actions to minimize USDA’s exposure to telephone abuse. Following our report, USDA identified telephone abuse at the department as a material management control weakness in its fiscal year 1996 FMFIA report, and took a number of positive steps to reduce telephone abuse in USDA’s Washington, D.C., headquarters offices. For example, in October 1996, USDA began blocking collect calls in all of its Washington, D.C., area offices, and the hundreds of inappropriate collect calls from individuals in correctional institutions have been significantly reduced. Also, the CIO’s office implemented procedures for obtaining and reviewing the local carrier’s monthly telephone bill for the Washington, D.C., area to identify questionable long distance calls as well as other potentially inappropriate charges. After taking these actions, USDA reported in its fiscal year 1997 FMFIA report that corrective actions on telephone abuse were completed. However, the department has not determined the risk of and vulnerability to telephone fraud, waste, and abuse departmentwide as we recommended, nor has it developed and implemented an appropriate plan with cost-effective controls to mitigate these risks. The CIO official responsible for telecommunications operations told us no further action was taken on our recommendation because USDA believed that the risks of departmentwide telephone abuse and fraud would be better addressed by implementation of the department’s reengineered telecommunications management processes, which will allow agencies and offices to review and verify telephone billing information. However, as discussed earlier, work on this project is not complete and full implementation of the reengineered processes is not expected before September 1999. Therefore, until that time, USDA agencies and offices outside of the Washington, D.C., area remain at risk for telephone abuse and fraud. 
Although USDA agreed with our 1995 report on the need to resolve its telecommunications management weaknesses and has identified millions in potential savings, it lacks an effective action plan for implementing these necessary improvements. Specifically, USDA has not established a plan that (1) assigns clear responsibility and accountability for initiatives intended to correct the department’s telecommunications management weaknesses, (2) coordinates and integrates these initiatives, (3) sets priorities, time frames, and milestones for their completion, (4) establishes procedures for monitoring activities to ensure they are carried out, and (5) allocates necessary resources. In December 1997, the CIO issued a plan of action for resolving the department’s long-standing problems managing information technology. This plan, which was prepared in response to the Secretary’s May 1997 request, discusses telecommunications as one of five major areas and provides a general description of the goals and objectives of ongoing initiatives to reengineer and improve departmentwide telecommunications management and lists tasks associated with these efforts. However, the plan does not adequately describe how needed corrective actions will be implemented, nor does it specify clear time frames, milestones, and resources associated with all these efforts. Specifically, while the plan lists tasks associated with the telecommunications improvement initiatives, it does not describe how USDA intends to carry out all these tasks. For example, the plan lists a project to consolidate and optimize telecommunications services in the Washington, D.C., area to provide more effective and economical telecommunications systems. 
But the plan provides no information describing the project’s activities and how these activities will need to be integrated with numerous other planned or ongoing efforts to consolidate and optimize services, nor does it discuss milestones, time frames, and resources necessary for carrying them out. In addition, the CIO’s action plan also does not designate a specific senior-level official with overall, day-to-day responsibility, authority, and accountability for managing and coordinating all of the department’s separate telecommunications initiatives. Instead, the plan generally assigns responsibility for tasks to the CIO’s office and other USDA agencies and offices, but does not identify responsible individuals, provide them requisite authority, and make them accountable for ensuring that these tasks are fully carried out. For example, while the CIO’s Associate Director for Telecommunications Services and Operations acknowledged having responsibility within the CIO’s office for many corrective actions, this official said she did not have the overall authority necessary to direct and coordinate departmentwide action on all telecommunications improvements and cost-savings efforts. Instead, she could only attempt to get agencies and offices to act on such efforts through a process of consensus-building. Without an action plan that establishes clear lines of responsibility, authority, and accountability for directing and implementing departmentwide telecommunications improvements, many of USDA’s corrective actions will likely not be fully implemented. After more than 2 years, USDA has not fully implemented our recommendations. It continues to miss identified opportunities to achieve the total estimated $70 million in annual savings and cannot ensure that telecommunications resources are cost-effectively managed across the department. 
It has undertaken some initiatives that have saved several million dollars, but these initiatives are uncoordinated, poorly managed, and do not address all of USDA’s telecommunications weaknesses. Further, USDA has not established an overall plan or strategy for directing and integrating these separate improvement efforts and for ensuring that critical corrective actions are cost-effectively and promptly implemented throughout the department. A major factor contributing to this situation is that no one at USDA has been given overall responsibility, authority, and accountability for doing so. We recommend that the Secretary of Agriculture direct the CIO to complete and implement a departmentwide corrective action plan that fully addresses all of our recommendations for resolving the department’s telecommunications management weaknesses and achieving savings wherever possible. In addition, we recommend that the Secretary, in consultation with the CIO, assign a senior-level official with day-to-day responsibility and requisite authority for planning, managing, and overseeing implementation of this plan and for ensuring that all telecommunications management improvements and cost-savings activities are effectively and fully carried out. We further recommend that the Secretary of Agriculture direct the CIO to periodically report to the Secretary on the department’s progress (1) implementing this corrective action plan and (2) achieving the estimated $70 million in annual savings identified by the department. USDA’s CIO provided written comments on June 15, 1998, on a draft of this report. USDA’s comments are summarized below and reproduced in appendix II. USDA generally agreed with our findings, conclusions, and recommendations.
Specifically, USDA agreed that it has not fully implemented recommendations in our previous reports aimed at resolving the department’s telecommunications management weaknesses and agreed that the department can improve by placing greater emphasis on planning and coordination of its telecommunications program. USDA also stated that the department has made real progress in telecommunications management and has achieved significant savings, but did not disagree that USDA continues to miss savings opportunities and cannot ensure that telecommunications resources are cost-effectively managed across the department. In its comments, USDA provided details on actions it is taking to address telecommunications problems we identified, but did not specifically state whether or how the department plans to implement our recommendations. In subsequent discussions with USDA, the Deputy CIO stated that the department plans to fully address and implement all our recommendations. USDA also raised several additional matters, none of which affect our conclusions and recommendations and thus are not discussed here. These matters and our responses are discussed in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. At that time we will send copies to the Secretary of Agriculture; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs, the Senate and House Committees on Appropriations, and the House Committee on Government Reform and Oversight; the Director of the Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-6408 if you or your staff have any questions concerning this report. I can also be reached by e-mail at willemssenj.aimd@gao.gov. Major contributors to this report are listed in appendix III. 
Our objective was to determine what actions USDA has taken to address the telecommunications management problems we identified in 1995 and 1996 and to what extent these problems have been resolved. To address our objective, we reviewed studies, reports, plans, and other documentation describing USDA’s actions to address our recommendations for (1) correcting telecommunications management weaknesses, (2) identifying and acting on opportunities to consolidate and optimize FTS 2000 telecommunications services, (3) planning networks in support of information and resource sharing needs, and (4) resolving telephone abuse and fraud. We also interviewed CIO, CFO, and agency officials to confirm our understanding of actions taken by the department and to identify whether the actions were complete, underway, or planned. We did not independently verify the accuracy of USDA’s overall telecommunications costs or projected cost savings. To identify USDA’s efforts to improve telecommunications management, we examined departmental responses to our report recommendations, USDA FMFIA and interagency task force reports, and reengineering and other studies. We also reviewed project plans, status reports, and other documentation pertaining to telecommunications management improvement initiatives to identify the status of these actions, and we discussed plans for completing them with CIO, agency, and project team officials who are responsible for carrying them out. We also reviewed other actions taken by USDA to address our recommendations on specific telecommunications management problem areas. To assess the effectiveness of USDA efforts to disconnect telecommunications services at closed offices, we discussed the implementation of revised policy in this area with CIO officials and reviewed procedures followed at two agencies that recently closed offices. 
In addition, we met with CIO, agency, and contractor officials involved with USDA’s 1996 inventory pilot and reviewed project reports and other documentation to determine the results and savings achieved. We also discussed USDA’s ongoing one-time audit and procedures used for selecting and auditing billing invoices with officials at USDA’s National Finance Center. To test the thoroughness of the audit, we randomly selected several invoices and discussed actions taken to verify billing data on these invoices with the appropriate agency officials. We also reviewed reports and billing data associated with other cost-savings efforts to eliminate unused FTS 2000 e-mail boxes and met with CIO and agency officials to discuss current and future plans for establishing telecommunications inventories. To assess efforts by USDA to consolidate and optimize FTS 2000 telecommunications services, we reviewed reports and examined billing data showing the results of USDA’s Initiative 6 project. We also discussed the overall results of this initiative with CIO and agency officials. We examined documentation and billing data associated with USDA’s recent effort to reactivate Initiative 6 and discussed the status of efforts to achieve savings with CIO and agency officials. To assess departmental requirements to preclude agencies from purchasing redundant FTS 2000 telecommunications services, we reviewed procedures established by the department under the November 1996 moratorium and new centralized management structure for ordering FTS 2000 services and discussed their impact on purchases of redundant service with CIO officials. To assess USDA’s efforts to plan integrated networks that address the department’s information and resource sharing needs, we reviewed reports showing agency network purchases. 
We also reviewed USDA’s information systems technology architecture and discussed it with CIO officials to determine the extent to which the architecture defines information sharing needs. Finally, we reviewed the department’s plans for implementing an enterprise network, including the interim results of a contractor’s network design evaluation of telecommunications traffic and performance, and discussed the extent to which USDA’s enterprise network plans address departmental information and resource sharing needs. To assess USDA’s efforts to address telephone abuse and fraud in the Washington, D.C., area, we reviewed status reports, internal memos, and other documentation describing actions implementing collect call blocking and establishing billing review procedures. We also discussed these actions with CIO officials who monitor telephone abuse in Washington, D.C., and reviewed documentation on the results of these monitoring efforts to determine whether USDA actions were effective in reducing improper collect calls from correctional institutions and other forms of telephone abuse. In addition, we discussed the extent to which USDA had addressed the risks of telephone abuse and fraud departmentwide with the CIO official responsible for telecommunications operations. To confirm our understanding of USDA actions to address our recommendations and resolve telecommunications management weaknesses, we discussed the results of our work with USDA’s CIO, as well as with representatives from the CIO and CFO offices. We performed our audit work from August 1997 through April 1998, in accordance with generally accepted government auditing standards. Our work was done at USDA headquarters in Washington, D.C.; USDA’s National Finance Center in New Orleans, Louisiana; and USDA Telecommunications Services and Operations offices in Fort Collins, Colorado. 
We also met with contractor representatives who conducted the inventory pilot in Annapolis, Maryland, and we interviewed telecommunications officials at two agency offices where telecommunications and network planning activities are administered: the Animal and Plant Health Inspection Service headquarters in Riverdale, Maryland, and the Agricultural Research Service in Greenbelt, Maryland.
The following are GAO’s comments on the Department of Agriculture’s letter dated June 15, 1998.
1. We modified the report as appropriate to more accurately reflect the agency official’s title.
2. Regarding the inventory pilot, USDA stated that the report does not mention that one agency completed a more thorough analysis of the lines and found that many of the lines identified by the contractor as not in use or inactive were in fact needed for various agency mission requirements. Therefore, USDA stated that the $200,000 in annual overbillings identified by the contractor may have been overstated. While we agree that there may have been cases where the contractor’s findings were overstated, USDA did not investigate many of the overbillings identified by the contractor to determine the total actual savings possible, nor did it act to ensure that all unneeded or unused services were eliminated.
3. Our statement is accurate. The department explains that it gained valuable experience through Initiative 6, but does not dispute the facts that USDA took no action on nearly half of the cost-savings opportunities identified under Initiative 6 and that the initiative was terminated.
4. USDA agreed that it is desirable to develop an enterprise network based on a comprehensive business architecture and contends that it currently has a high-level business architecture in place that is based on USDA’s strategic plan and forms the basis for the definition of requirements for an enterprise network.
USDA also strongly believes that further development of the department’s business architecture and development of the enterprise network must continue as a coordinated and integrated effort and, given that the current telecommunications infrastructure is fragmented and expensive to maintain, it does not make business sense to slow the pace of developing an enterprise network. Therefore, as the department moves forward on its enterprise network, USDA stated that it intends to update the architecture to include additional information on business data needs and information sharing requirements and to reassess telecommunications requirements as a matter of ongoing business practice. USDA’s position is inaccurate and misses the point of our recommendation. The department’s current architecture is incomplete. For example, it does not identify many of the kinds and/or types of data used in the department and does not provide a clear foundation for a seamless flow of information and interoperability among all agency systems that produce, use, and exchange information. As a result, it cannot provide an adequate basis for defining requirements for an enterprise network. By moving forward on an enterprise network without completing the architecture, USDA risks repeating past mistakes, i.e., investing in telecommunications that do not effectively support the department’s strategic business/program and operational needs.
Troy G. Hottovy, Senior Information Systems Analyst
Pursuant to a congressional request, GAO reviewed the Department of Agriculture's (USDA) efforts to improve its management of telecommunications resources and act on opportunities to achieve savings. GAO noted that: (1) USDA has taken positive steps to begin correcting its telecommunications management weaknesses--improvements that the department says could reduce its $200 million-plus reported annual investment in telecommunications by as much as $70 million each year; (2) for example, USDA conducted a departmentwide reengineering study and is beginning to test a redesigned approach for managing telecommunications resources; (3) USDA has also taken action to eliminate some redundant services and reduce costs; (4) however, USDA has not achieved significant cost savings or management improvements because many of the department's corrective actions are incomplete or inadequate; (5) specifically, USDA has not: (a) established the sound management practices necessary for ensuring that telecommunications resources are cost-effectively managed and payments for unused, unnecessary, or uneconomical services are stopped; (b) consolidated and optimized telecommunications to achieve savings where opportunities exist to do so; (c) adequately planned integrated networks in support of information sharing needs; and (d) determined the extent to which the department is at risk for telephone abuse and fraud and acted to mitigate those risks nationwide; (6) further, it is unclear how and when these needed corrective actions will be implemented because the department has not established an effective action plan or strategy for addressing GAO's recommendations with timeframes, milestones, and resources for making improvements; and (7) a major factor contributing to this situation is that no one at USDA has been given overall responsibility, authority, and accountability for fixing USDA's long-standing telecommunications management problems.
Guam and CNMI natives are U.S. citizens, and many serve in the U.S. military. Upon discharge from the U.S. military, veterans can, based on their eligibility, obtain health care at VA facilities or from non-VA providers through VA sharing agreements. Guam is a 212-square-mile island located roughly 6,000 miles west of the continental United States and 1,500 miles southeast of Japan. Guam was ceded to the United States in 1898 and became a territory in 1950. Since its cession, Guam has held important military significance for the United States, given its strategic location in the Pacific Ocean. In 1995, the population of Guam was estimated at 149,249. As of fiscal year 1997, there were about 9,400 veterans living on Guam and CNMI and about 20,000 military beneficiaries living on Guam. CNMI is a self-governing commonwealth of the United States. The people of CNMI were granted U.S. citizenship in 1986. CNMI consists of 14 islands with a total land area of about 184 square miles; its main island of Saipan is located about 100 miles northeast of Guam. In 1995, CNMI’s population was estimated at 59,913 persons. While CNMI is currently considered part of VA’s domestic program, the Director of VA’s Health Administration Center, which administers the Foreign Medical Program, recently requested a legal opinion from VA’s General Counsel to determine whether veterans residing in CNMI are entitled to benefits under VA’s domestic program or whether they should be covered by VA’s Foreign Medical Program. At the time our report was issued, however, VA’s General Counsel had not yet made a determination on the legal status of veterans residing on CNMI. Figure 1 illustrates the location of Guam and CNMI in relation to the U.S. mainland and Japan. VA provides health care services to its veterans on a priority basis, depending on factors such as the presence and extent of a service-connected disability, income level, duration of military service, and type of discharge from the military.
VA assigns each veteran to one of seven priority groups it established for providing health care: (1) veterans with service-connected disabilities rated at 50 percent or higher; (2) veterans with service-connected disabilities rated at 30 or 40 percent; (3) former prisoners of war and veterans with service-connected disabilities rated at 10 or 20 percent; (4) catastrophically disabled veterans and veterans receiving increased nonservice-connected disability pensions because they are housebound or need the aid and attendance of another person to accomplish the activities of daily life; (5) veterans unable to defray the cost of medical care; (6) all other veterans in the so-called “core” group, including veterans of World War I and veterans with a priority for care based on presumed environmental exposure; and (7) all other veterans. VA recently implemented a change that restricted access to VA health care for some veterans in the Pacific region. In October 1997, VA began phasing out the medical care offered to Pacific region veterans in priority group 7—veterans who have no compensable service-connected disabilities and annual incomes above the statutory threshold. This change affected veterans residing in VA’s Pacific Islands region, including about 30 veterans on Guam. According to VA officials, this change was made as a result of increasing medical costs and declining budgets; these officials stated that VA needed to make this change in order to continue serving Pacific region veterans with service-connected disabilities. In 1996, VA created 22 Veterans Integrated Service Networks (VISN) to serve as the basic budgetary and decisionmaking units in VA’s health care system for veterans within their geographic boundaries. VISN-21 has geographic responsibility for Northern California and VA’s Pacific Islands region. 
It relies on the VA Medical and Regional Office Center (VAMROC)—located in Honolulu, Hawaii—to oversee health care and other veterans’ benefits for veterans living in the Pacific Islands region of Guam, CNMI, the Hawaiian Islands, and American Samoa. In addition to its outpatient clinic on Guam, VA has a sharing agreement with the U.S. Naval Hospital to provide inpatient, specialty outpatient, and ancillary health care services to veterans. The U.S. Naval Hospital opened on Guam in 1954. Its primary mission is to provide medical support to forward-deployed military personnel and U.S. ships in the Pacific and to respond to wartime medical casualties. It also responds to medical emergencies and disasters, such as caring for typhoon victims and survivors of the recent Korean Airlines plane crash on Guam. In 1996, responding to a congressional mandate, the U.S. Navy studied the possibility of establishing a VA inpatient facility within the U.S. Naval Hospital on Guam to serve the health care needs of veterans. The Navy analyzed VA inpatient admissions at the U.S. Naval Hospital from fiscal years 1992 through 1995 and determined that, on average, less than one VA beneficiary received inpatient care at the hospital each day. The Navy also found that these few patients were integrated into normal hospital operations and were cared for in the hospital location most appropriate to their medical condition. The Navy concluded that VA inpatient workload data did not support the establishment of a veterans’ inpatient facility at the U.S. Naval Hospital. (See app. II for the March 1996 Navy report.) Veterans residing on Guam and CNMI receive VA health care through a network of providers, including outpatient care provided through the VA clinic, inpatient and specialty care provided at the U.S. Naval Hospital, and other specialty health care through Guam’s private sector. 
When certain care, such as cardiac care, is not available on Guam, veterans are sent via aeromedical evacuations to VA, military, or private hospitals in Hawaii or the continental United States. Although veterans we spoke with on Guam would prefer that the U.S. Naval Hospital provide cardiac care to avoid medical evacuations, the annual cardiac workload does not meet DOD’s minimum workload requirement for this specialty. According to DOD officials, this requirement is needed to maintain the skill level of cardiac specialists and ensure that quality of care is not compromised. Veterans seeking health care on Guam or CNMI typically enter the VA health care system through VA’s outpatient clinic. If they cannot receive the needed treatment there, they are referred to one of several providers, depending on the type and availability of care needed. According to VAMROC records and officials, during fiscal years 1995 through 1997, VA spent an average of $1.2 million per year to provide health care to Guam and CNMI veterans. The VA outpatient clinic is staffed by one full-time internal medicine physician, one part-time psychiatrist under contract to VA, one full-time psychiatric clinical nurse, and two administrative staff. As the primary point of entry for veterans seeking medical care, the clinic conducts eligibility determinations and provides outpatient services, such as primary care and psychiatric treatment. According to veteran satisfaction surveys from 1995 through 1997, nearly all veterans were very or extremely satisfied with VA care at the clinic. Over the past 3 years, the number of veterans seeking care through VA’s outpatient clinic on Guam has increased by 24 percent—from 562 veterans in fiscal year 1995 to 697 in fiscal year 1997. 
According to VA’s outpatient clinic administrator, this increase is partially due to increased outreach by VA and veteran service organizations on Guam to inform veterans of available health care and encourage them to use the clinic. When veterans on Guam or CNMI require inpatient, specialty outpatient, or ancillary health care services, such as general surgery, preventive medicine, or pharmacy, VA refers them to the U.S. Naval Hospital. In emergency situations, veterans may be treated in or directly admitted to the hospital. During fiscal years 1995, 1996, and 1997, the numbers of veteran inpatient admissions to the U.S. Naval Hospital were 43, 42, and 36, respectively, representing an average of less than one veteran inpatient admission per week. The hospital’s current total bed capacity is 146 beds (29 active and 117 inactive), with an expanded wartime capacity of 266. The hospital currently provides a number of surgical, medical, and ancillary services. (See table 1.) The U.S. Naval Hospital on Guam, in some instances, also uses telemedicine as a way to enhance the health care it provides to both military beneficiaries and veterans. Telemedicine is used to transfer patient data—via text, image, and video—among DOD military facilities. The U.S. Naval Hospital is participating in telemedicine with Tripler Army Medical Center in Hawaii in areas such as cancer tumor diagnosis, telepathology, and teleradiology. For example, U.S. Naval Hospital and Tripler physicians meet weekly via teleconferencing to discuss medical cases for U.S. Naval Hospital patients with tumors and examine possible treatment options using current data, which are exchanged over a computer network. If VA’s outpatient clinic or the U.S. Naval Hospital cannot readily provide care to a veteran, VA may refer the veteran to the private medical sector on Guam for treatment. 
For example, VA occasionally refers veterans to physicians on Guam for ear, nose, and throat care because the demand for this care is high and the U.S. Naval Hospital’s outpatient specialty clinic sometimes does not have an adequate number of physicians available to treat these conditions. In addition, the U.S. Naval Hospital shares ancillary services such as magnetic resonance imaging and other specialized equipment with the island’s one private hospital, Guam Memorial Hospital. When veterans require health care that is not available on Guam, VA will send them (as DOD does for its military beneficiaries) via a military or commercial aircraft to a VA, military, or private hospital in Hawaii or the continental United States. Regularly scheduled military evacuation flights are provided twice per week from Guam to Hawaii or the continental United States. Because of the routing military evacuation aircraft follow, it can take over 24 hours for the veteran to reach the destination; however, if the condition requires immediate medical attention, a special military medical evacuation can be arranged. In addition to military aircraft flights, medical evacuations via commercial airlines are available to veterans on Guam. For example, according to VA’s outpatient clinic administrator, a commercial airline is used when a veteran does not possess a U.S. passport that would allow entry into Japan, which is necessary on military medical evacuation flights. On nonstop commercial flights, it takes about 7 hours for veterans to reach Hawaii from Guam. During our discussions with representatives of veterans organizations about VA health care on Guam, they told us that medical evacuations were inconvenient because of the lengthy flight times and the time evacuees spent away from their families. These representatives told us that veterans would prefer to have cardiac surgery available at the U.S. 
Naval Hospital to eliminate the need for evacuations for cardiac care. Establishing a cardiac surgery capability at the U.S. Naval Hospital, however, would require much more demand for these procedures than currently exists in order to provide sufficient quality. According to DOD requirements for cardiac surgical procedures, such as coronary bypass and cardiovascular procedures, standards set by the American Board of Cardiothoracic Surgeons and the Health Care Financing Administration require that a hospital perform or expect to perform a minimum of 150 surgical procedures per year to begin providing or maintain this medical specialty. According to DOD officials, these standards are necessary to ensure enough workload to maintain the specialists’ skill level and the resultant quality of care. Overall, the combined military beneficiary and veteran inpatient workload for cardiac care on Guam does not meet DOD requirements for establishing a cardiac surgery unit at the U.S. Naval Hospital on Guam to ensure quality of care. According to VA and DOD records, in fiscal years 1995 through 1997, a total of 1,140 medical evacuations were provided—1,071 for military beneficiaries and 69 for veterans. Cardiac care, which is the most frequently cited reason for medical evacuations, accounted for 15 percent of these evacuations—on average, about 56 per year. The remaining 85 percent were for various medical reasons, including orthopedic, neurological, renal, oncology, and psychiatric treatment. While representatives of veterans organizations on Guam expressed concern about the future availability of health care on Guam, DOD and VA officials believe that VA’s network for providing outpatient care, inpatient care, and medical evacuations will continue into the future even if there is an increase in demand for these services. 
With the aging of the veteran population, if current treatment patterns (in terms of patient treatment rates and average lengths of stay) do not change, these veterans’ projected use of inpatient health care could increase from the current one-half bed per day to a little over one bed per day, on average, by the year 2010. If veteran demand for health care on Guam and CNMI mirrored one of the highest utilization rates in the VA system, then use of inpatient care could increase to 14 beds per day by 2010. However, given its current capacity and workload and a continued sharing agreement with VA, the U.S. Naval Hospital should be able to absorb even this unlikely increase in veteran demand for inpatient care. In our discussions with representatives of veterans organizations on Guam, concern was raised about potential downsizing at the U.S. Naval Hospital. This concern may stem from the fact that since 1993, the U.S. military presence on Guam has downsized approximately 17 percent in active duty personnel and dependents. In addition, other than health care provided by the VA and U.S. Naval Hospital health care systems, health care options on Guam are limited. For example, there is only one other hospital on Guam. However, both VA and DOD officials told us that veterans will continue to have access to outpatient and inpatient care through VA, the U.S. Naval Hospital, and the private sector on Guam. VA and DOD recently renewed their sharing agreement at the U.S. Naval Hospital for an additional 5 years. The U.S. Naval Hospital’s budget is projected to remain stable through fiscal year 2001, and hospital officials stated that they do not plan to reduce the total bed capacity or the number of medical specialties currently available to veterans at the hospital. Finally, DOD and VA officials expect that necessary medical evacuations—both commercial and military—will continue to be available to Guam and CNMI veterans. 
Although our projections show a slight decrease in the Guam and CNMI veteran population from 1990 through 2010, these veterans may demand more VA health care in the future. In 1990, the combined veteran population on Guam and CNMI was 8,526, according to U.S. Census data. Using VA’s veteran population projection methodology, our analysis indicates that this veteran population peaked at about 9,400 veterans in 1996 and will steadily decline to 8,406 in 2010. This represents a 1.4 percent decrease from 1990 and an 11 percent decrease from its peak population in 1996. Although Guam and CNMI veterans are relatively young compared to the veteran population nationwide, they will likely require more frequent and intensive medical care as they age over the next decade. In 1990, only about 41 percent of the veterans on Guam and CNMI were older than 45 years; by fiscal year 2010, over three-quarters—or about 77 percent—of these veterans are projected to be 45 years or older. As indicated by historical inpatient utilization at the U.S. Naval Hospital on Guam, veterans aged 35 to 44 had 2.6 inpatient admissions per 1,000, while veterans aged 45 to 54 had 4.4 inpatient admissions per 1,000. Corresponding lengths of stay also increased. To estimate the potential increase in veteran demand for VA inpatient health care in the future, we analyzed a high-demand scenario and a low-demand scenario. Our low-demand scenario assumes that the current level of veteran demand for VA inpatient care on Guam—one of the lowest utilization rates in the VA system—will continue into the future, adjusted for aging of the Guam and CNMI veteran population. Under this scenario, we estimate that by the year 2010, these veterans could potentially need 1.01 inpatient beds per day, on average, up from the 1997 utilization of about 0.5 beds per day, on average. 
Our high-demand scenario assumes that the veteran demand for VA inpatient care on Guam would mirror that on Puerto Rico—which has one of the highest utilization rates in the VA system—adjusted for aging of the Guam and CNMI veteran population. Under this scenario, we estimate that by the year 2010, Guam and CNMI veterans could potentially need up to 14 inpatient beds per day, on average. With a current capacity of 146 beds—consisting of 29 active and 117 inactive beds—U.S. Naval Hospital officials believe that the hospital could handle even the upper limit of a projected increase in future veteran inpatient workload. In fiscal year 1997, the hospital needed, on average, about 23 beds to care for all its patients, including veterans. U.S. Naval Hospital officials told us that the hospital could handle even the highest potential veteran inpatient need, projected under the high-demand scenario of up to 14 inpatient beds by the year 2010. Although only 29 beds are currently staffed and equipped, U.S. Naval Hospital officials are confident that—using VA reimbursements for veteran inpatient care—they could activate beds and hire additional staff to care for these veterans, if needed. U.S. Naval Hospital officials told us that their hospital has historically met VA’s veteran inpatient and specialty outpatient care needs with existing staffing. Further, DOD officials explained that, while unlikely, the only factors that may limit the hospital’s ability to provide health care services to veterans would be (1) war, (2) lack of providers for specialized care, (3) operational commitments, (4) downsizing of staff, (5) cuts in funding, and (6) increased military presence on Guam. Apart from a large conflict or war, which they could not predict, Navy officials felt confident that they had or could obtain sufficient resources to handle any likely increase in veteran inpatient workload. 
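The bed-day arithmetic behind these demand scenarios can be sketched simply: multiply the projected population in each age group by its admission rate per 1,000 and its average length of stay, sum the resulting bed days, and divide by 365. The sketch below is illustrative only; the population split and lengths of stay are assumed placeholders, not the actual inputs to VA's inpatient planning model.

```python
# Sketch of a beds-per-day calculation like the one underlying the scenarios:
# beds/day = sum over age groups of
#   (population * admissions per 1,000 / 1,000 * avg length of stay) / 365
def beds_per_day(age_groups):
    total_bed_days = 0.0
    for population, admits_per_1000, avg_length_of_stay in age_groups:
        admissions = population * admits_per_1000 / 1000.0
        total_bed_days += admissions * avg_length_of_stay
    return total_bed_days / 365.0

# Hypothetical age mix for the ~8,400 veterans projected for 2010, using the
# Guam admission rates cited in the text (2.6 and 4.4 per 1,000) and assumed
# average lengths of stay of 4.0 and 5.5 days.
low_demand = [(1900, 2.6, 4.0), (6500, 4.4, 5.5)]
print(round(beds_per_day(low_demand), 2))  # prints 0.49
```

Under the high-demand scenario, substituting Puerto Rico's much higher admission rates and lengths of stay into the same arithmetic is what drives the estimate up to roughly 14 beds per day.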
According to VA officials, establishing a 14-bed VA inpatient facility could range from $3.7 million to $6.9 million in construction costs, depending on whether the facility is renovated or newly constructed. In addition, it would cost at least $4 million annually to operate such a facility. Further, VA’s average annual cost to purchase the care equivalent to the 14 inpatient veteran beds from the U.S. Naval Hospital under the current sharing agreement between VA and DOD is about $3.7 million. According to VA officials, if space were available within the U.S. Naval Hospital and no significant upgrades were required by the year 2010, such as adding structural support to make the facility safer during earthquakes, the estimated cost to renovate approximately 12,000 square feet of space for a VA inpatient facility would be about $3.7 million. This existing space would have to be modified to make it suitable for inpatient health care activities. However, if by the year 2010, space were not available within the U.S. Naval Hospital or significant seismic upgrades were required, the estimated cost to construct and outfit a 14-bed VA inpatient facility adjacent to the hospital would be about $6.9 million. If a future engineering assessment concluded that a seismic upgrade were required, VA officials told us that renovating the space within the U.S. Naval Hospital could cost more than constructing a new facility. To determine the average annual operating cost of a possible new veterans’ inpatient facility at the U.S. Naval Hospital, VA officials estimated that a 14-bed inpatient facility would need four physicians and 23 other staff (primarily nurses), at an annual cost of $2.8 million. Other annual operating costs would include ancillary services; other expenses, such as laundry and food service; housekeeping, maintenance, and utilities; and overhead. When added together, staffing and other operating costs total an estimated annual operating cost of at least $4 million. 
Further, we estimated VA’s average annual cost to purchase the care equivalent to the 14 inpatient veteran beds from the U.S. Naval Hospital under the existing sharing agreement between VA and DOD. Currently, when veterans obtain inpatient care at the U.S. Naval Hospital, VA reimburses the U.S. Naval Hospital for this care based on actual veteran admissions. Based on VA’s historical expenditures per veteran admission, by age category, we estimated that, under the high-demand scenario, VA’s annual costs to deliver care to these same veterans would be about $3.7 million. In its March 1996 report, the Navy concluded that a VA inpatient wing was not needed due to the low veteran inpatient workload, and our recent work confirms that the veteran inpatient workload averages less than one bed per day. Also, in the unlikely event that Guam and CNMI veteran demand for services increased significantly, U.S. Navy officials believe that the U.S. Naval Hospital will be able to meet even the highest projected workload. Last, constructing a new VA inpatient facility or renovating space within the U.S. Naval Hospital would cost from $3.7 to $6.9 million, with additional annual operating costs of at least $4 million. While veterans consider evacuations inconvenient and would like the U.S. Naval Hospital on Guam to offer cardiac surgery procedures to reduce the number of evacuations, the veteran and military beneficiary population on Guam and CNMI has required far fewer than the minimum 150 procedures per year recommended by DOD guidance to ensure acceptable quality of care. Without sufficient workload to maintain the skills of the surgeon and other supporting team members, the U.S. Naval Hospital on Guam would not be able to offer cardiac surgery and ensure quality of care. We provided a draft of this report to DOD and VA for official comments. DOD and VA agreed with the report’s findings. DOD also provided one technical change, which we incorporated. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 13 days from the date of this letter. At that time, we will send copies to the Secretary of Veterans Affairs, the Secretary of Defense, and interested congressional committees. We will also make copies available to others upon request. If you have any questions about this report, please call me at (202) 512-7101 or Ronald J. Guthrie, Assistant Director, at (303) 572-7306. Other major contributors to this report were Lisa P. Gardner, Dawn Shorey, Paul Reynolds, Deborah Edwards, Alicia Cackley, Karen Sloan, and Sylvia Shanks. We were asked to (1) describe how VA currently meets Guam and CNMI veterans’ health care needs, (2) estimate these veterans’ possible future demand for health care and assess VA’s ability to meet this demand, and (3) estimate the cost to establish a veterans’ inpatient ward at the U.S. Naval Hospital on Guam. To determine how VA meets Guam and CNMI veterans’ health care needs, we met with and obtained information from DOD and VA officials in Washington, D.C.; Hawaii; and Guam. We also reviewed and analyzed relevant laws and regulations pertinent to VA’s responsibility and authority to provide care to veterans on Guam and CNMI. Although the legal opinion from VA’s General Counsel regarding the status of CNMI veterans—whether they are entitled to benefits under VA’s domestic program or should be covered by VA’s Foreign Medical Program—is still pending, the decision would not affect the outcome of our analyses in this report. To learn more about VA and DOD policies and practices for providing health care to veterans on Guam and CNMI, we contacted VA and DOD officials stateside and on Guam. Specifically, we contacted VA officials at VA Headquarters in Washington, D.C.; VISN-21 in northern California; the VA Medical and Regional Office Center in Honolulu, Hawaii; and VA’s outpatient clinic on Guam. 
We contacted DOD officials at the Navy’s Bureau of Medicine and Surgery in Washington, D.C.; the U.S. Pacific Command in Hawaii; Tripler Army Medical Center in Hawaii; and the U.S. Naval Hospital on Guam. We also reviewed VA and DOD documents on veteran health care policies, practices, and eligibility as well as budget data. We compiled and analyzed (1) the cost of health care for the last 3 fiscal years provided at the VA outpatient clinic on Guam and the U.S. Naval Hospital, (2) referrals to private sector providers, and (3) medical evacuations to Hawaii or the continental United States. We further analyzed the frequency and medical reasons for medical evacuations provided to veterans and military beneficiaries on Guam. We did not verify the reliability of VA or U.S. Naval Hospital medical evacuation data. We also met with officials of the Government of Guam Veterans Affairs Office and with Guam representatives of the Veterans of Foreign Wars, Vietnam Veterans of America, and American Legion to better understand and describe veterans’ concerns about their VA care. During our meeting with the Guam Veterans Affairs Office, we reconciled differences between its veteran population estimate and the estimate from the Guam 1990 Census data. We also reviewed Guam VA outpatient clinic satisfaction survey results for the last 3 years. We further met with Guam Memorial Hospital officials to discuss health care issues on Guam and the hospital’s accreditation status. To assess VA’s ability to meet our projected demand, we interviewed VA, DOD, Air Force, and Navy officials and reviewed DOD staffing estimates and U.S. Naval Hospital budget projections. To determine Guam and CNMI veterans’ possible demand for health care in the future, we estimated the current veteran population on Guam and CNMI and analyzed possible changes in level of veteran demand for care and patterns of inpatient utilization. 
We projected Guam’s total veteran population to the year 2010 by adjusting 1990 Census data to reflect the aging of the current population since 1990 and recent and expected future separations from the military. We relied on survival data obtained from the Government of Guam Department of Public Health and Social Services and separation data obtained from VA’s Office of Policy and Planning for this projection. To estimate how much VA inpatient care veterans on Guam and CNMI could potentially require over the next decade, we developed two different health care demand scenarios, based on actual low and high veteran inpatient utilization rates within the VA system. These scenarios represent a range of potential demand and are not intended to predict a specific future demand. We then used VA’s inpatient planning model and Puerto Rico and Guam current veteran inpatient utilization rates (patient treated rates and average lengths of stay) to compute total bed days of care and inpatient bed requirements for both the low- and high-demand scenarios. Both scenarios age the veteran population through the year 2010 and provide for the same type of hospital beds (medical, surgical, and intensive care) that are currently available at the U.S. Naval Hospital. To estimate the cost of a VA inpatient facility at the U.S. Naval Hospital on Guam, VA prepared two cost estimates for a 14-bed VA inpatient facility within the U.S. Naval Hospital on Guam—one estimate was for renovating the existing space, the other was for new construction. These estimates provided 10 medical or surgical beds and four intensive care beds, all fully outfitted and within 11,588 square feet (VA’s space planning criteria). VA officials adjusted these renovation and construction cost estimates to reflect that construction on Guam is twice as expensive as in the continental United States. 
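The population projection described above (aging the 1990 Census base with survival data and adding new military separations) can be sketched with a single aggregate survival rate and a constant annual flow of separations; both rates below are assumed placeholders, whereas the actual projection applied survival data by age.

```python
# Simplified cohort-style projection: each year the existing veteran
# population is reduced by mortality and new separations are added.
# The actual methodology applies age-specific survival rates.
def project_veteran_population(initial_pop, annual_survival_rate,
                               annual_separations, years):
    population = initial_pop
    for _ in range(years):
        population = population * annual_survival_rate + annual_separations
    return population

# Sanity check: with no deaths and no separations, the population is flat.
print(project_veteran_population(8526, 1.0, 0, 20))  # prints 8526.0
```

With a survival rate just under 1.0 and a modest flow of separations, this structure reproduces the pattern in the text: a population that rises while separations outpace mortality, peaks, and then slowly declines as the cohort ages.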
Annual operating costs for either VA inpatient facility would consist of staffing; ancillary services; other expenses, such as laundry and food service; housekeeping, maintenance, and utilities; and overhead. VISN-21 estimated staffing costs for 27 VA staff, including 4 physicians. We estimated ancillary and other expenses using the U.S. Naval Hospital average costs per bed day of care in 1997 multiplied by the projected number of VA inpatient bed days of care for 2010. We also estimated housekeeping, maintenance, and utilities based on the U.S. Naval Hospital costs per square foot multiplied by the square footage of the proposed VA facility. We included overhead costs equal to 10 percent of total operating costs. Both a VA and a Naval Hospital official reviewed the methodologies we used to estimate ancillary and other costs and concluded that the approaches would result in a conservative estimate of the potential costs. Last, we computed the cost to obtain inpatient care required by the projected high- and low-demand scenarios under the existing VA and DOD sharing agreement at the U.S. Naval Hospital. Our estimate was derived by calculating a 3-year historical average cost per veteran admission at the U.S. Naval Hospital by age category. The resulting historical average cost by age group was then applied to the high- and low-demand veteran admissions by age group in 2010. VA and Navy officials reviewed the estimated construction and staffing costs obtained from VA. All cost estimates are in current 1998 dollars.
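The operating-cost methodology can be rolled up in a few lines. In the sketch below, the staffing figure ($2.8 million) and square footage (11,588) come from the report; the per-bed-day rate, the per-square-foot rate, and the treatment of overhead as a 10 percent add-on are illustrative assumptions, not VA's actual inputs.

```python
# Roll up annual operating costs for a hypothetical 14-bed facility,
# following the component structure described in the methodology:
# staffing + ancillary/other (per bed day) + plant (per sq ft) + overhead.
def annual_operating_cost(staffing, bed_days, cost_per_bed_day,
                          square_feet, cost_per_square_foot,
                          overhead_rate=0.10):
    ancillary_and_other = bed_days * cost_per_bed_day       # labs, laundry, food
    plant = square_feet * cost_per_square_foot              # housekeeping, maintenance, utilities
    subtotal = staffing + ancillary_and_other + plant
    # "10 percent of total operating costs" is read here as a 10% add-on.
    return subtotal * (1 + overhead_rate)

# 14 beds occupied year-round; $150/bed day and $30/sq ft are placeholders.
cost = annual_operating_cost(2_800_000, 14 * 365, 150, 11_588, 30)
print(f"${cost:,.0f}")  # prints $4,305,554
```

With plausible rates plugged in, the roll-up lands just above $4 million, consistent with the report's "at least $4 million" annual operating estimate.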
Pursuant to a congressional request, GAO reviewed the need for establishing a Department of Veterans Affairs (VA) inpatient facility on Guam, focusing on: (1) how VA currently meets Guam and the Commonwealth of the Northern Mariana Islands (CNMI) veterans' health care needs; (2) veterans' possible future demand for health care and VA's ability to meet this demand; and (3) the cost to establish a veterans' inpatient ward at the U.S. Naval Hospital on Guam. GAO noted that: (1) to meet the health care needs of veterans on Guam and CNMI, VA currently provides services through a network of providers; (2) this network includes outpatient and inpatient care provided on Guam as well as by military or private hospitals in Hawaii or the continental United States, which is accessed through aeromedical evacuations; (3) in discussing their concerns about the VA health care system, veterans on Guam told GAO that medical evacuations, while necessary, are inconvenient and that they would like the U.S. Naval Hospital on Guam to provide cardiac care to reduce the need for some of these evacuations; (4) however, VA and Naval Hospital records indicate that only 15 percent of the 1,140 medical evacuations provided to military beneficiaries and veterans over the past 3 years were for cardiac care, which, according to Department of Defense officials, is an insufficient workload to maintain quality care for this specialty; (5) in the future, VA and Navy officials expect to be able to continue to meet veterans' demand for health care; (6) VA and Navy officials told GAO that they expect to continue providing the same type of health care to Guam and CNMI veterans that is currently available, including the services provided by the U.S. Naval Hospital; (7) even if there were a significant increase in veterans' demand for inpatient medical care in the future, U.S. Naval Hospital officials believe that their hospital could handle the potential veteran inpatient workload; (8) currently, the U.S. 
Naval Hospital has a total capacity of 146 beds--consisting of 29 active beds and 117 inactive beds; (9) in fiscal year 1997, of the 29 active beds, military beneficiaries used 22 beds per day on average and veterans used less than 1 on average; (10) GAO's analyses indicate that, under a high-demand scenario, Guam and CNMI veterans would use, on average, 14 inpatient beds per day; (11) while it is highly unlikely that Guam and CNMI veterans' demand for inpatient health care will ever reach this level, Navy officials told GAO that the U.S. Naval Hospital could hire staff and activate additional beds, if needed, to meet this demand; (12) these officials said that apart from a large conflict or war, which they could not predict, they were confident that the U.S. Naval Hospital on Guam could handle any likely increase in veteran inpatient workload; and (13) in light of GAO's analysis, establishing an inpatient ward at the U.S. Naval Hospital is not warranted and would be expensive.
In January 2012, IRS estimated that the gross tax gap—the difference between taxes owed and taxes paid on time—was $450 billion in tax year 2006. IRS estimated that it would eventually recover about $65 billion of this amount through late payments and enforcement actions, leaving a net tax gap of $385 billion. The tax gap has been a persistent problem in spite of extensive congressional and IRS efforts to reduce it. In past work we have said that reducing the tax gap will not likely be achieved through a single solution. Rather, the tax gap must be attacked on multiple fronts and with multiple strategies over a sustained period of time. On the enforcement front, IRS’s efforts to ensure compliance of individual taxpayers combine several distinct programs that collectively monitor and correct noncompliance with income tax filing, reporting, and payment requirements. These programs fill different roles in the enforcement process and vary in the number of taxpayers covered, the resources used, and their level of automation. IRS’s Math Error program electronically checks all filed tax returns for obvious math errors as returns are processed. The Math Error program reviews and adjusts items specifically listed in Internal Revenue Code section 6213. The specific issues that the program has authority to review include calculation errors, entries that are inconsistent with or exceed statutory limits, various omissions, inclusions, and entries of information, or incorrect use of an IRS table. IRS collects information on taxpayers from employers, financial institutions, and other third parties and compiles these data in the Information Returns Processing (IRP) system. The Automated Underreporter (AUR) program electronically matches the IRP data against the information that taxpayers report on their forms 1040 as a means of identifying potentially underreported income or unwarranted deductions or tax credits. 
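The matching idea behind AUR can be illustrated with a small sketch. This is not IRS's actual AUR logic; the function name, data shapes, and dollar threshold below are all hypothetical, chosen only to show how third-party information returns might be compared against self-reported income.

```python
# Illustrative sketch only (not IRS's AUR system): flag taxpayers whose
# third-party-reported income exceeds their self-reported income by more
# than a hypothetical threshold.

def flag_discrepancies(reported, third_party, threshold=1000):
    """Return taxpayer IDs whose summed third-party information returns
    exceed self-reported income by more than `threshold` dollars."""
    flagged = []
    for tin, reported_income in reported.items():
        info_total = sum(third_party.get(tin, []))
        if info_total - reported_income > threshold:
            flagged.append(tin)
    return flagged

# Example: taxpayer "B" omitted a $5,000 information return.
reported = {"A": 50000, "B": 40000}
third_party = {"A": [50000], "B": [40000, 5000]}
print(flag_discrepancies(reported, third_party))  # → ['B']
```

As the report notes, a real match is run months after filing and only discrepancies above a tax-impact threshold lead to a taxpayer notice.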
The matching process takes place months after taxpayers have filed their tax returns. For tax year 2010, AUR identified approximately 23.8 million potential discrepancies between taxpayer income, deduction, and other information reported by third parties and the information supplied by taxpayers on their individual income tax returns. IRS officials said that resource constraints prevent them from contacting taxpayers for all of the cases in which discrepancies are identified. If a mismatch exceeds a certain tax threshold, AUR reviewers decide if it warrants a notice to the taxpayer asking for an explanation of the discrepancy or payment of any additional tax assessed. IRS guidance directs reviewers to consider the reasonableness of the taxpayers’ responses, but reviewers generally do not examine the accuracy of the information in the responses because they do not have examination authority. For certain issues, AUR reviewers may refer cases for a correspondence examination. The Automated Substitute for Return (ASFR) program uses data from the IRP system to identify persons who did not file returns, construct tax returns for certain nonfilers, and assess tax, interest, and penalties based on those substitute returns. IRS does not pursue all of the constructed returns. Potential cases fall into one of ten priority levels and are worked highest-priority first. ASFR officials said they make budget decisions by taking into account the resources available to the program and determine the level of new cases that will be worked over the following year. In fiscal year 2011, the ASFR program closed nearly 1.4 million cases. Correspondence examinations are formal audits of individual taxpayers but do not involve face-to-face meetings with taxpayers. Instead, these examinations target specific issues that are limited in scope and complexity, easily documented, and can be handled quickly and efficiently through correspondence between the taxpayer and the IRS examiner. 
Tax returns are selected as potential cases through automated business rules that filter or select tax returns according to predetermined criteria. These business rules can detect multiple potential issues, all of which can be worked through a single correspondence exam. Examiners have the authority to review additional issues on a return even if they were not identified by the automatic filters. Field examinations are conducted in face-to-face meetings between the taxpayer and the IRS examiner. These audits are targeted at individual returns with broader and more complex issues. Unlike correspondence examinations, the field examination program has a classification process where an experienced tax examiner will review a potential case to determine which, if any, issues should be examined. Individual tax returns are selected for field examination in a variety of ways. Some returns are selected in the pursuit of specifically identified compliance issues, such as abusive transactions or offshore compliance. Others are selected on the basis of a statistical formula that attempts to predict the potential for additional tax assessments, and yet others are selected randomly for research purposes. Regardless of why the return was initially selected for audit, an examiner will review the return in its entirety to determine if other issues are present. The responsibility for operating these individual taxpayer enforcement programs largely rests with IRS’s Small Business/Self-Employed (SB/SE) Division, which handles complex individual returns, and Wage and Investment (W&I) Division, which handles simpler returns. SB/SE operates parts of all four IRP and exam programs; W&I operates parts of three programs, excluding field examinations. Correspondence and field exams accounted for more than 80 percent of the total administrative costs of the four programs we reviewed over the 2-year period we examined. 
(Total costs include direct examination time, training and other offline activities of examiners, supervisory and administrative support, and other overhead costs allocable to each program.) Based on data for hourly costs and time spent on different types of cases that IRS provided, we estimated that the cost per case for field exams, $2,278, was many times greater than those for correspondence exams, $274, AUR, $52, and ASFR, $72. (See fig. 1.) IRS spent almost 20 percent of the $1.6 billion per year that it devoted to exams opened in 2007 and 2008 on returns with positive income of at least $200,000, even though such returns accounted for only 3 percent of the 136 million individual income tax returns filed per year. The share of total cost for these returns was greater than their share of total returns because they were examined at above average rates and, compared to lower-income returns, field exams were a greater proportion of their examinations. (See fig. 2.) For the 2 years of cases we reviewed, exams (both correspondence and field) of taxpayers with positive incomes of at least $200,000 produced significantly more direct revenue per dollar of cost than exams of lower-income taxpayers. Across income groups, correspondence exams were significantly more productive than field exams in terms of discounted direct revenue per dollar of cost. (See fig. 3 and table 1 in app. II.) We estimated that the average direct revenue yield per dollar of cost across all correspondence exams of individual taxpayers was $7. In contrast, the average direct yield per dollar for field exams of individual taxpayers was $1.80. We also estimated that the direct revenue per dollar of cost was about $22 for AUR cases and about $31 for ASFR cases. Exams that are more complicated than average are likely to require both more time to complete and more highly skilled examiners, who cost more per hour.
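The ratio arithmetic behind these yield figures is simple division of discounted collections by cost. In the sketch below, the per-case costs are the report's figures, but the case count and revenue total are hypothetical, chosen only to reproduce the report's roughly $7 of direct revenue per dollar of cost for correspondence exams.

```python
# Sketch of the revenue-per-dollar-of-cost ratio. Per-case costs come
# from the report; the case count and total revenue are hypothetical.

def revenue_per_dollar(total_revenue, total_cost):
    """Direct revenue yield per dollar of cost."""
    return total_revenue / total_cost

# Report's average cost per case, in dollars.
cost_per_case = {"field": 2278, "correspondence": 274, "AUR": 52, "ASFR": 72}

# Hypothetical: 1,000 correspondence exams with $1,918,000 in discounted
# collections reproduce the report's ~$7 per dollar of cost.
total_cost = 1000 * cost_per_case["correspondence"]   # $274,000
ratio = revenue_per_dollar(1_918_000, total_cost)
print(round(ratio))  # → 7
```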
In estimating the results for field exams in figure 3, we incorporated differences in the amount of time spent on each field exam, which is recorded in the ERIS database, but we did not account for differences in hourly costs relating to varying skill levels of examiners across cases because the data available for that purpose were limited. Nevertheless, to test the potential sensitivity of our results to this missing factor, we estimated an alternative set of field exam results, using an ERIS data element that reflects the expected difficulty of an exam. We also tested the effect of differences in locality pay for field examiners in different geographic locations. (See app. I for further details.) We found that adjusting for skill levels likely reduces some of the differences in direct revenue per dollar of cost across field exam categories; adjusting for location has a negligible effect. (See table 3 in app. II.) IRS would be able to estimate ratios of direct revenue to cost that better incorporate differences in the hourly costs across examiners with different skill levels if data from IRS’s timekeeping system that records the number of hours that each employee charged to specific exam cases were matched to revenue data for the same cases. Our analysis of a hypothetical reallocation of IRS examination resources for this 2-year period indicates that a shift of about $124 million in enforcement resources could have increased direct revenue by $1 billion over the $5.5 billion per year IRS actually collected. This result is based on shifting the $124 million from exams of lower-income returns with the earned income tax credit (EITC) and lower-income business returns without EITC to exams of higher-income returns and lower-income nonbusiness returns without EITC. The result holds true as long as the average ratio of direct revenue to cost for each category of returns remained unchanged. (See fig. 4.)
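The reallocation arithmetic amounts to multiplying each category's gain or loss of resources by that category's direct-revenue-to-cost ratio and summing. A minimal sketch follows; the category names, ratios, and shift amounts are hypothetical stand-ins, not the report's table 2 values.

```python
# Sketch of the hypothetical-reallocation arithmetic: revenue effect =
# sum over categories of (dollars shifted) x (direct revenue per dollar
# of cost). All names and numbers below are hypothetical.

def reallocation_revenue_effect(shifts, ratios):
    """shifts: dollars moved into (+) or out of (-) each exam category;
    ratios: direct revenue per dollar of cost for each category."""
    return sum(shifts[c] * ratios[c] for c in shifts)

shifts = {"low_income_eitc": -60e6, "high_income_corr": 60e6}
ratios = {"low_income_eitc": 4.0, "high_income_corr": 12.0}
print(reallocation_revenue_effect(shifts, ratios))  # → 480000000.0
```

This holds only as long as the average ratios do not change as resources shift, which is the assumption the report states explicitly.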
Similar gains would recur annually, relative to the revenue that IRS otherwise would collect if it did not change its resource allocation and taxpayer behavior remained substantially the same. We took account of several constraints when designing our hypothetical resource reallocation example. First, we did not want to suggest a large-scale change because some reallocations cannot be made quickly, particularly if they require a different distribution of examiner skills than exists in IRS’s current workforce. The $124 million that we shifted represents less than 8 percent of the $1.6 billion per year that IRS devoted to examinations of individual tax returns for the 2 years we studied and we shifted less than 5 percent of existing field exam resources ($1.1 billion per year) to correspondence exams. Second, we did not want to end up with extreme coverage rates (either high or low) in any return category. Therefore, we did not reduce the combined coverage rate for any category for which the coverage rate was already close to or below 1 percent, and we kept the highest coverage rate (for returns with positive incomes of $1 million or more) under 11 percent. (That 11 percent rate is almost twice the current rate for that category.) Finally, given that certain compliance issues can be reviewed effectively only through a field exam, we did not decrease field exam resources in any return category for which we increased correspondence exam resources. Exam resource reallocation can also affect tax collections indirectly by influencing the voluntary compliance of nonexamined taxpayers. These indirect effects are difficult to estimate and IRS has no empirical evidence that would allow it to say whether overall voluntary compliance would increase or decrease as a result of specific resource reallocations. Changes in exam coverage rates are generally believed to affect voluntary compliance by altering taxpayers’ perceived risks of being audited.
The higher the risk of being audited, the less inclined taxpayers are to evade taxes. As shown in figure 5, our hypothetical reallocation would have increased combined coverage rates in most of the tax return categories we examined. For those categories in which coverage rates declined, the declines were relatively modest. For these reasons we believe that the direct revenue gains associated with our hypothetical reallocation would not likely be offset by significant indirect revenue losses. However, if larger resource allocations were considered, the lack of empirical evidence on the potential changes in voluntary compliance could leave IRS uncertain of the extent to which direct revenue gains might be offset by negative indirect revenue effects. Although research on this issue is challenging, IRS might be able to leverage its existing efforts to study voluntary compliance through the National Research Program (NRP) to get better information on the influence of enforcement activity on voluntary compliance. Our analysis focused upon ratios of average direct revenue to average cost. We did not incorporate other potentially important considerations due to data constraints. One such consideration is the extent to which the ratio of direct revenue per dollar of cost may decline for a particular category of exams as additional resources are devoted to that category. The revenue yield of each additional return that IRS examines within a particular return category may be lower than the average revenue-productivity rates we estimated, particularly if IRS’s return selection process for examinations results in returns with the greatest revenue potential being worked first and those with the least potential being worked last.
Little is known about the relationship between marginal and average revenue and cost within specific return categories because IRS currently does not identify the marginal cases worked each year. Unless IRS collects some information on marginal cases, such as how the broad characteristics of those returns that would likely be selected (or not selected) in a modest program expansion (or contraction) would differ from the average return actually audited now, planners would have to rely solely upon ratios of average direct revenue to average cost—a less accurate basis for estimating the direct revenue consequences of specific exam resource allocations. An analysis of the marginal revenue yields for specific categories of returns might also enable IRS to reduce the number of audits that result in no direct change in tax liability (although they may have beneficial effects on voluntary compliance). These no-change cases impose burdens on compliant taxpayers. Further, substantial variations across return categories in the percentage of exams that result in no change could be viewed as inequitable because compliant taxpayers in some categories have a greater chance of being burdened than compliant taxpayers in other categories. No-change rates in some higher-income return categories are already relatively high, compared to rates for lower-income categories. For example, the no-change rate for correspondence exams of tax returns with positive income of $1 million or more was about 53 percent for fiscal years 2007 to 2008. (See table 1 in app. II.) However, the highest no-change rates are associated with correspondence exams, which should be less burdensome than field exams.
High no-change rates could also be associated with declining revenue yields in marginal cases; however, without a specific study of marginal cases, it is not possible to say whether no-change cases are concentrated among the last cases examined in a particular category or whether they are spread relatively evenly across exams worked throughout the course of the year. Factors other than revenue yields and IRS budget costs also matter for purposes of an overall cost-benefit evaluation of IRS exam activities. These activities impose compliance costs on taxpayers and economic efficiency costs on society. Return categories with low ratios of direct revenue to IRS budget costs could have offsetting advantages in terms of lower efficiency and compliance costs; however, no empirical evidence of variations in these other effects or costs across the return categories exists, nor would it be easy to obtain. (See app. III for further discussion of these tradeoffs.) The results of our analyses suggest that there is potential for IRS to increase the direct revenue yield of selected enforcement programs by hundreds of millions of dollars per year without significant (if any) adverse effect on the indirect effect that examinations have on revenues. However, our results are preliminary and limited in scope. The collection and analysis of additional data would help to both confirm our basic conclusion and assist IRS in more finely adjusting its resource allocation decisions. One priority would be to study the feasibility of estimating the marginal revenue and marginal costs within each program and each taxpayer group. It would be helpful, for example, to estimate at least how the broad characteristics of those returns that would likely be selected (or not selected) in a modest program expansion (or contraction) would differ from the average return actually audited now.
Such information would help IRS assess the extent to which revenue productivity would likely decline, if at all, if more exam resources are devoted to a particular group of taxpayers. Another useful project would be to see if some linkage could be made between the amounts of time that specific examiners spend on each case and the revenue collection amounts for each case that are recorded in ERIS. Such a link would enable IRS to estimate ratios of direct revenue to cost that better incorporate differences in the hourly costs across examiners with different skill levels. The collection or estimation of other information that would be useful when allocating resources, such as the influence of enforcement activity on voluntary compliance, is challenging, which is why little is known about those topics to date. Nevertheless, IRS might be able to leverage its existing efforts to study voluntary compliance through the NRP to get better information on the influence of enforcement activity on voluntary compliance. In the absence of the additional data identified above, IRS planners can use the results of an analysis such as ours in combination with their professional judgment to decide whether the potential for direct revenue gains more than offsets the potential for reductions in indirect revenue or in equity and any increases in compliance or efficiency costs. If the answer is positive, they can adjust their allocation of resources accordingly. Nevertheless, the better empirical basis IRS planners have for making such judgments, the more confident they can be that they are allocating their limited resources to the best effect. 
To better ensure that IRS’s limited enforcement resources are allocated in a manner that maximizes the revenue yield of the income tax, subject to other important objectives of tax administration, such as minimizing compliance costs and ensuring equitable treatment across different groups of taxpayers, the Commissioner of Internal Revenue should: review disparities in the ratios of direct revenue yield to costs across different enforcement programs and across different groups of cases within programs and determine whether this evidence provides a basis for adjusting IRS’s allocation of enforcement resources each year. As part of this review, IRS should: develop estimates of the marginal direct revenue and marginal direct cost within each enforcement program and each taxpayer group; compile data on the amount of time that specific grades of examiners and downstream employees spend on specific categories of exams that can be identified in ERIS; and explore the potential of estimating the marginal influence of enforcement activity on voluntary compliance, potentially taking advantage of new NRP data. We requested written comments from the Commissioner of Internal Revenue and received a letter from IRS Deputy Commissioner for Services and Enforcement on November 29, 2012, (which is reprinted in app. IV). IRS agreed with our recommendations and agreed that the development of additional key data will require considerable work. In recognition of the time it will take to obtain this information, IRS said it will consider how to apply interim methods, findings, or approximations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our principal analysis compares the costs and direct revenues associated with correspondence and field exams that were opened during fiscal years 2007 and 2008 across the principal categories of individual taxpayers that the Internal Revenue Service (IRS) uses for exam planning purposes. These categories, defined in terms of income size and the nature of items reported on the returns, are shown in table 1 of appendix II. IRS provided us with total cost estimates for correspondence exams (with and without the earned income tax credit (EITC)) and field exams (with and without EITC) from its Integrated Financial System. These total cost estimates included all direct examiner costs, training and other off-line activities of examiners, supervisory and administrative support, and other overhead costs allocable to each program. We estimated hourly costs by dividing the total costs by the average examination hours per case, which IRS provided for correspondence exams (with and without EITC) and field exams (with and without EITC) from its Audit Information Management System (AIMS). These hourly cost estimates are adequate for the relatively high-level comparisons we present in this report. IRS would be able to make more precise estimates for more detailed categories of exams if data from IRS’s timekeeping system that records the number of hours that each employee charged to specific exam cases were matched to revenue data for the same cases.
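One way to read the hourly-cost estimate described above is total program cost divided by total examination hours (average hours per case times the number of cases), with field-exam case costs then built from the hourly rate and ERIS direct hours. The sketch below follows that reading; all figures are hypothetical and the interpretation is ours, not GAO's stated formula.

```python
# Sketch of the hourly-cost and per-exam cost arithmetic described in
# appendix I. All numbers are hypothetical; the interpretation of the
# hourly-cost estimate (total cost / total hours) is an assumption.

def hourly_cost(total_cost, cases, avg_hours_per_case):
    """Hourly rate implied by a program's total cost and total hours."""
    return total_cost / (cases * avg_hours_per_case)

def cost_per_exam(rate, direct_hours):
    """Field-exam cost: hourly rate times ERIS-reported direct hours."""
    return rate * direct_hours

rate = hourly_cost(total_cost=100e6, cases=50_000, avg_hours_per_case=10)
print(rate)                     # → 200.0
print(cost_per_exam(rate, 12))  # → 2400.0
```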
To estimate the cost for each category of field exams we multiplied the hourly cost rates by the number of direct hours reported in the Enforcement Revenue Information System (ERIS) for each category of exam. According to IRS officials, the ERIS data relating to time spent on correspondence exams is not reliable for our purposes. On their advice, given that neither the time spent on 1040 correspondence exams nor the skill level of examiners typically vary significantly from case to case, we used the same cost estimate for all cases, which IRS provided to us. We restated the costs for each case in terms of 2011 dollars by adjusting for inflation. Due to limitations of the ERIS data our cost estimates do not include any downstream costs that IRS’s Collections function may have devoted to these cases or any costs associated with examinations of pass-through entities that could have improved the productivity of some of these 1040 exams. We do not know whether the prevalence of these missing costs varies significantly across our exam categories; however, we do note that the one category we studied that specifically excluded returns with pass-through income had ratios of revenue to costs that were greater or equal to the ratios for the one category identified as likely to include such returns. We aggregated all of the tax, interest, and penalty collections recorded in ERIS for the same fiscal years, exam types, and taxpayer categories used for the cost side of our analysis. We also included amounts of refunds disallowed due to examinations in our definition of revenues. These amounts represent revenue saved for the government, even though IRS does not have to collect it after the exams.
We compiled the revenue data for each fiscal year in which the collections were made (through the end of fiscal year 2011) and restated the revenue in terms of 2011 dollars by adjusting for inflation. Then we discounted the value of collections over the gap between the fiscal year in which IRS incurred the exam costs (which we estimated as being the midpoint of the exam) and the fiscal year in which the revenue was collected. The purpose of this discounting, which is standard practice for cost-benefit analyses, is to account for the time-value of money between the time at which the government bears the cost of an activity or investment and the time at which it receives the related benefit. (We used a real discount rate for this discounting because we had already adjusted all of our figures for inflation.) The first set of estimates that we present in this report does not reflect potential differences in costs across exams due to the degree of experience demanded of the examiner or the location at which the exam was conducted. For a second set of estimates we adjusted the cost of field exams for relative difficulty. To do this we used ERIS data on hours by grade to compute a weighted average pay rate for all exams (for each combination of EITC and non-EITC and field or correspondence for each year). We then adjusted costs for each record by multiplying the cost by the mid-point pay rate for the grade of the record, divided by the weighted average difficulty pay rate for the relevant year and EITC status.
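The discounting step can be sketched as a standard present-value calculation over the gap between the exam-cost year and the collection year. The 3 percent real rate below is an assumption for illustration; the report does not state the rate GAO used.

```python
# Sketch of discounting inflation-adjusted collections back to the
# exam-cost year. The 3 percent real discount rate is an assumption,
# not the report's actual rate.

def discounted_revenue(collections_by_gap, real_rate=0.03):
    """collections_by_gap: {years after the exam cost was incurred:
    inflation-adjusted collections}; returns present value at the
    exam-cost year."""
    return sum(amount / (1 + real_rate) ** gap
               for gap, amount in collections_by_gap.items())

# $1,000 (in constant dollars) collected 2 years after the exam cost.
pv = discounted_revenue({2: 1000.0})
print(round(pv, 2))  # → 942.6
```

A real (rather than nominal) rate is appropriate here because, as the text notes, the figures were already adjusted for inflation.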
For a third set of estimates we made this difficulty adjustment for field exams and then we also adjusted the costs of both correspondence and field exams for location differences by using data on the location of exams, hours, and locality pay for each location to compute a national average locality pay rate (weighted by the number of hours in each location) for each combination of field and correspondence exams of returns with and without EITC and a locality pay rate for each location. We then multiplied the cost estimate for each exam by the ratio of the national rate over the relevant location-specific rate. The columns labeled “Change in Resources” in table 2 of appendix II show the amounts of IRS budget resources we moved out of or into specified exam categories for our hypothetical reallocation. These shifts were guided by the considerations we noted earlier. We estimated the revenue effect of each shift by multiplying the gain or loss of resources for each category by our estimated ratios of direct revenue to cost for those categories. We estimated the effect on the coverage rate within each category by multiplying the coverage rate prior to the reallocation by the percentage change in each category’s resources caused by the reallocation. Appendix II: Detailed Tables (Table 2 reports, for each exam category, the change in resources and the change in direct revenue, both in millions of dollars.) An economic cost-benefit evaluation of IRS’s overall activities would involve a comparison of the social costs and social benefits associated with those activities. IRS’s function is to collect tax revenue that the federal government transfers among citizens as cash payments or in the form of goods and services. The collection process imposes costs on society but produces no direct benefit itself.
The government’s use of the collected revenue may ultimately produce a net benefit for society if the social value of that use exceeds the social cost of raising the revenue. IRS has no influence over how tax revenue is used; it can only contribute to increasing the net social benefit by increasing the amount of revenue collected for a given amount of social cost (or decreasing the social cost of raising a given amount of revenue). Specific resource allocation choices can be compared on the basis of the amount of revenue they produce for a given amount of total social costs. The social costs of tax collection comprise the following: Tax burden. This is the actual money collected from taxpayers. Amounts collected as a result of IRS enforcement activities from taxpayers who, otherwise, would have been noncompliant may have a zero social cost. The cost that those amounts represent can be attributed to the tax law, rather than to IRS enforcement efforts. If those additional amounts of taxes due are not collected, the tax burdens evaded by noncompliant individuals are offset by the additional taxes that compliant taxpayers must pay in order to support a given government budget. IRS budget costs. Compliance burden. IRS’s enforcement activities can affect the costs that taxpayers incur when complying with the tax law by increasing the time and money that they spend preparing their returns and interacting with IRS. Efficiency costs. IRS’s enforcement activities can alter the tax avoidance and evasion behavior of individuals, which affects the efficiency of resource allocation in the economy. If an enforcement activity increases the aggregate costs of tax avoidance and evasion, economic efficiency and the average standard of living are reduced. Conversely, if the activity reduces such aggregate costs, economic efficiency would improve. Equity costs.
IRS’s resource allocation can affect how exam-related compliance burdens are distributed across different groups of taxpayers and also how the risk of noncompliant taxpayers getting penalized for evasion varies across groups. It is difficult to know what society as a whole would view as an equitable distribution of these burdens and risks; therefore it is difficult to assess the equity effects of any particular reallocation of resources. The only component of social costs that can be reliably measured is the IRS budget cost, and it is difficult to attribute even that cost to very specific enforcement activities (such as specific audits). Consequently, IRS planners cannot consider all types of social costs in a rigorously quantitative manner when making their resource allocation decisions. Economists use the term “margin” when referring to the scopes of the various types of decisions that individuals make. For example, if IRS examination planners were deciding how to allocate the last million dollars of their budget between different types of audits, the marginal social cost of the choice they made would be a million dollars, plus the sum of all other social costs resulting from the IRS activities supported by that million dollars. The marginal revenue would be the amount of additional tax collections attributable (both directly and indirectly) to those activities. The most economically efficient choice would be the one that produced the highest ratio of marginal revenue to marginal social cost. The ratio of marginal revenue to marginal social cost provides a basis for comparing the cost of collecting taxes by different approaches. Such comparisons can be made across broadly defined approaches (e.g., increasing taxpayer services to promote higher voluntary compliance versus increasing enforcement efforts to reduce noncompliance). 
Alternatively, as in this study, comparisons could be made across more narrowly defined alternatives (e.g., devoting more resources to audits of taxpayers with incomes below a certain amount versus devoting those resources to audits of taxpayers with incomes above that amount). In addition to the contact named above, James Wozny (Assistant Director), Kevin Daly (Assistant Director), Michael Brostek, Ethan Wozniak, Suzanne Heimbach, Sara Daleski, Lois Hanshaw, Karen O’Conor, Ray Bush, Elizabeth Fan, and Robert MacKay made key contributions to this report.
Heightened attention to federal deficits has increased pressure on IRS to reduce the tax gap--the difference between taxes owed and taxes paid on time--and better enforce taxpayer compliance. Resource limitations and concern over taxpayer burden, however, prevent IRS from auditing more than a small fraction of individual income tax returns filed. How IRS allocates these limited resources demands careful consideration. As requested, this report (1) describes how IRS allocates resources across individual taxpayer compliance enforcement programs and across types of taxpayers within each program; (2) estimates the direct revenue return on investment for the individual taxpayer enforcement programs and the extent of variation across those programs and across types of taxpayers; and (3) determines the potential for gains from shifting resources from lower-yielding programs and types of taxpayers to higher-yielding ones. To accomplish these objectives GAO analyzed IRS data on 2007 and 2008 tax returns, reviewed IRS documentation, and interviewed appropriate IRS officials. The Internal Revenue Service (IRS) spends most of its enforcement resources on examinations. Correspondence exams of individual tax returns, which target fewer and simpler compliance issues, are significantly less costly on average than the broader and more complex field exams. GAO estimated that the average cost (including overhead) of correspondence exams opened in 2007 and 2008 was $274, compared to an average of $2,278 for field exams. IRS spent almost 20 percent of the $1.6 billion per year that it devoted to exams on returns from taxpayers with positive income of at least $200,000, even though such returns accounted for only 3 percent of the 136 million individual returns filed per year. (Positive income, a measure that IRS uses to classify returns for exam planning purposes, disregards losses that may offset this income). 
GAO estimated that, for the 2 years of cases reviewed, correspondence exams were significantly more productive in terms of direct revenue produced per dollar of cost than field exams. Both types of exams of taxpayers with positive incomes of at least $200,000 were significantly more productive than exams of lower-income taxpayers. GAO demonstrated how these estimates could be used to inform resource allocation decisions. For example, a hypothetical shift of a small share of resources (about $124 million) from exams of tax returns in less productive groups shown in the figure to exams in the more productive groups could have increased direct revenue by $1 billion over the $5.5 billion per year IRS actually collected (as long as the average ratio of direct revenue to cost for each category of returns did not change). These gains would recur annually, relative to the revenue that IRS would collect if it did not change its resource allocation. This particular resource shift would not reduce exam coverage rates significantly and, therefore, should have little, if any, negative effect on voluntary compliance. GAO recommends that IRS review disparities in the ratios of direct revenue yield to costs across different enforcement programs and across different groups of cases and consider this evidence as a potential basis for adjusting its allocation of enforcement resources each year. IRS agreed with the recommendations.
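The arithmetic behind the hypothetical resource shift can be sketched as follows. This is an illustrative calculation, not GAO's actual model: the per-group revenue-to-cost ratios below are made-up values chosen only to show the mechanics, and the report's key assumption is that each group's ratio of direct revenue to cost stays constant as resources move.

```python
# Hypothetical sketch of the resource-shift logic: if each exam group's
# direct-revenue-to-cost ratio is assumed constant, moving exam dollars
# from low-yield groups to high-yield groups raises total direct revenue
# by the dollar amount shifted times the difference in the ratios.

def revenue_change(shifts, roi):
    """shifts: {group: change in exam spending, in dollars} (should net to ~0).
    roi: {group: direct revenue produced per dollar of exam cost}."""
    return sum(delta * roi[group] for group, delta in shifts.items())

# Illustrative ratios only; the report gives totals ($5.5 billion in direct
# revenue on $1.6 billion of exam spending) but not these per-group figures.
roi = {"low_yield": 2.0, "high_yield": 10.0}
shift = 124e6  # the roughly $124 million shift discussed in the report
delta = revenue_change({"low_yield": -shift, "high_yield": shift}, roi)
print(f"Estimated change in direct revenue: ${delta / 1e9:.2f} billion")
# → Estimated change in direct revenue: $0.99 billion
```

With these assumed ratios, a $124 million shift yields roughly the $1 billion gain the report describes; the gain recurs annually as long as the ratios hold and exam coverage rates do not change enough to affect voluntary compliance.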
Established under the Social Security Amendments of 1965, Medicare is a two-part program: (1) “hospital insurance,” or part A, which covers inpatient hospital services and skilled nursing facility, hospice, and home health care services, and (2) “supplementary medical insurance,” or part B, which covers physician and outpatient hospital services, diagnostic tests, and ambulance and other medical services and supplies. In fiscal year 1997, part A covered an estimated 38.1 million aged and disabled beneficiaries, including those with chronic kidney disease. Total outlays for parts A and B are estimated at $212 billion for fiscal year 1997. In Medicare’s fee-for-service program, which is used by almost 90 percent of the program’s beneficiaries, physicians, hospitals, and other providers submit claims for services rendered to Medicare beneficiaries. HCFA administers the fee-for-service program largely through claims processing contractors. Insurance companies—like Blue Cross and Blue Shield plans, Mutual of Omaha, and CIGNA—process and pay Medicare claims, which totaled an estimated 900 million in fiscal year 1997. As Medicare contractors, these companies use federal funds to pay health care providers and beneficiaries and are reimbursed for the administrative expenses incurred in performing the Medicare work. Over the years, HCFA has consolidated some of Medicare’s operations, and the number of contractors has fallen from a peak of about 130 to about 65 in 1997. Generally, intermediaries are the contractors that handle claims submitted by “institutional providers” (hospitals, skilled nursing facilities, hospices, and home health agencies); carriers generally handle claims submitted by physicians, laboratories, equipment suppliers, and other practitioners. HCFA has guarded against inappropriate payments largely through contractor-managed operations, leaving the intermediaries and carriers broad discretion over how to protect Medicare program dollars. 
As a result, contractors’ implementation of Medicare payment safeguard policies varies significantly. Medicare’s managed care program covers a growing number of beneficiaries—more than 5 million as of September 1997—who have chosen to enroll in a prepaid health plan rather than purchase medical services from individual providers. The managed care program, which is funded from both the part A and part B trust funds, consists mostly of risk contract HMOs that enrolled nearly 5 million Medicare beneficiaries as of September 1997. Medicare pays these HMOs a monthly amount, fixed in advance, for each beneficiary enrolled. In this sense, the HMO has a “risk” contract because regardless of what it spends for each enrollee’s care, the HMO assumes the financial risk of providing health care in return for the payments received. An HMO profits if its cost of providing services is lower than the predetermined payment but loses if its cost is higher than the payment. The Congress provided important new resources and tools to fight health care fraud and abuse when it enacted HIPAA and BBA. To address problems in traditional fee-for-service Medicare, various provisions require HCFA to change outmoded payment methods, largely by establishing new prospective payment systems and by imposing fee caps, reductions, and updates to contain unnecessary expenditures. Certain provisions offer the potential to improve claims reviews—mandating specific increases in reviews and providing HCFA new contracting authority to acquire technical expertise. Enactment of the legislation represents an important first step toward the realization of program integrity goals. As we have noted in previous testimony, the legislative process sets forth the broad concepts while the administering agencies implement the legislation through planning, design, and execution. 
In the case of HIPAA, now more than a year old, HCFA and the HHS Inspector General have been developing plans on many fronts, but actual implementation is just beginning. In the case of BBA, less than 3 months old, the “to-do” list is long. Three examples relating to both acts illustrate the situation. First, HIPAA, enacted over a year ago, grants HCFA the authority to use contractors other than the insurers serving as Medicare intermediaries and carriers to conduct medical and utilization review, audit cost reports, and carry out other program safeguard activities. The purpose is to enhance HCFA’s oversight of claims payment operations by increasing contractor accountability, enhancing data analysis capabilities, and avoiding potential contractor conflicts of interest. HCFA’s target date for awarding the first program safeguard contract is in fiscal year 1999, more than a year from now. HCFA officials are preparing for public comment a notice of proposed rulemaking that would ultimately govern the selection of contractors to perform safeguard functions, but they are not able to specify when the contract award rules will be final. Second, HIPAA calls for a new health care fraud and abuse data collection program; judging from past experiences with database development, it could be several years before the system can be fully operational. Distinct from its predecessor system, the National Practitioner Data Bank, this data collection program is expected to maintain information on civil judgments, criminal convictions, licensing and certification actions on suppliers and providers, exclusions, and other adjudicated adverse actions—involving the collection of data from state and local governments. The program must also be self-supporting, requiring market research to assess the needs and preferences of potential users. 
Finally, because existing federal and state statutes and regulations may impede the collection and dissemination of the information required, new federal regulations may be necessary, requiring the publication of proposed rules, a 60-day period for receipt of public comments, and an indeterminate period for making the regulations final. Third, BBA requires the implementation of several prospective payment systems to replace cost-based reimbursement methods. Depending on their design, prospective payment systems can remove the incentive to provide services unnecessarily. For example, prospective payment for skilled nursing facilities (SNF) should make it more difficult to increase payments by manipulating Medicare’s billing rules for ancillary services provided to beneficiaries in these facilities, an issue often raised in our reports and testimonies. However, a considerable amount of work will be involved. Establishing rates that will enable efficient providers to furnish adequate services without overcompensating them will require (1) accounting for the varying needs of patients for routine and ancillary services and (2) collecting reliable cost and utilization data to compute the rates and the needed health status adjustment factors. Earlier this year in testimony before this Committee on prospective payment proposals, we suggested that HCFA use the results of audits of a projectable sample of SNF cost reports when setting base rates to avoid incorporating the inflated costs found in the HHS Inspector General’s reviews of SNF cost reports. We also discussed the need for systems to adequately monitor prospective payments to help ensure that providers do not skimp on services to increase profits at the expense of quality care. Implementing these payment systems will also entail a lengthy rulemaking process of publishing proposed rules, obtaining public comment, and issuing final regulations. For example, it took HCFA 4 years—from the time a task force was established in 1993—to issue proposed salary guideline regulations for rehabilitation therapy services. 
To meet the requirements of BBA, HCFA will have to develop, concurrently, separate prospective payment systems for services delivered through inpatient rehabilitation facilities, home health agencies, skilled nursing facilities, and hospital outpatient departments. Developing prospective payment systems, moreover, represents only a fraction of the design and implementation work that HIPAA and BBA require. Conducting demonstration projects and reporting to the Congress constitute another portion of work mandated by the legislation. Among the more challenging of BBA’s provisions to implement are those establishing the Medicare+Choice program, which expands beneficiaries’ private plan options to include preferred provider organizations (PPO), provider sponsored organizations (PSO), and private fee-for-service plans. It also makes medical savings accounts (MSA) available to a limited number of beneficiaries under a demonstration program. The reforms the Congress embodied in these provisions are major, helping Medicare adapt to and capitalize on changes in the health care market. However, each of these options will have to be carefully monitored to identify and correct vulnerabilities. Our observations of HCFA’s oversight of Medicare’s risk contract HMOs, which have been the chief alternative to traditional fee-for-service Medicare, raise concerns. In our 1997 High-Risk Series report, we noted that HCFA’s monitoring of HMOs has been historically weak. HCFA has allowed some plans with a history of abusive sales practices, delays in processing beneficiaries’ appeals of HMO decisions to deny coverage, and patterns of poor-quality care to receive little more than a slap on the wrist. We also noted that HCFA had done little to inform beneficiaries of HMO performance and did not publish available data on such satisfaction indicators as rapid disenrollment rates compared across Medicare HMOs within a given market. 
BBA also mandates quality reviews covering, for example, each plan’s inpatient and outpatient services and the adequacy of the plan’s response to written complaints about poor-quality care. These and other mandates should help improve oversight. The act also requires HHS to disseminate to all beneficiaries within a market area consumer information on the area’s Medicare+Choice plans, including, for example, disenrollment rates, health outcomes, and compliance with program requirements. Collectively, these consumer information requirements enlist market forces to help improve HMO performance. We remain concerned that HCFA will have to be attentive to new issues raised by expanded choice for beneficiaries. The implementation challenge for HCFA will be to strike a judicious balance between encouraging plan growth and development and adequately protecting beneficiaries’ quality of care. For example, under BBA, requirements for minimum enrollment levels—aimed at achieving an adequate spreading of risk to ensure a plan’s financial solvency—can be waived for new Choice plans in their first 3 years of operation. In addition, the recent authorization of higher HMO rates in rural areas may well increase the total number of risk contract HMOs. If the number of Medicare managed care organizations grows, HCFA may not be equipped to make site visits at the current rate of every other year. Finally, all the Medicare+Choice plans, including PPOs, PSOs, and private fee-for-service plans, will have to submit new marketing materials for HHS approval; with an escalating workload, however, these materials could be approved without adequate scrutiny. Under the law, marketing materials are approved automatically if HHS does not disapprove them within 45 days of their submission to the Department. The Medicare Transaction System (MTS) was to have provided a single system for collecting payment and other information related to risk contract HMOs, but the MTS contract has been terminated. HCFA is in the process of consolidating its nine separate systems into one part A claims system and one part B claims system. 
While having a single system for each part should allow better claims editing, it would not provide all the benefits that had been expected from MTS, including the ability to ensure routinely, before payments are made, that an item or service billed to part A has not also been billed to part B and vice versa. Other anti-fraud-and-abuse software development efforts discussed in our High-Risk report—namely, algorithms under development by the Los Alamos National Laboratory for generating prepayment claims screens and commercial off-the-shelf software controls being tested at one contractor—are years away from implementation nationwide. Aware of the need for agencywide coordination and planning to implement BBA’s multiple provisions, HCFA has established an infrastructure to track and monitor the tasks associated with BBA mandates. Staff organized into functional teams will be led by a project management team tasked with reporting to agency executives, including the HCFA Administrator. According to a HCFA official, the agency has plans to keep Department officials and the Congress routinely informed of the agency’s progress. With the enactment of HIPAA and BBA, the Congress has provided significant opportunities to strengthen several of Medicare’s areas of vulnerability. How HHS and HCFA will use the authority of HIPAA and BBA to improve their vigilance over Medicare benefit dollars remains to be seen. The outcome largely depends on how promptly and effectively HCFA implements the various provisions. HCFA’s past efforts to implement regulations, oversee Medicare managed care plans, and acquire a major information system have often been slow or ineffective. Now that many more requirements have been placed on HCFA, we are concerned that, without sustained efforts at implementation, the promise of the new legislation to combat health care fraud and abuse could at best be delayed and at worst not be realized at all. Mr. Chairman, this concludes my statement. 
I will be happy to answer your questions. Medicare Automated Systems: Weaknesses in Managing Information Technology Hinder Fight Against Fraud and Abuse (GAO/T-AIMD-97-176, Sept. 29, 1997). Medicare Home Health Agencies: Certification Process Is Ineffective in Excluding Problem Agencies (GAO/T-HEHS-97-180, July 28, 1997). Medicare: Control Over Fraud and Abuse Remains Elusive (GAO/T-HEHS-97-165, June 26, 1997). Medicare: Need to Hold Home Health Agencies More Accountable for Inappropriate Billings (GAO/HEHS-97-108, June 13, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Nursing Homes: Too Early to Assess New Efforts to Control Fraud and Abuse (GAO/T-HEHS-97-114, Apr. 16, 1997). Medicare Post-Acute Care: Cost Growth and Proposals to Manage It Through Prospective Payment and Other Controls (GAO/T-HEHS-97-106, Apr. 9, 1997). Medicaid Fraud and Abuse: Stronger Action Needed to Remove Excluded Providers From Federal Health Programs (GAO/HEHS-97-63, Mar. 31, 1997). High-Risk Series: Medicare (GAO/HR-97-10, Feb. 1997). Medicare: HCFA Should Release Data to Aid Consumers, Prompt Better HMO Performance (GAO/HEHS-97-23, Oct. 22, 1996). Medicare: Home Health Utilization Expands While Program Controls Deteriorate (GAO/HEHS-96-16, Mar. 27, 1996). Medicare Transaction System: Strengthened Management and Sound Development Approach Critical to Success (GAO/T-AIMD-96-12, Nov. 16, 1995). Medicare: Commercial Technology Could Save Billions Lost to Billing Abuse (GAO/AIMD-95-135, May 5, 1995). 
Pursuant to a congressional request, GAO discussed recent legislative efforts to address fraud and abuse in the Medicare program. GAO noted that: (1) both the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Balanced Budget Act of 1997 (BBA) directly address Medicare fraud and abuse and provide opportunities to improve program management; (2) both acts offer civil and criminal penalties; (3) they also introduce opportunities to deploy new program safeguards; (4) for example, on the fee-for-service side of the program, BBA introduces prospective payment methods for skilled nursing facility and home health services, in part to halt opportunists from overbilling Medicare; (5) these are among Medicare's fastest-growing components: from 1989 to 1996, spending for home health care and skilled nursing facility care averaged, respectively, a 33-percent and 22-percent annual rise; (6) HIPAA also ensures a stable source of funding for anti-fraud-and-abuse activities, authorizes the Health Care Financing Administration (HCFA) to contract for improved claims reviews, enhances law enforcement coordination, and calls for data collection improvements; (7) on the managed care side, BBA's Medicare+Choice program, which broadens beyond health maintenance organizations (HMO) the private health plans available to Medicare beneficiaries, includes several provisions addressing the marketing, enrollment, and quality of care issues raised in GAO's reports and those of the Inspector General; (8) as always, however, the success of any reform legislation is contingent on its implementation; (9) the Congress has provided the Department of Health and Human Services (HHS) and HCFA, the Department's administrator of the Medicare program, with many new statutory requirements governing traditional fee-for-service Medicare; some require little effort to carry out, whereas others, such as prospective payment system development, will require extensive time and resources to 
implement effectively; (10) in addition, the Medicare+Choice program will add considerably to HCFA's private plan monitoring workload; (11) the project to modernize Medicare's claims processing systems, which are at the core of many fraud and abuse detection efforts, has recently been halted; (12) this brings into question the ability of HCFA and its contractors to perform expeditiously the data-intensive analyses needed to spot and counteract abusive billing schemes; and (13) HCFA agrees that the tasks associated with implementing HIPAA and BBA mandates are considerable and plans to report routinely to HHS officials and to the Congress on HCFA's progress implementing the legislation.
Since we designated the implementation and transformation of DHS as high risk in 2003, DHS has made progress addressing management challenges and senior department officials have demonstrated commitment and top leadership support for addressing the department’s management challenges. However, the department has significant work ahead to achieve positive outcomes in resolving high-risk issues. For example, DHS faces challenges in modernizing its financial systems, implementing acquisition management controls, and improving employee satisfaction survey results, among other things. As DHS continues to mature as an organization, it will be important for the department to continue to strengthen its management functions, since the effectiveness of these functions affects its ability to fulfill its homeland security and other missions. Financial management. DHS has made progress in addressing its financial management and internal controls weaknesses, but has been unable to obtain an unqualified audit opinion on its financial statements since the department’s creation and faces challenges in modernizing its financial management systems. DHS has, among other things, reduced the number of material weaknesses in internal controls from 18 in 2003 to 5 in fiscal year 2011; achieved its goal of receiving a qualified audit opinion on its fiscal year 2011 consolidated balance sheet and statement of custodial activity for the first time since the department’s creation; established a goal of obtaining an audit opinion on all of its fiscal year 2012 financial statements; and expanded the scope of the annual financial audit to the complete set of fiscal year 2012 financial statements, which DHS believes will help it to obtain an unqualified opinion for fiscal year 2013. However, DHS continues to face challenges in financial management. 
For example, DHS anticipates difficulties in providing its auditors transaction-level detail to support balances reported in its fiscal year 2012 financial statements in order to obtain an opinion on its financial statements. This is due to, among other things, components not retaining original acquisition documentation or enforcing policies related to recording purchases and making payments. DHS also anticipates its auditors issuing a disclaimer in their fiscal year 2012 report on internal controls over financial reporting due to material weaknesses in internal controls, such as lack of effective controls over the recording of financial transactions related to property, plant, and equipment. In addition, in December 2011, DHS reported that the Federal Emergency Management Agency (FEMA), U.S. Coast Guard (USCG), and U.S. Immigration and Customs Enforcement (ICE) have an essential business need to replace their financial management systems, but DHS has not fully developed its plans for upgrading existing or implementing new financial systems at these agencies. According to DHS’s June 2012 version of its Integrated Strategy for High Risk Management, the department plans to extend the useful life of FEMA’s current system by about 3 years, while FEMA proceeds with a new financial management system solution, and is in the process of identifying the specific approach, necessary resources, and time frames for upgrading existing or implementing new financial systems at USCG and ICE. Without sound processes, controls, and systems, DHS faces long-term challenges in obtaining and sustaining an unqualified opinion on both its financial statements and internal controls over financial reporting, and ensuring its financial management systems generate reliable, useful, timely information for day-to-day decision making. We currently have ongoing work related to DHS’s efforts to improve its financial reporting that we expect to report on in the spring of 2013. 
Acquisition management. DHS has made progress in the acquisition management area by enhancing the department’s ability to oversee major acquisition programs. For example: DHS has established eight Centers of Excellence for cost estimating, systems engineering, and other disciplines to bring together program managers, senior leadership staff, and subject matter experts to promote best practices, provide expert counsel, technical guidance, and acquisition management tools; and each DHS component has established a Component Acquisition Executive (CAE) to provide oversight and support to programs within the component’s portfolio. According to DHS, as of June 2012, 75 percent of the core CAE support positions were filled. In March 2012, DHS completed the development of a Procurement Staffing Model to determine optimal numbers of personnel to properly award and administer contracts. In June 2012, DHS reported that it is taking steps to implement the staffing model throughout headquarters and the components. DHS included a new initiative (strategic sourcing) in its December 2011 Integrated Strategy for High Risk Management to increase savings and improve acquisition efficiency by consolidating contracts departmentwide for the same kinds of products and services. The Office of Management and Budget’s Office of Federal Procurement Policy has cited DHS’s efforts among best practices for implementing federal strategic sourcing initiatives. Earlier this month, we reported that the department has implemented 42 strategically sourced efforts since the department’s inception. According to DHS data, the department’s spending through strategic sourcing contract vehicles has increased steadily from $1.8 billion in fiscal year 2008 to almost $3 billion in fiscal year 2011, representing about 20 percent of DHS’s procurement spending for that year. However, DHS continues to face significant challenges in managing its acquisitions. 
For example: Earlier this week, we reported that 68 of the 71 program offices we surveyed from January through March 2012 responded that they experienced funding instability, workforce shortfalls, and/or changes to their planned capabilities over the programs’ duration. We have previously reported that these challenges increase the likelihood acquisition programs will cost more and take longer to deliver capabilities than expected. Our recent review of DHS acquisition management also identified that while DHS’s acquisition policy reflects many key program management practices that could help mitigate risks and increase the chances for successful outcomes, it does not fully reflect several key portfolio management practices, such as allocating resources strategically. DHS plans to develop stronger portfolio management policies and processes, but until it does so, DHS programs are more likely to experience additional funding instability, which will increase the risk of further cost growth and schedule slips. We recommended that DHS take a number of actions to help mitigate the risk of poor acquisition outcomes and strengthen the department’s investment management activities. DHS concurred with all of our recommendations and noted actions it had taken or planned to address them. Human capital management. DHS has taken a number of actions to strengthen its human capital management. For example: DHS issued human capital-related plans, guidance, and tools to address its human capital challenges, including a Workforce Strategy for 2011-2016; a revised Workforce Planning Guide, issued in March 2011, to help the department plan for its workforce needs; and a Balanced Workforce Strategy tool, which some components have begun using to help achieve the appropriate mix of federal and contractor skills. The department implemented two programs to address senior leadership recruitment and hiring, as we reported in February 2012. 
While DHS’s senior leadership vacancy rate was as high as 25 percent in fiscal year 2006, it varied between 2006 and 2011 and declined overall to 10 percent at the end of fiscal year 2011. DHS developed outreach plans to appeal to veterans and other underrepresented groups. While these initiatives are promising, DHS continues to face challenges in human capital management. For example: As we reported in March 2012, based on our preliminary observations of DHS’s efforts to improve employee morale, federal surveys have consistently found that DHS employees are less satisfied with their jobs than the government-wide average. DHS has taken steps to identify where it has the most significant employee satisfaction problems and developed plans to address those problems, such as establishing a departmentwide Employee Engagement Executive Steering Committee, but has not yet improved employee satisfaction survey results. We plan to issue a final report on our findings later this month. As we reported in April 2012, changes in FEMA’s workforce, workload, and composition have created challenges in FEMA’s ability to meet the agency’s varied responsibilities and train its staff appropriately. For example, FEMA has not developed processes to systematically collect and analyze agencywide workforce and training data that could be used to better inform its decision making. We recommended that FEMA, among other things, identify long-term quantifiable mission-critical goals, establish lines of authority for agencywide workforce planning and training efforts, and develop systematic processes to collect and analyze workforce and training data. DHS concurred with our recommendations and reported actions underway to address them. Information technology management. DHS has made progress in strengthening its IT management, but the department has much more work to do to fully address its IT management weaknesses. 
Among other accomplishments, DHS has: strengthened its enterprise architecture; defined and begun to implement a vision for a tiered governance structure intended to improve program and portfolio management, as we reported in July 2012; established a formal IT Program Management Development Track and staffed Centers of Excellence with subject matter experts to assist major and non-major programs. Based on preliminary observations from our review of DHS’s major at-risk IT acquisitions we are performing for the committee, these improvements may be having a positive effect. Specifically, as of March 2012, approximately two-thirds of the department’s major IT investments we reviewed (47 of 68) were meeting current cost and schedule commitments (i.e., goals). Nevertheless, much work remains. For example, the department needs to: finalize the policies and procedures associated with its new tiered governance structure and continue to implement this structure, as we recommended in our July 2012 report; continue to implement its IT human capital plan, which DHS believed would take 18 months to fully implement as of June 2012; and continue its efforts to enhance IT security by, among other things, effectively addressing material weaknesses in financial systems security, developing a plan to track and promptly respond to known vulnerabilities, and implementing key security controls and activities. Management integration. DHS has made progress in integrating its individual management functions across the department and its component agencies. For example, DHS has put into place common policies, procedures, and systems within individual management functions, such as human capital, that help to integrate its component agencies, as we reported in September 2011. 
To strengthen this effort, in May 2012, the Secretary of Homeland Security modified the delegations of authority between the Management Directorate and their counterparts at the component level. According to DHS, this action will provide increased standardization of operating guidelines, policies, structures, and oversight of programs. Additionally, DHS has taken steps to standardize key data elements for the management areas across the department to enhance its decision-making. For example, in April 2012, the Under Secretary for Management appointed an executive steering committee and tasked this committee with creating a “Data Mart” to integrate data from disparate sources and allow the dissemination of timely and reliable information by March 2013. Further, consistent with our prior recommendations, DHS has implemented mechanisms to promote accountability for management integration among department and component management chiefs by, among other things, having the department chiefs develop written objectives that explicitly reflect priorities and milestones for that management function. Although these actions are important, DHS needs to continue to demonstrate sustainable progress in integrating its management functions within and across the department and its components and take additional actions to further and more effectively integrate the department. For example, DHS recognizes the need to better integrate its lines of business. The Integrated Investment Life Cycle Model (IILCM), which the department is establishing to manage investments across the department’s components and management functions, is an attempt at doing that. DHS identified the IILCM as one of its most significant management integration initiatives in January 2011. However, the June 2012 update reported that this initiative is in its early planning stages, will be phased in over multiple budget cycles, and requires additional resources to fully operationalize. 
In September 2012, DHS reported that it has developed draft policy and procedural guidance to support implementation of the IILCM and now plans to begin using aspects of this new approach to develop portions of the department's fiscal years 2015 through 2019 budget. DHS strategy for addressing GAO's high-risk designation. In January 2011, DHS issued an agencywide management integration strategy—the Integrated Strategy for High Risk Management—as we recommended in our March 2005 report on DHS's management integration efforts. DHS's most recent version of the strategy, issued in June 2012, greatly improved upon prior versions and addressed feedback we previously provided by, for example, identifying key measures and progress ratings for the 18 initiatives included in the strategy and the 31 outcomes. We believe the June 2012 strategy, if implemented and sustained, provides a path for DHS to address our high-risk designation. DHS can further strengthen or clarify its Integrated Strategy for High Risk Management to better enable DHS, Congress, and GAO to assess the department's progress in implementing its management initiatives by, among other things: determining the resource needs for all of the corrective actions in the strategy; communicating to senior leadership critical resource gaps across all initiatives; and identifying program and project risks in a supporting risk mitigation plan for all initiatives. Going forward, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes. We will continue to monitor, assess, and provide feedback on DHS's implementation and transformation efforts through our ongoing and planned work, including the 2013 high-risk update that we expect to issue in January 2013. Chairman King, Ranking Member Thompson, and Members of the Committee, this concludes my prepared statement.
I would be pleased to respond to any questions that you may have. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Maria Strudwick, Assistant Director; Victoria Miller, Analyst-in-Charge; and Chloe Brown. Other contributors include: David Alexander, Michael LaForge, Tom Lombardi, Anjalique Lawrence, Gary Mountjoy, Sabine Paul, and Katherine Trimble. Key contributors for the previous work that this testimony is based on are listed within each individual product.

Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs. GAO-12-833. Washington, D.C.: September 18, 2012.
Department of Homeland Security: Oversight and Coordination of Research and Development Should Be Strengthened. GAO-12-837. Washington, D.C.: September 12, 2012.
Homeland Security: DHS Has Enhanced Procurement Oversight Efforts, but Needs to Update Guidance. GAO-12-947. Washington, D.C.: September 10, 2012.
Information Technology: DHS Needs to Further Define and Implement Its New Governance Process. GAO-12-818. Washington, D.C.: July 25, 2012.
Federal Emergency Management Agency: Workforce Planning and Training Could Be Enhanced by Incorporating Strategic Management Principles. GAO-12-487. Washington, D.C.: April 26, 2012.
Department of Homeland Security: Preliminary Observations on DHS's Efforts to Improve Employee Morale. GAO-12-509T. Washington, D.C.: March 22, 2012.
Department of Homeland Security: Continued Progress Made Improving and Integrating Management Areas, but More Work Remains. GAO-12-365T. Washington, D.C.: March 1, 2012.
Information Technology: Departments of Defense and Energy Need to Address Potentially Duplicative Investments. GAO-12-241. Washington, D.C.: February 17, 2012.
DHS Human Capital: Senior Leadership Vacancy Rates Generally Declined, but Components' Rates Varied. GAO-12-264. Washington, D.C.: February 10, 2012.
Department of Homeland Security: Additional Actions Needed to Strengthen Strategic Planning and Management Functions. GAO-12-382T. Washington, D.C.: February 3, 2012.
Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
Information Security: Federal Agencies Have Taken Steps to Secure Wireless Networks, but Further Actions Can Mitigate Risk. GAO-11-43. Washington, D.C.: November 30, 2010.
Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010.
Information Security: Agencies Need to Implement Federal Desktop Core Configuration Requirements. GAO-10-202. Washington, D.C.: March 12, 2010.
Financial Management Systems: DHS Faces Challenges to Successfully Consolidating Its Existing Disparate Systems. GAO-10-76. Washington, D.C.: December 4, 2009.
Department of Homeland Security: Actions Taken Toward Management Integration, but a Comprehensive Strategy Is Still Needed. GAO-10-131. Washington, D.C.: November 20, 2009.
Homeland Security: Despite Progress, DHS Continues to Be Challenged in Managing Its Multi-Billion Dollar Annual Investment in Large-Scale Information Technology Systems. GAO-09-1002T. Washington, D.C.: September 15, 2009.
Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: November 18, 2008.
Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions. GAO-08-263. Washington, D.C.: April 22, 2008.
Homeland Security: Departmentwide Integrated Financial Management Systems Remain a Challenge. GAO-07-536.
Washington, D.C.: June 21, 2007.
Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity, version 1.1. GAO-04-394G. Washington, D.C.: March 2004.
High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003.
Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the Department of Homeland Security's (DHS) efforts to strengthen and integrate its management functions. DHS now has more than 200,000 employees and an annual budget of almost $60 billion, and its transformation is critical to achieving its homeland security and other missions. Since 2003, GAO has designated the implementation and transformation of DHS as high risk because DHS had to combine 22 agencies--several with major management challenges--into one department, and failure to effectively address DHS's management and mission risks could have serious consequences for our national and economic security. This high-risk area includes challenges in strengthening DHS's management functions--financial management, acquisition management, human capital, and information technology (IT)--the effect of those challenges on DHS's mission implementation, and challenges in integrating management functions within and across the department and its components. In November 2000, we published our criteria for removing areas from the high-risk list.
Specifically, agencies must have (1) a demonstrated strong commitment and top leadership support to address the risks; (2) the capacity (that is, the people and other resources) to resolve the risks; (3) a corrective action plan that identifies the root causes, identifies effective solutions, and provides for substantially completing corrective measures in the near term, including but not limited to steps necessary to implement solutions we recommended; (4) a program instituted to monitor and independently validate the effectiveness and sustainability of corrective measures; and (5) the ability to demonstrate progress in implementing corrective measures. On the basis of our prior work, in a September 2010 letter to DHS, we identified, and DHS agreed to achieve, 31 actions and outcomes that are critical to addressing the challenges within the department's management areas and in integrating those functions across the department to address the high-risk designation. These key actions and outcomes include, among others, obtaining and then sustaining unqualified audit opinions for at least 2 consecutive years on the departmentwide financial statements; validating required acquisition documents in accordance with a department-approved, knowledge-based acquisition process; and demonstrating measurable progress in implementing its IT human capital plan and accomplishing defined outcomes. In January 2011, DHS issued its initial Integrated Strategy for High Risk Management, which included key management initiatives (e.g., financial management controls, IT program governance, and procurement staffing model) to address challenges and the outcomes we identified for each management area. DHS provided updates of its progress in implementing these initiatives in later versions of the strategy--June 2011, December 2011, and June 2012. 
Achieving and sustaining progress in these management areas would demonstrate the department's ability and ongoing commitment to addressing our five criteria for removing issues from the high-risk list. As requested, this testimony will discuss our observations, based on prior and ongoing work, on DHS's progress in achieving outcomes critical to addressing its high-risk designation for the implementation and transformation of the department. Since we designated the implementation and transformation of DHS as high risk in 2003, DHS has made progress addressing management challenges and senior department officials have demonstrated commitment and top leadership support for addressing the department's management challenges. However, the department has significant work ahead to achieve positive outcomes in resolving high-risk issues. For example, DHS faces challenges in modernizing its financial systems, implementing acquisition management controls, and improving employee satisfaction survey results, among other things. As DHS continues to mature as an organization, it will be important for the department to continue to strengthen its management functions, since the effectiveness of these functions affects its ability to fulfill its homeland security and other missions.
We provided a draft of this report to State, DOD, and USAID for review. None of the agencies provided formal comments. However, State provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of State and Defense, and to the USAID Administrator. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8980 or courtsm@gao.gov, or the individual(s) listed at the end of each enclosure. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VII.

Background

The Department of State's (State) Bureau of Diplomatic Security (Diplomatic Security) manages much of the security-related funding within State's Diplomatic and Consular Programs budget, the largest category of which comes from the Worldwide Security Protection account. Salaries for Diplomatic Security personnel are managed separately by State's Bureau of Budget and Planning.

Issue

Total funding for Diplomatic Security operations was almost $4.8 billion in fiscal year 2016. Total funding for Diplomatic Security includes its bureau managed funds as well as other funding—such as personnel salaries—managed by other bureaus and offices but necessary for Diplomatic Security operations. Diplomatic Security's bureau managed funds ($3.3 billion in fiscal year 2016) are composed of funds received through annual appropriations, fees collected through visa processing, reimbursements from other agencies, and appropriated funds carried over from prior fiscal years. These funds support regular, ongoing operations and Overseas Contingency Operations (OCO) for temporary, war-related operations.
State directed an additional $1.5 billion to Diplomatic Security and its employees in 2016, through other bureaus and offices.

Key Findings

In fiscal year 2016, Diplomatic Security's bureau managed funds totaled approximately $3.3 billion. Bureau managed funds have increased in response to multiple security incidents since the 1998 bombings of the U.S. embassies in Kenya and Tanzania. (Fig. 1 shows that Diplomatic Security's bureau managed funds grew substantially from 1998 through 2016 in both real and nominal dollars.)

Figure 1: Historical Trend in Department of State Bureau of Diplomatic Security Managed Funds, 1998-2016

From 1995 to 1998, Diplomatic Security's bureau managed funds averaged about $173 million annually. After the 1998 bombings in Africa, bureau managed funds grew to $784 million in 1999 as Congress provided Diplomatic Security with emergency supplemental funding to address security vulnerabilities at posts worldwide. By fiscal year 2009, bureau managed funds had grown to about $2.0 billion, largely due to new security procedures put in place after 1998 as well as the need to provide security for diplomats in the conflict zones of Iraq and Afghanistan. Bureau managed funds increased in 2010 to $2.7 billion and in 2012 to $3.3 billion, as the U.S. military began to withdraw from Iraq and Diplomatic Security assumed many of the protective and security functions previously provided by the U.S. military in that country. Congress appropriated less funding in 2013 to the Worldwide Security Protection account because, according to Diplomatic Security, appropriated funds were carried over from prior years. The subsequent increases in funding for that account in 2014 through 2016 followed the 2012 attack in Benghazi, Libya. Since 2012, OCO supplemental funding has made up 34-62 percent of Diplomatic Security's bureau managed funds.
For example, in fiscal year 2016, OCO funding totaled over $2.0 billion—or 62 percent—of bureau managed funds for that year. According to a bureau official, State's OCO funding was intended to be temporary funding to support operations in Iraq, Afghanistan, and Pakistan but continues to exist, given the security situation in those countries, and has expanded beyond those three countries. Some State officials are concerned that if OCO is discontinued, State would not have sufficient funding to provide necessary security services. For fiscal year 2018, the administration is requesting less OCO funding than the final appropriated amount for fiscal year 2017. Funding for Diplomatic Security operations totaled almost $4.8 billion in fiscal year 2016. This amount includes both bureau managed funds—which were almost $3.3 billion—and other funding directed to Diplomatic Security and its employees but managed by other bureaus and offices within State (personnel salaries, Antiterrorism Assistance funding, guard services funding, and fraud prevention and detection fees), which totaled almost $1.5 billion. For example, State's Bureau of Budget and Planning manages the salaries of Diplomatic Security personnel. Funding for Diplomatic Security personnel increased from $12 million in 2000 to $419 million in 2016. In addition, State allocates funding to its Bureau of Overseas Buildings Operations for security construction at overseas facilities.

Point of Contact

For more information, contact: Michael J. Courts, (202) 512-8980, courtsm@gao.gov

1. What impact has Diplomatic Security's increased funding had on its ability to carry out its mission? Are current funding levels sufficient?
2. What are State's plans for utilizing future Diplomatic Security funding? Will there be additional carryover funds in future years, as in 2013?
Background

The Department of State's (State) Bureau of Diplomatic Security (Diplomatic Security), which is responsible for the protection of State's people, property, and information, relies on a broad workforce to carry out its mission and activities. Its workforce includes direct-hire personnel, military support, and contractors. Posts also engage locally employed staff.

Issue

Over the last 2 decades, Diplomatic Security's mission and activities have expanded in response to a number of security incidents, which has led to a dramatic increase in the size of its workforce. The growth in its responsibilities overseas began with the 1998 attacks in Africa and continued with the U.S. policy of maintaining a diplomatic presence in war zones such as Afghanistan and Iraq and other increasingly hostile environments. In addition, the September 11, 2001, terrorist attacks underscored the importance of enhancing domestic security, including Diplomatic Security's investigative capacity, technical programs, and counterintelligence work. This sustained and at times rapid growth has taxed Diplomatic Security's ability to staff positions with the appropriate level of experience and skills.

Key Findings

Diplomatic Security's workforce—numbering over 51,000 direct-hire, other U.S. government, and contract personnel as of May 2017—has experienced continued growth in almost all staffing categories. We previously reported in 2009 that Diplomatic Security's direct-hire workforce doubled from 1998 to 2008. Since then, it has increased by another 36 percent, to 3,488 personnel in 2017. If State's current hiring freeze is lifted, Diplomatic Security officials told us that they plan to hire an additional 384 special agents in 2017 through 2018. The number of other U.S. government personnel reporting to Diplomatic Security increased by 60 percent, driven largely by the expansion of the Marine Security Guard program after the 2012 Benghazi attacks.
Diplomatic Security increased its contracted and support staff by 22 percent. (Table 1 provides information on the increases in Diplomatic Security staff from 2008 through 2017; see app. IV for further staffing details.) In response to a Benghazi Accountability Review Board recommendation, State established a panel to reexamine Diplomatic Security's organization and management. In 2013, the panel reported that, in part, Diplomatic Security had become more focused on its law enforcement and personnel protection functions. This was not surprising, according to the panel, given that Diplomatic Security provided security in two war zones and numerous other high-threat posts. Simultaneously, Diplomatic Security had experienced an increased demand on its domestic criminal investigative and dignitary protection programs. Nonetheless, the panel noted that Diplomatic Security's primary mission is "to provide a secure environment for the conduct of U.S. foreign policy" and stated that Diplomatic Security should reflect this priority in its allocation of manpower and other resources. For example, the panel recommended that Diplomatic Security review personnel allocations both domestically and abroad. As of June 2017, Diplomatic Security had completed an initial classified review of its staffing and begun a follow-on study to (1) determine how Diplomatic Security has distributed its staff relative to its priorities; and (2) develop a methodology to assess the quantity, mix, and distribution of Diplomatic Security staff worldwide. According to Diplomatic Security, the second study is expected to result in two tools that Diplomatic Security can use for evaluating its staffing levels: one for domestic staffing and one for overseas staffing. In fiscal year 2010, we reported that 34 percent of Diplomatic Security's positions were filled with officers below the position's grade.
In 2013, the organization and management panel noted that many Diplomatic Security regional director positions were filled by officers holding ranks below the levels established for that position (not including agents posted to Baghdad, Iraq). The panel recommended that Diplomatic Security prioritize filling these positions with at-grade personnel. While State concurred, as of June 2017, it had not identified any new, concrete actions for implementing this recommendation. Instead, State noted that it "will continue to make every effort to place at-grade, experienced, and highly qualified individuals into these positions." As of December 2016, Diplomatic Security had 422 staffed language-designated positions (LDP), of which 304—or 72 percent—were filled with special agents who met the language requirement. This is an improvement since 2009, when we reported that only 47 percent of Diplomatic Security special agents at LDPs met the requirement. Officials cited two reasons for this increase in compliance: (1) greater agency emphasis on the need for agents to have language skills following the 2012 Benghazi attacks and (2) increased emphasis on speaking rather than reading skills. As a result, Diplomatic Security has an increased number of "asymmetrical" language requirements, where the speaking-level requirement is higher than the reading-level requirement. Diplomatic Security also adopted the "Alert" language training program, which provides special agents with speaking skills relevant to their technical work, particularly for languages spoken at certain high-threat posts. State officials told us that agents can become proficient in 10 weeks using this program, versus the 30 weeks typically required for traditional methods.

Point of Contact

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

1. What steps is Diplomatic Security taking to ensure that it has the appropriate quantity, mix, and distribution of staff to address its overseas and domestic responsibilities?
2.
What steps has Diplomatic Security taken to ensure that its positions are filled with appropriately experienced staff?
3. What is State doing to further close the gaps in Diplomatic Security's LDPs?

Background

Responsibility for the security of the Department of State's (State) diplomatic facilities falls principally on State's Bureaus of Overseas Buildings Operations (OBO) and Diplomatic Security (Diplomatic Security). OBO is responsible for the design, construction, acquisition, maintenance, and sale of U.S. diplomatic property abroad. Diplomatic Security is responsible for establishing security and protective procedures at posts and developing and implementing the physical security programs. Maintaining the physical security of U.S. diplomatic facilities is a critical component of ensuring the safety of U.S. personnel, property, and information. According to OBO, State maintains approximately 1,600 work facilities at 275 diplomatic posts worldwide under chief-of-mission authority. In addition, State has a limited number of temporary work facilities, mostly in dangerous locations such as Afghanistan. All facilities at a post are expected to meet physical security standards set by the Overseas Security Policy Board. In fiscal years 2009 through 2016, State allocated about $11.1 billion to the construction of new, secure facilities and physical security upgrades to existing and acquired facilities. While Diplomatic Security has a few small programs to provide physical security upgrades to facilities abroad, OBO managed most of the allocated funds.

Key Findings

Following the 1998 attacks on U.S. embassies in Kenya and Tanzania, State determined that diplomatic facilities in over 180 posts—more than half of U.S. overseas missions—needed to be replaced to meet security standards. In 1999, State began a new embassy construction program, administered by OBO, to replace these posts.
To expedite the delivery of new, secure compounds, OBO adopted a standard embassy design (SED) approach. However, some stakeholders raised concerns about the aesthetics, quality, location, and functionality of those facilities. For example, the 10-acre lot specified by the SED sometimes required situating an embassy far from urban centers, where foreign government offices and other embassies are located. In response to these concerns, State established the “Excellence” approach in 2011. (See fig. 3 for a picture of an embassy built under SED and a rendering of a consulate to be delivered under the Excellence approach.) OBO’s changes under the Excellence approach focus on producing more innovative, functional, and sustainable embassies that are just as secure as those built using the SED. However, some stakeholders have raised concerns that the new approach may result in embassies that take longer and cost more to build. This would delay getting U.S. personnel into facilities that meet current security standards. In 2017, we reported that, while the Excellence approach may result in improvements, it carries increased risk to cost and schedule—including up to 24 additional months to develop designs. While OBO is attempting to manage this risk, it does not have performance measures specific to the Excellence goals and, therefore, cannot fully assess the merits of the new approach. We made four recommendations to strengthen performance measures and reporting, monitoring mechanisms, and data systems. While State concurred with these recommendations, they remain open. When facilities do not or cannot meet certain security standards, State works to mitigate identified vulnerabilities through various construction programs and its waivers and exceptions process. However, in 2014, we reported that the waivers and exceptions process had weaknesses. Of the 43 facilities we reviewed, none met all applicable security standards and therefore required waivers, exceptions, or both. 
However, we found that neither posts nor headquarters systematically tracked the waivers and exceptions and that State had no process to reevaluate waivers and exceptions when the threat or risk changes. Furthermore, posts did not always request required waivers and exceptions or consistently take required mitigation steps. We concluded that with such deficiencies, State cannot be assured it has all the information needed to mitigate facility vulnerabilities. We made 13 recommendations for State to address gaps in its security-related activities, standards, and policies. State generally agreed with our recommendations and, as of June 2017, had addressed five of them. Future State construction in dangerous posts—such as Kabul, Afghanistan—will likely entail the continued use of temporary office or residential facilities, especially in conflict areas. However, in 2015, we found that in Kabul—without security standards or other guidance to guide temporary facility construction in conflict environments—State inconsistently applied alternative security measures that resulted in insufficient and different levels of security for temporary offices and housing as well as increased costs and extended schedules. We concluded that without temporary facility security standards or guidance, future construction in conflict environments could encounter similar problems. We recommended that State consider establishing security standards or guidance for temporary facilities in conflict zones. State partially concurred and subsequently reported that it was developing additional guidance relating to physical security systems such as Hardened Alternative Trailer Systems, surface-mounted antiram barriers, and anticlimb wall toppings. As of May 2017, State was continuing to address this recommendation.

Point of Contact

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

1. What is State doing to manage risks to the costs and schedules associated with the Excellence approach to building new embassies?
2.
To what extent do State's facilities have or require waivers and exceptions to security standards? What steps has State taken to address weaknesses in its waivers and exceptions program?
3. How extensively does State rely on temporary facilities that have been in place for extended periods of time? What progress has State made in creating additional guidance relating to temporary facilities?

Background

The Secretary of State, in consultation with the heads of other federal agencies, is responsible for protecting U.S. government personnel on official duty abroad, along with their accompanying dependents. At overseas posts, the Department of State's (State) Bureaus of Diplomatic Security (Diplomatic Security)—represented by a Regional Security Officer (RSO)—and Overseas Buildings Operations share responsibility for the security of residences and other soft targets overseas. More than 25,000 U.S. diplomatic personnel live overseas with their families in an environment that presents myriad security threats and challenges. While State has taken measures to enhance security at its embassies and consulates since the 1998 East Africa embassy bombings, these same actions have given rise to concerns that would-be attackers may shift their focus to what they perceive as more accessible targets, such as diplomatic residences, schools, and other places frequented by U.S. personnel and their families. For example, a 2014 posting on a jihadist website called for attacks on American and other international schools in the Middle East. (See fig. 4 for examples of diplomatic residences.)

Key Findings

State acquires housing for overseas personnel by leasing, purchasing, or constructing various types of residences, each of which is subject to a set of security standards. State assesses risks to residences using a range of activities—including a periodic security survey to identify and address vulnerabilities.
In fiscal years 2010 through 2016, State allocated about $175 million for residential security upgrades. However, in 2014, we found that State did not complete all residential surveys as required, thereby limiting its ability to address vulnerabilities. In addition, we reviewed 68 overseas diplomatic residences and found that 38 did not meet all of the applicable standards, potentially placing their occupants at risk. In instances when a residence does not and cannot meet applicable security standards, posts are required to either seek other residences or request exceptions, which identify steps to mitigate vulnerabilities. However, we found that Diplomatic Security had an exception on file for only 1 of the 38 residences that did not meet all standards. We concluded that without documenting the necessary exceptions, State lacked a complete picture of security vulnerabilities at residences and information that would enable it to make better risk management decisions. In addition, more rigorous security standards that went into effect in July 2014 would likely increase posts’ need for exceptions and lead to costs for upgrades. We made four recommendations regarding the management of risks to residences. State concurred with all four and, as of May 2017, had addressed one. (Fig. 5 portrays key security standards at a notional residence.) State has taken a variety of actions to manage risks to schools and other soft targets. These actions fall into three main categories: (1) funding security upgrades at K-12 schools with enrolled U.S. government dependents and off-compound employee association facilities, (2) sharing threat information and providing advice for mitigating threats at schools and other soft targets, and (3) conducting security surveys to identify and manage risks to schools and other soft targets. 
However, RSOs at most of the posts we reviewed in 2015 were unaware of some guidance and tools for securing these facilities—such as a booklet and compact disc entitled “Security Guide for International Schools” aimed at assisting international schools in designing and implementing a security program. As a result, we concluded that RSOs may not have been taking full advantage of State’s programs and resources for managing risks at soft targets. We recommended that State take steps to ensure that RSOs are aware of existing guidance and tools regarding the security of soft targets. In response, State issued a cable to all diplomatic and consular posts updating policies and procedures for State's Soft Targets Security Upgrade Program for overseas schools and department-chartered employee associations, thereby distributing important information to security personnel who were previously unaware of available guidance and information.

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

standards at overseas residences? Have the standards implemented in July 2014 affected the number of waivers and exceptions requested? 2. What steps has State taken to ensure that posts conduct residential physical security surveys and request security exceptions, when needed, in a timely manner? 3. To what extent has State adapted its Soft Targets Security Upgrade Program in light of recent public terrorist attacks?

Background

To help safeguard and prepare U.S. personnel to live and work in some of the most dangerous overseas locations, the Department of State’s (State) Bureau of Diplomatic Security (Diplomatic Security) provides training on personal security skills necessary for recognizing, avoiding, and responding to potential terrorism and other threat situations. Diplomatic Security also provides refresher briefings on certain topics, as well as cyber and technical security training.
To consolidate the hands-on training that Diplomatic Security provides, State is constructing a training center in Fort Pickett, Virginia, which it expects will be completed in 2019.

Issue

State has a robust security awareness training program provided by Diplomatic Security. For example, State requires specified U.S. personnel traveling for less than 45 days in a calendar year to certain posts to complete its online High Threat Security Overseas Seminar (HTSOS). If specified U.S. personnel are traveling for 45 days or more in a calendar year, State requires that they complete the 5-day Foreign Affairs Counter Threat (FACT) training before departure. Diplomatic Security designed the FACT course to address the dangers that U.S. personnel might face in a number of high-threat, high-risk locations overseas. The course provides hands-on instruction in topics such as detection of surveillance, familiarization with firearms, and awareness of improvised explosive devices (see fig. 6 for examples of other FACT training topics).

Key Findings

State’s oversight of compliance with the FACT training requirement has weaknesses that limit its ability to ensure that U.S. personnel are adequately prepared for work in high-threat environments. We reported in 2011 and 2014 that State did not have the ability to systematically identify which people required to take the course had not taken it. We made several recommendations to State to improve its management oversight of compliance with mandatory FACT training. These included four recommendations for State to update its policy guidance to reflect changes made to the FACT training requirement in June 2013 (State had doubled the number of countries for which it required FACT training) and to provide clear information on which personnel are required to take FACT training. State concurred with the recommendations and took steps to address them.
However, our recommendation that State monitor or evaluate overall levels of compliance with the FACT training requirement remains open. In May 2015, State officials said they were developing a plan to utilize various electronic systems to monitor overall levels of compliance with the FACT training requirement. As of June 2017, State reported that it continues to work on this issue. This lack of oversight is particularly concerning given the significant increase in the number of students taking Diplomatic Security-provided FACT training, from 912 in fiscal year 2006 to 4,482 in fiscal year 2016 (see fig. 7). In addition, in July 2014, State expanded the FACT training requirement to apply to all posts (not just those in high-threat, high-risk locations) by 2019. The gaps we have previously identified in State oversight may increase the risk that personnel do not complete FACT training, potentially placing their own and others’ safety in jeopardy.

We reported in 2016 that weaknesses exist in State’s guidance on and management oversight of refresher briefings related to transportation security, potentially putting U.S. personnel overseas at greater risk. We found that personnel had difficulty remembering key details covered in new arrival briefings or described the one-time briefings as inadequate. We found that State lacked a clear requirement for Diplomatic Security to provide and track compliance with periodic refresher briefings that could help reinforce information covered in new arrival briefings. In part, this may result from State guidance lacking clarity and comprehensiveness on this matter.
Specifically, its guidance states that regional security officers must conduct refresher briefings “periodically” at “certain posts where personnel live under hostile intelligence or terrorist threats for long periods” but does not define “periodically” or “long periods.” Further, according to Diplomatic Security officials, there is no requirement for affirming that post personnel have received refresher briefings. We recommended that State clarify existing guidance on refresher briefings, such as by delineating how often briefings should be provided at posts facing different types and levels of threats, which personnel should receive them, and how their completion should be documented. Diplomatic Security headquarters officials stated that most violations of post travel policies are due to personnel forgetting the information conveyed in new arrival briefings. Without effective reinforcement of the information that is covered in new arrival briefings, State cannot ensure that U.S. personnel and their families overseas have the knowledge they need to protect themselves from transportation-related security risks.

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

compliance with all applicable security training requirements, including mandatory HTSOS and FACT training? 2. Does State have the capacity to train the number of U.S. personnel required to take Diplomatic Security-provided FACT training? 3. What steps is State taking to reinforce information covered in new arrival briefings with U.S. personnel and their families?

Background

The Department of State’s (State) Bureau of Diplomatic Security (Diplomatic Security) is responsible for ensuring that overseas post personnel and their family members are prepared for crisis situations and evacuations.

Issue

From October 2012 to September 2016, in response to various threats, such as terrorism, civil unrest, and natural disasters, State evacuated staff and family members from 23 overseas posts.
During this period, several posts—such as Embassy Bujumbura in Burundi and Consulate Adana in Turkey—evacuated post staff or family members on more than one occasion. Overseas posts undergoing evacuations generally experience authorized or ordered departure of specific post staff or family members before operations are suspended. To help mitigate risks, State requires posts to create Emergency Action Plans (EAP); practice security drills; and, if an evacuation is needed, review the event in order to learn from the experience.

Key Findings

State requires every post to update its EAP on an annual basis. EAPs contain information to assist overseas posts in responding to emergencies, such as checklists of response procedures and decision points to help determine when to evacuate post staff or family members. In 2017, we found that, from fiscal years 2013 through 2016, a quarter of overseas posts, on average, were late completing required annual EAP updates. While the on-time completion rate improved from 46 percent of posts in fiscal year 2013 to 92 percent in fiscal year 2016, our review of a nongeneralizable, judgmental sample of EAPs from 20 posts that had been approved by Diplomatic Security showed that only 2 of the 20 had updated all key EAP sections. We also found that EAPs are viewed as lengthy and cumbersome documents that are not readily usable in emergency situations, as required by State policy. We recommended that State take several actions to improve posts’ EAPs, such as developing a procedure to ensure that overseas posts complete comprehensive, annual EAP updates on time; developing a monitoring and tracking process to ensure EAP updates are reviewed; and making the EAP more readily usable during emergency situations. State agreed with all of our recommendations and reported that it has started to address them.
For example, State is developing a redesigned EAP that will minimize redundancy, group content according to posts’ planning and response needs, and make the EAP better organized and more user-friendly.

Posts are required to conduct nine types of drills each fiscal year to prepare for crises and evacuations. In 2017, we found that, on average for fiscal years 2013 through 2016, posts worldwide reported completing 52 percent of required annual drills; posts rated high or critical for political violence or terrorism reported completing 44 percent of these drills. Overall, less than 4 percent of posts reported completing all required drills during fiscal years 2013 through 2016. As shown in figure 8 below, 78 percent of posts reported completing duck-and-cover drills, but only 36 percent of posts reported completing evacuation training drills. We recommended that State improve the completion and reporting of required drills. State concurred and is updating the system it uses to report drills.

After an authorized or ordered departure has terminated, State’s Foreign Affairs Handbook requires post staff to transmit an after-action report listing any lessons learned from the experience to State headquarters. In 2017, we found that, during fiscal years 2013 through 2016, there were 31 evacuations from overseas posts; however, according to State officials, none of the posts submitted the required lessons learned report. These reports could have been used to modify the post’s guidance on how best to respond to an emergency situation. According to State officials, these reports also could help staff at other posts learn about the challenges faced by the evacuated posts, identify relevant best practices, and prepare for potential future evacuations. We recommended that State take steps to improve the completion and submission of required lessons learned reports following evacuations from overseas posts. State concurred and has developed tools to improve the process.

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

annually update their EAPs and (2) Diplomatic Security comprehensively reviews key EAP sections? 2. What efforts is Diplomatic Security making to ensure that posts complete and report completion of required crisis and evacuation drills within required time frames? 3. What steps is State taking to ensure that overseas posts complete required lessons learned reports following evacuations and submit those reports to State headquarters for analysis?

Background

The Department of Defense (DOD) has long provided military protection and support for the security and safety of U.S. diplomatic missions and personnel during normal operations and emergencies. This support is particularly critical in times of crisis, such as when DOD provides security reinforcements to facilities under threat or assists with evacuations. Several entities within DOD and the Department of State (State) prepare for and coordinate these efforts. Memoranda of Agreement between State and DOD establish frameworks for cooperation on scenarios requiring security augmentation, crisis response, and evacuation for U.S. diplomatic and consular missions overseas. The September 2012 attacks in Benghazi, Libya, and the related wave of protests and threats to U.S. missions in Africa and the Middle East prompted a reexamination of how State and DOD collaborate to provide emergency military protection and other support to overseas posts. Similar threats and attacks requiring additional DOD support could occur at U.S. diplomatic facilities spread across a large geographic area. Given the chaos and complexities inherent in such acute crises, and the possibility that unrest could affect multiple U.S. facilities at one time, the need for DOD support will likely continue. From 2013 to 2016, 24 overseas posts experienced some level of increased threat resulting in the evacuation of some or all U.S. personnel.
While not all periods of increased threat warrant additional DOD assistance, many do. For instance, in 2014 alone, the U.S. military provided support for embassy reinforcement, military-assisted departures, or evacuations, including in South Sudan, Libya, and Iraq. (Fig. 9 shows one of the DOD units and aircraft that may be used in evacuations or other emergencies.)

Key Findings

As part of the reorganization following the 2012 attacks, DOD—in coordination with State—increased the military resources provided to overseas posts. According to State and DOD officials, this represented a whole-of-government approach to countering threats to U.S. overseas personnel and facilities. Drawing from existing U.S. Marine Corps and U.S. Army units, DOD created three dedicated military forces to respond to crises across Africa and the Middle East: (1) a Special Purpose Marine Air-Ground Task Force for Crisis Response (SPMAGTF-CR) assigned to DOD’s U.S. Central Command, which supports U.S. diplomatic missions in the Middle East; (2) a SPMAGTF-CR assigned to U.S. Africa Command, which supports U.S. missions in North and West Africa; and (3) the East Africa Response Force, a U.S. Army force that supports U.S. diplomatic missions in East Africa. These forces can provide a variety of functions, from security reinforcement during increased threats, to military-assisted departures and evacuation support. According to DOD officials, in 2014, U.S. Africa Command experienced some logistical challenges associated with covering such a large geographic area, with particular concern should multiple crises occur simultaneously.

In 2014, State and DOD announced several changes to the Marine Security Guard (MSG) program, which deploys units of marines to provide certain types of security to U.S. overseas missions.
Specifically, in coordination with State’s implementation of the Benghazi Accountability Review Board recommendations, DOD has since increased the size of MSG detachments at all posts, with further increases at high-threat posts; accelerated the deployment of additional detachments to other U.S. diplomatic facilities; and created a Marine Security Guard Security Augmentation Unit based in Quantico, Virginia, to provide additional support on short notice. State and DOD officials reported in June 2017 that they have experienced some challenges associated with deploying the increased MSG units, including obtaining sufficient numbers of marines to fill the desired number of units and logistical and other support at some posts. The agencies continue to work to add certain nonlethal weapons to the MSG equipment set. In 2015, we reported on State and DOD’s post-Benghazi approach to provide additional military support to U.S. overseas posts. While State and DOD had updated some guidance to reflect the new approach, we recommended that the departments more clearly define the roles, responsibilities, and circumstances under which DOD support would be provided and that they update related interagency and departmental guidance. In response to our recommendations, State and DOD have taken steps to update such interagency guidance. These steps included interdepartmental exercises and other collaboration, which resulted in a joint concept paper and a subsequent December 2016 State-DOD memorandum of agreement outlining common terms, roles, responsibilities, and scenarios under which DOD assistance may be requested, among other things. State and DOD officials have indicated that each department will produce further department-specific guidance in the form of a forthcoming diplomatic cable; a DOD update to a 2013 military order; and a new, related DOD instruction. 
DOD officials expect to issue the updated order by the end of fiscal year 2017 and to complete the instruction in fiscal year 2018.

John H. Pendleton, (202) 512-3489, pendletonj@gao.gov

to ensure support to U.S. missions in crisis situations? 2. What is the progress of increasing MSG detachments at identified diplomatic facilities? What challenges exist to providing the personnel or support needed for these additional units? 3. What steps have been taken to ensure that recent State and DOD policy and procedure updates are institutionalized and readily available in future emergencies?

Background

The Department of State’s (State) Bureau of Diplomatic Security (Diplomatic Security) is responsible for disseminating threat information to posts. At posts, the Emergency Action Committee (EAC), which includes the Regional Security Officer (RSO) and Consular Officer, among other subject matter experts, disseminates threat information to post personnel, as appropriate. In addition, consular officers are responsible for disseminating information to the nonofficial U.S. community—U.S. citizens living in or traveling through the affected area.

Issue

Diplomatic Security and overseas posts have processes for communicating threat information to post personnel (U.S. employees and locally employed staff) as well as U.S. citizens in country. However, these populations do not always receive important threat information in a timely manner. Diplomatic Security’s Office of Intelligence and Threat Analysis, based at State headquarters, analyzes threat information from multiple sources, including the U.S. Intelligence Community, and shares the results of its analysis with posts’ RSOs via cables and other reports. Before analyzing the information, Diplomatic Security sends an initial notification to posts, according to bureau officials. In addition, posts collect, analyze, and report threat information to headquarters for further distribution. At posts, RSOs, at the direction of the EAC, may adjust the post’s security posture and disseminate threat information to post personnel. In addition, if State shares information with the official U.S. community, its policy is to make the same or similar information available to the nonofficial U.S. community if the threat applies to both. (See fig. 10 for a schematic of State’s threat information dissemination process.)

Key Findings

State has taken steps to improve RSOs’ reporting of terrorism-related threat information to headquarters. In June 2015, we found that RSOs at some posts designated critical for terrorism were not complying fully with directions from the Secretary of State to use terrorist reporting cables to report all terrorism-related incidents or threats to ensure proper handling and dissemination of the information. For example, we found that in some cases, terrorism-related incidents were not reported in required terrorist reporting cables. We concluded that without comprehensive and accurate reporting, State may lack assurance that it received complete information about terrorist threats that could help prevent and mitigate such threats. We recommended that Diplomatic Security take steps to remind RSOs and posts of the critical importance of using the proper type of cable to report all terrorism-related threats. In December 2015, State sent guidance to all posts specifying that terrorism-related threats must be reported through terrorist reporting cables to ensure appropriate dissemination of the information. Further, in January 2017, State provided reporting instructions to RSOs to help ensure the timely and accurate reporting of all security-related information through the correct reporting channels.

Diplomatic Security uses various methods to communicate threat information to overseas post personnel—both U.S. and locally employed staff.
However, in our 2016 report on transportation security, we reported that post personnel do not always receive threat information in time to avoid potential threats. We found that several factors can lead to untimely receipt of transportation-related threat information, and we recommended that State address these factors. First, some RSOs reported that they send security notices exclusively to state.gov e-mail addresses; however, not all post personnel have state.gov e-mail addresses. In one case, this resulted in post personnel traveling through a prohibited area and an embassy vehicle being attacked with rocks and seriously damaged. Second, limited guidance existed for RSOs on how to promote timely communication of threat information. Third, RSOs and other staff at some posts mistakenly believed that RSOs cannot share threat information with the official U.S. community until consular officials receive approval from State to share the same information with the nonofficial U.S. community—a clearance process that can take as long as 8 hours. State reported that it is reviewing the option to forward e-mails outside its system. It also reported that it is developing a two-way emergency notification system that would provide a redundant method for distributing messages during crises. In addition, State updated its policy manual to clarify that RSOs’ sharing of threat information should not be delayed by the clearance process, according to Diplomatic Security officials.

To ensure that overseas posts can disseminate information to U.S. citizens in country in the event of an emergency, disaster, or threat, State requires posts to annually conduct a drill of the consular warden system. The consular warden system is a pyramidal contact system designed to reach the U.S. citizen population. However, we found in 2017 that, on average between fiscal years 2013 and 2016, 78 percent of overseas posts did not report the completion of required consular warden system drills.
We concluded that this gap in State’s crisis and evacuation preparedness creates a risk that U.S. citizens in country may be insufficiently warned about emergency situations. We recommended that State take steps to improve the completion and reporting of required drills, and State concurred, noting it is forming a working group to review its policies.

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

nonofficial U.S. community been in past emergencies? 2. What is the status of State’s plan to use new technology to disseminate information to U.S. personnel and U.S. citizens overseas? 3. What steps has State taken to ensure that posts complete the annual tests of the consular warden system?

Background

The Department of State’s (State) Counterintelligence Division—under the Office of Investigations and Counterintelligence in the Bureau of Diplomatic Security (Diplomatic Security)—is responsible for overseeing State’s counterintelligence efforts, including assisting Regional Security Officers (RSO) with implementation at overseas posts. Foreign intelligence entities from host nations and third parties are motivated to collect information on a variety of sensitive topics of national importance, including intelligence, defense, and economic information. These entities may attempt to collect information through the use of sophisticated overt, covert, and clandestine means, including human intelligence collection. Because State operates diplomatic posts in many countries, State and other U.S. agency employees at these posts—and their family members—can be targeted by host governments and other entities. National counterintelligence guidance requires that State and other executive agencies implement programs to counter the intelligence threat to U.S. national security and interests by protecting personnel and information.

Key Findings

State has established several measures to counter the human intelligence threat at overseas posts.
Those measures include (1) requiring all State and other agency personnel serving at these posts to report contacts with foreign nationals, particularly those from countries with critical human intelligence posts; (2) prescreening State personnel assigned to certain posts against 13 criteria designed to identify vulnerabilities and directing other agencies to prescreen their personnel; and (3) briefing personnel about what to expect when working and living in potentially hostile intelligence environments. While State prepares personnel at all posts to be aware of human intelligence threats, it uses enhanced counterintelligence strategies for personnel assigned to posts designated as “critical threat” for human intelligence. For example, personnel at critical threat posts receive counterintelligence briefings before departure and annually while serving at these posts. (See fig. 11.)

Diplomatic Security assesses counterintelligence efforts at overseas posts through Counterintelligence Post Surveys and Post Security Program Reviews, making recommendations to address any gaps identified in countermeasures. In addition, as part of a government-wide effort, the Office of the Director of National Intelligence evaluates State’s counterintelligence activities to identify gaps and make recommendations to strengthen State’s counterintelligence program.

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

by State domestically and overseas changed in recent years? 2. How does State ensure that personnel are prepared to live and work at posts facing a high or critical human intelligence threat? 3. How does State evaluate the effectiveness of its human intelligence countermeasures domestically and at overseas posts? How does State adjust its countermeasures, if warranted?
Background

The Department of State (State) created its information security program to address requirements in both the Omnibus Diplomatic Security and Antiterrorism Act of 1986 and the Federal Information Security Modernization Act of 2014 (FISMA). State’s Bureaus of Diplomatic Security (Diplomatic Security) and Information Resource Management (IRM) share responsibility for implementing the information security responsibilities in these laws. In May 2017, Diplomatic Security created the new Directorate for Cyber and Technology Security to consolidate relevant elements from other directorates.

Issue

Since 1997, GAO has designated federal information security as a government-wide high-risk area and in 2003 expanded this area to include computerized systems supporting the nation’s critical infrastructure. The number of information security incidents reported by federal agencies—including State—increased from 5,503 in fiscal year 2006 to 77,183 in fiscal year 2015. Cyberattacks forced State to shut down its unclassified e-mail system and parts of its public website in both 2014 and 2015 after finding evidence that its systems had been breached. Cyber-based threats to federal systems and information come from unintentional sources, such as natural disasters, coding errors, and careless employees, or from intentional sources, such as disgruntled insiders, hackers, or hostile nations. State’s outdated technology makes it increasingly difficult to ensure security. In addition, State’s information security program is split between two bureaus, each responsible for aspects of the program. Further, State makes extensive use of contractors to perform information security functions such as the monitoring and assessment of systems. Protecting those systems and information from unauthorized disclosure or alteration is particularly important at State, where inappropriate disclosure could cause catastrophic harm to the nation’s diplomacy and security.
Key Findings

In 2016, we surveyed 24 federal agencies—including State—to identify the sources of malicious attacks on their high-impact systems—any system that holds sensitive information, the loss of which could cause individuals, the government, or the nation catastrophic harm. Consequently, these systems warrant increased security to protect them. Eighteen of these 24 agencies—including State—identified cyberattacks originating from nation states as the most serious and frequent threat to the security of their systems. They identified e-mail cyberattacks as the most serious and frequent delivery method. We made recommendations to the Office of Management and Budget (OMB) to improve security over federal systems, including those at State.

State relies on several aging and obsolete technology systems, which require significant resources to operate and create challenges to ensuring information security. We found that State spent about 87 percent of its information technology budget on operating and maintaining its computer systems in 2015. This segment of State’s technology budget increased by approximately $109 million between 2010 and 2015. A State official stated that the increase is largely due to the cost of maintaining the infrastructure, including meeting security requirements. For example, three of State’s visa systems were more than 20 years old. The software for one of these systems is no longer supported by the vendor, creating challenges related to information security. State is planning to upgrade the software to a newer version that also is not supported by the vendor. As a result, we recommended that State identify and plan to modernize or replace legacy systems, consistent with OMB guidance.
FISMA directs State and other agencies to designate a Chief Information Security Officer (CISO)—who, at State, reports to the Chief Information Officer in IRM—to develop, document, and implement a department-wide information security program that protects the agency from cyberattacks. In a 2016 report, we evaluated 24 federal agencies to determine whether they followed FISMA and other requirements defining the CISO’s responsibilities. Twenty-two of the 24 agencies—including State—had defined almost all CISO responsibilities properly. However, we found that State had assigned responsibility for responding to information security incidents—a FISMA-designated CISO responsibility—to Diplomatic Security without also defining the CISO’s role in that activity. We concluded that not having a defined role may limit the CISO’s ability to effectively oversee State’s information security incident response process. We recommended that State define the CISO’s role in department policy for ensuring that State had procedures for incident detection, response, and reporting. State concurred with the recommendation and noted that IRM and Diplomatic Security coordinate communications for the incident response process.

Gregory C. Wilshusen, (202) 512-6244, wilshuseng@gao.gov

contractors, what unique information security challenges, if any, does it face? How does it manage its global cybersecurity program? 2. Given the rapidly changing nature of technology, how does State assess and address threats to its systems and users from changing cyber threats? 3. How will the new Directorate for Cyber and Technology Security improve State’s capability to address cybersecurity issues? 4. To what extent, if any, does assigning CISO responsibilities to multiple bureaus increase State’s risk for duplication, overlap, or fragmentation of information security responsibilities?
Background

The Secretary of State is generally required by law to convene Accountability Review Boards (ARB) in cases of serious injury, loss of life, or significant destruction of property involving U.S. diplomatic missions or personnel abroad, and in any case of a serious breach of security involving intelligence activities of a foreign government directed at a mission abroad. State has convened 12 ARBs since 1998. ARBs are responsible for reporting their findings about the circumstances of the attack and making recommendations.

Issue

On September 11, 2012, the acquired facilities at the U.S. Special Mission in Benghazi, Libya, came under attack (see fig. 13). Tragically, four U.S. officials were killed, including the U.S. Ambassador. In response to the attack, the Department of State (State), working with the Department of Defense, formed Interagency Security Assessment Teams to evaluate the security at 19 dangerous posts. Those teams made a number of recommendations to improve physical and procedural security at each post. In addition, an ARB was convened in response to the Benghazi attack; it resulted in 29 recommendations, including several concerning how State manages risk at dangerous posts. Furthermore, two of State’s actions resulting from that ARB led to additional reports that included more recommendations.

Key Findings

The Interagency Security Assessment Teams assessed all facilities at the 19 posts for any security vulnerabilities—physical or procedural. Their assessments resulted in 287 recommendations, including for State to install physical security upgrades, improve security procedures, and construct or acquire new or replacement facilities. State officials told us that State immediately began implementing the recommendations.
In addition, State created the new High Threat Programs Directorate within its Bureau of Diplomatic Security (Diplomatic Security) to ensure that those posts facing the greatest risk receive additional, security-related attention. As of June 2017, State reported having addressed 268 of the 287 recommendations. In December 2012, the ARB that State convened to investigate the Benghazi attack released the report of its investigation. The ARB made 23 unclassified recommendations in six areas: (1) overarching security considerations; (2) staffing dangerous posts; (3) training and awareness; (4) security and fire safety equipment; (5) intelligence and threat analysis; and (6) personnel accountability. In addition, the ARB, according to State, made six classified recommendations. State accepted all 29 of the ARB’s recommendations and pledged to fully implement them. For example, in response to the ARB, State expanded the mandatory Foreign Affairs Counter Threat training requirement to all dangerous posts (and, subsequently, to all posts by 2019). As of June 2017, State reported having addressed all but three of the ARB’s recommendations. In response to the Benghazi ARB’s second recommendation, State established a panel to evaluate the organization and management of Diplomatic Security. In May 2013, the panel provided its report to State. It made 35 recommendations in three areas: (1) organization, (2) training, and (3) management. State accepted 29 of the panel’s 35 recommendations. For instance, State did not accept a recommendation for Diplomatic Security to establish a chief of staff position at the GS-15 level within its Principal Deputy Assistant Secretary’s office, noting that no other bureau has an equivalent position. As of June 2017, State reported having addressed 28 of the 29 recommendations it accepted. For example, as a result of the panel’s report, Diplomatic Security is undertaking a strategic review of its staffing. 
In response to the Benghazi ARB’s fourth recommendation, State established a panel to help Diplomatic Security identify best practices for operating in dangerous environments. The panel provided its report to State in August 2013. It made 40 recommendations in 12 areas, including organization and management; program criticality and acceptable risk; lessons learned; training and human resources; intelligence, threat analysis, and security assessments; and host nations and guard forces’ capability enhancement, among others. State accepted 38 of the panel’s 40 recommendations. State did not accept the panel’s first recommendation, that it establish an Under Secretary for Diplomatic Security. It asserted that doing so would compound the “stove-piping” that the ARB and others reported in the wake of the Benghazi attack. In addition, State did not accept the panel’s 13th recommendation, which stated that waivers to established security standards should only be provided subsequent to the implementation of all mitigating measures. State noted that in time-sensitive situations, exceptions might be appropriate when some mitigating measures are in place. As of June 2017, State reported having addressed 36 of the 38 recommendations it accepted. For example, as a result of the panel’s report, Diplomatic Security created a Strategic Advisory Unit to advise and perform ad hoc analysis for the Assistant Secretary.

Michael J. Courts, (202) 512-8980, courtsm@gao.gov

1. … recommendations?
2. What effect, if any, has implementing the Benghazi-related recommendations had on the security of diplomatic facilities, personnel, and information?
3. Since 1998, 12 attacks have resulted in the formation of ARBs. What is the status of all recommendations made by the 12 ARBs?

This special publication is largely based on previously published GAO work.
To generate a list of possible key issues, we reviewed past products concerning the Department of State’s (State) Bureau of Diplomatic Security (Diplomatic Security) issued by GAO, State’s Inspector General, and the Congressional Research Service. Working with GAO’s subject matter experts, we narrowed the list of issues and identified potential oversight questions. We interviewed cognizant agency officials in Washington, D.C., and Arlington, Virginia, from State—including from the Bureaus of Management, Diplomatic Security, Overseas Buildings Operations (OBO), and Information Resource Management—the Department of Defense, and the U.S. Agency for International Development. We used these interviews to refine our key issues, gain updated information and data, follow up on actions taken regarding our past recommendations, and identify relevant lessons learned. We also worked with the officials to determine what portions of our past classified or restricted work could be presented in a public product. We then synthesized this information to provide a balanced and comprehensive overview for each issue and to formulate oversight questions. We updated relevant data when possible and performed additional data reliability assessments when necessary. These additional assessments were conducted only on data that we had not previously reported; all other data were assessed as part of our work for our previously published reports. We assessed the reliability of various types of data—funding, staffing, and training—from Diplomatic Security and, as appropriate, its partner agencies. Specifically, we assessed the reliability of the following data: Diplomatic Security bureau managed funds, from fiscal years 2010 to 2016. (We used previously reported data for fiscal years 1998 to 2007, and updated previously reported data for fiscal years 2008 to 2009.) Dedicated allocations to Diplomatic Security and OBO for physical security at diplomatic facilities for fiscal years 2015 to 2016.
(We used previously reported data for fiscal years 2009 to 2014.) Diplomatic Security staffing numbers for its workforce of direct-hire employees, other U.S. government support staff, and contractors. (We used previously reported data for 1998, 2008, and 2011.) Number of students who completed Diplomatic Security-provided Foreign Affairs Counter Threat training for fiscal years 2011 to 2016. (We used previously reported data for fiscal years 2006 to 2010.) To assess the reliability of the data, we interviewed cognizant officials about how the data were produced and their opinion of the quality of the data, specifically the data’s completeness, accuracy, and comparability to previously reported data. We also worked with the cognizant officials to identify any limitations associated with the data and to mitigate those issues or note these limitations in our report, as appropriate. In addition, we updated previously reported data on the percentage of Diplomatic Security employees who do not speak and read foreign languages at the level required by their positions and interviewed knowledgeable officials to corroborate and clarify the data. We determined that the data mentioned above were sufficiently reliable for our purposes. We prepared this report under the authority of the Comptroller General to conduct work on his initiative because of broad congressional interest in the oversight and accountability of providing security to U.S. personnel working at diplomatic missions and to assist Congress with its oversight responsibilities. We conducted this performance audit from January 2017 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. U.S. diplomatic missions have faced numerous attacks that were followed by legal and policy changes. Between 1998 and 2016, there were 419 attacks against U.S. diplomatic interests, according to the Department of State’s Bureau of Diplomatic Security. Several of the deadly attacks against U.S. personnel and facilities overseas were followed by new legislation, independent reviews with corresponding recommendations, or both. For example, the Omnibus Diplomatic Security and Antiterrorism Act of 1986, which followed the attacks against the U.S. embassy in Beirut, Lebanon, in 1983, established the Bureau of Diplomatic Security and set forth its responsibility for post security and protective functions abroad. The Secure Embassy Construction and Counterterrorism Act of 1999, which followed the Africa embassy bombings of 1998, set requirements for colocation of all U.S. government personnel at an overseas diplomatic post (except those under the command of an area military commander) and for a 100-foot perimeter setback for all new U.S. diplomatic facilities. In addition, the Secretary of State is generally required by law to convene an Accountability Review Board (ARB) following incidents that result in serious injury, loss of life, or significant destruction of property involving U.S. diplomatic missions or personnel abroad. An ARB is responsible for reporting its findings about the circumstances of an attack and making recommendations as appropriate. Since 1998, 12 attacks have resulted in the formation of an ARB, the most recent of which was formed in response to the 2012 attacks in Benghazi. (See fig. 14 for a time line of selected attacks and related laws and reports.) 
The Department of State’s (State) Bureau of Diplomatic Security (Diplomatic Security) has responsibilities set forth in State’s Foreign Affairs Manual; to help meet its responsibilities, the bureau relies on multiple organizational components within State. (Fig. 15 highlights State offices with key security responsibilities.) State also collaborates with other U.S. government agencies to secure U.S. missions overseas. As established by the 1961 Vienna Convention on Diplomatic Relations, host country governments are required to protect the diplomatic personnel and missions of foreign governments. More than two decades later, following an attack against the U.S. embassy in Beirut, Lebanon, Congress enacted the Omnibus Diplomatic Security and Antiterrorism Act of 1986 to provide enhanced diplomatic security and to combat international terrorism. The act assigns the Secretary of State responsibility for providing security for all diplomatic operations, in consultation with the heads of other federal agencies that have personnel or missions abroad. The act also created Diplomatic Security to provide a broad range of security and protective functions internationally and domestically. In addition, the act specifies that other federal agencies will cooperate with State to fulfill all security operations of a diplomatic nature. The Bureau of Diplomatic Security is State’s security and law enforcement arm. The bureau’s eight operational directorates—listed below—are collectively known as the Diplomatic Security Service. In addition, Diplomatic Security has three administrative offices that assist the mission: Executive Office, Strategic Advisory Unit, and Public Affairs. International Programs: Directs the formulation, planning, coordination, policy development, and implementation of security programs that protect U.S. diplomatic missions for most posts. 
Manages high-profile security programs such as the Embassy Local Guard Program, Emergency Action Planning, the Worldwide Protective Services Program, Surveillance Detection, and the Marine Security Guard Program. High Threat Programs: Directs the formulation, planning, coordination, policy development, and implementation of security programs that protect U.S. diplomatic missions at high-threat, high-risk posts. Manages security programs to include personnel recovery, tactical and strategic planning, special operations, evacuation operations, and State’s responses to international crises at high-threat, high-risk posts. Diplomatic Security created this directorate following the 2012 attack on Benghazi to ensure that those posts facing the greatest risk—now designated as high-threat, high-risk posts—received additional, security-related attention. Domestic Operations: Oversees criminal investigations domestically and abroad related to State personnel, facilities, and visiting foreign dignitaries, including passport and visa violations, counterintelligence investigations, and use of force incidents involving State personnel. Oversees the protection of the Secretary of State, the U.S. Ambassador to the United Nations, foreign dignitaries, and other persons of interest. Training: Formulates and implements all security and law enforcement training programs and policies for Diplomatic Security. Directs the formulation, coordination, and implementation of security and law enforcement training programs that promote the professional development of Diplomatic Security personnel. Oversees specialized security training at overseas posts on a regular and emergency basis and provides emergency security support to posts abroad during periods of high threat, crisis, or natural disaster. Threat Investigations and Analysis: Directs, coordinates, and conducts the analysis of terrorist threats and hostile activities directed against U.S. 
government personnel, facilities, and interests abroad. Conducts protective intelligence investigations, coordinates foreign-government and private-sector requests for assistance relating to terrorist incidents, and directs the operations of the Diplomatic Security Command Center and the Overseas Security Advisory Council. Security Infrastructure: Manages all matters relating to security infrastructure in Diplomatic Security functional areas of personnel security and suitability and insider threats. Formulates strategic operational planning, priorities, and funding for security infrastructure operations. Countermeasures: Manages, plans, and develops policy for worldwide physical and technical security countermeasures programs. Represents State in negotiations with other federal agencies on issues regarding physical and technical security countermeasures. Directs the offices of Physical Security Programs, Security Technology, and Diplomatic Courier Service. Cyber and Technology Security: Manages cyber and technical elements of State’s security program. In May 2017, Diplomatic Security created this new directorate by consolidating cyber technology and investigative support elements from other directorates. The goal is to increase State’s ability to enable secure innovation in areas such as e-mail messaging services, Wi-Fi, cloud services, mobile communications, and social media. To complete parts of its mission, Diplomatic Security collaborates with other State entities, most notably the overseas missions and the Bureaus of Overseas Buildings Operations (OBO) and Information Resource Management (IRM). Overseas Missions: At posts, the Chief of Mission (Ambassador or Principal Officer) is ultimately responsible for the security of facilities, information, and all personnel under chief-of-mission authority. He or she is assisted by Diplomatic Security, which is represented at post by a head special agent known as the Regional Security Officer (RSO).
RSOs—working with assistant RSOs and other security personnel—are responsible for implementing a wide range of duties such as protecting personnel and property, documenting threats and residential vulnerabilities, and identifying possible mitigation efforts to address those vulnerabilities. The overseas missions also play a role in setting post-specific security measures and funding some physical security upgrades, with approval from Diplomatic Security. In addition, each post has an Emergency Action Committee (EAC) that provides guidance in preparing for and responding to potential changes in risk that might affect the safety and security of the post and the American citizens in country. The EAC may include the Ambassador, Deputy Chief of Mission, Principal Officer, Defense Attaché, Political Officer, Economic Officer, RSO, Management Officer, Consular Officer, Public Affairs Officer, Human Resources Officer, Medical Officer, U.S. Agency for International Development (USAID) Mission Director, Community Liaison Office Coordinator, and others, including non-State officials, as appropriate. Further, as the 2005 Iraq Accountability Review Board (ARB) noted, all mission personnel bear “personal responsibility” for their own and others’ security. Overseas Buildings Operations (OBO): OBO manages the acquisition, design, construction, maintenance, and sale of U.S. government diplomatic property abroad. Through the Capital Security Construction Program, OBO replaces and constructs diplomatic facilities to provide U.S. embassies and consulates with safe, secure, functional, and modern buildings. In addition, OBO tracks information on State’s real properties, including residences; provides funding for certain residential security upgrades; and funds and manages the Soft Targets Program, State’s program for providing security upgrades to schools attended by U.S. government dependents and off-compound employee association facilities.
Information Resource Management (IRM): State’s Chief Information Officer leads IRM to provide the information technology and services State needs to carry out its foreign policy mission. The Federal Information Security Modernization Act of 2014 (FISMA) directs the heads of federal agencies, including State, to designate a Chief Information Security Officer to develop, document, and implement a department-wide information security program. In addition, the Office of Management Policy, Rightsizing, and Innovation (M/PRI) tracks State’s implementation of ARB recommendations. Diplomatic Security, OBO, IRM, and M/PRI all report to the Under Secretary for Management. Diplomatic Security coordinates its work overseas with a number of U.S. government entities and agencies: The Overseas Security Policy Board (OSPB) develops security standards for executive agencies working overseas. Chaired by the Assistant Secretary for Diplomatic Security, OSPB includes representatives from approximately 20 U.S. agencies with personnel overseas, including intelligence, foreign affairs, and other agencies. State incorporates the OSPB’s physical security standards in the Foreign Affairs Handbooks. Diplomatic facilities overseas—whether permanent, interim, or temporary—and residences are required to meet the standards applicable to them. The OSPB standards vary by facility type, date of construction or acquisition, and threat level. If facilities do not meet all applicable standards, posts are required to request waivers, exceptions, or both. The Department of Defense (DOD) has long provided military protection and support for the security and safety of U.S. diplomatic missions and personnel during normal operations and emergencies. For example, DOD provides Marine Security Guards at some U.S. diplomatic missions to help protect U.S. personnel, classified material, and property. 
DOD support is particularly critical in times of crisis, such as when DOD provides security reinforcements to facilities under threat or assists with evacuations. Several entities within State, DOD, and the military branches prepare for and coordinate these efforts. Memoranda of Agreement between State and DOD establish frameworks for cooperation on scenarios requiring security augmentation, crisis response, and evacuation for U.S. diplomatic and consular missions overseas. USAID maintains its own Office of Security, which is responsible for the physical security of its facilities and coordination with Diplomatic Security. Other agencies operating overseas—such as the Departments of Commerce or the Treasury—may also have security offices, but none of them operating under chief-of-mission authority maintain their own facilities outside of Diplomatic Security’s responsibility. The Department of State’s Bureau of Diplomatic Security (Diplomatic Security) employs a broad workforce of over 51,000 individuals to carry out its mission and activities. Its workforce includes direct-hire security specialists and management support staff, military support, and contractors. See table 2 for a description of each position and a comparison of Diplomatic Security staffing levels in fiscal years 2008, 2011, and 2017. Over the course of our work on the Department of State’s (State) Bureau of Diplomatic Security (Diplomatic Security) and related efforts, we have identified conditions that affect the success of its programs and recommended a range of improvements that should be considered in program planning and implementation. For example, we have made recommendations on the need for State to address gaps in its security- related activities, standards, and policies, such as developing a process to ensure that mitigating steps agreed to in granting waivers and exceptions for older, acquired, and temporary work facilities have been implemented. 
We have also made recommendations on the need for improved information sharing between Diplomatic Security directorates, such as sharing information with each other on the residential security exceptions they have processed to help provide Diplomatic Security with a clearer picture of security vulnerabilities at residences and enable it to make better risk management decisions. State and its partner agencies have generally concurred with our recommendations and have taken steps to address a number of them, several of which are noted in the enclosures. In addition, we have identified several existing conditions—such as gaps in State oversight of personnel compliance with mandatory security training and many overseas diplomatic residences not meeting all applicable security standards—that continue to challenge the U.S. government’s ability to protect its people, property, and information around the world. In letters addressed to the Secretary of State, we identified which of these recommendations we believe should be given high priority for implementation. As of August 14, 2017, State had 27 open recommendations that have been deemed by GAO as being among the highest priorities for implementation. Of the 27 priority recommendations, 24 are listed below (see table 3) and are related to this report in four areas, as follows: Security of overseas personnel. Fully implementing GAO’s priority recommendations on personnel security, such as those related to the Foreign Affairs Counter Threat (FACT) training, would help ensure that State personnel are prepared to operate in dangerous situations. Security of overseas facilities. Fully implementing GAO’s priority recommendations on physical security at overseas posts, such as those regarding risk management associated with physical security of diplomatic facilities, will improve the safety and security of personnel serving overseas, particularly in high-threat locations. Transportation security.
Fully implementing recommendations related to transportation security would improve State’s efforts to manage transportation-related security risks overseas. Information security. Fully implementing GAO’s priority recommendation regarding obsolete computer systems will improve State’s ability to secure its information technology systems and access to potentially sensitive information. GAO will continue to monitor State’s progress in implementing these recommendations and will update their status on the GAO website at http://www.gao.gov. This appendix provides a list of recent GAO products related to each enclosure. Copies of most products can be found on our website: http://www.gao.gov/. GAO also has done work on some of the key issues identified in the enclosures that resulted in Sensitive But Unclassified or Classified products. (Report numbers with an SU suffix are Sensitive But Unclassified, and those with a C suffix are Classified.) Sensitive But Unclassified and Classified reports are available to personnel with the proper clearance and need-to-know, upon request. For a copy of a Sensitive But Unclassified or Classified report, please call or e-mail the point of contact listed in the related enclosure. State Department: Diplomatic Security Challenges. GAO-13-191T. Washington, D.C.: November 15, 2012. State Department: Diplomatic Security's Recent Growth Warrants Strategic Review. GAO-10-156. Washington, D.C.: November 12, 2009. Department of State: Foreign Language Proficiency Has Improved, but Efforts to Reduce Gaps Need Evaluation. GAO-17-318. Washington, D.C.: March 22, 2017. State Department: Diplomatic Security Challenges. GAO-13-191T. Washington, D.C.: November 15, 2012. State Department: Diplomatic Security's Recent Growth Warrants Strategic Review. GAO-10-156. Washington, D.C.: November 12, 2009. Embassy Construction: State Needs to Better Measure Performance of Its New Approach. GAO-17-296. Washington, D.C.: March 16, 2017. 
Afghanistan: Embassy Construction Cost and Schedule Have Increased, and Further Facilities Planning Is Needed. GAO-15-410. Washington, D.C.: May 19, 2015. Diplomatic Security: Overseas Facilities May Face Greater Risks Due to Gaps in Security-Related Activities, Standards, and Policies. GAO-14-655. Washington, D.C.: June 25, 2014. Diplomatic Security: Overseas Facilities May Face Greater Risks Due to Gaps in Security-Related Activities, Standards, and Policies. GAO-14-380SU. Washington, D.C.: June 5, 2014. Diplomatic Security: State Department Should Better Manage Risks to Residences and Other Soft Targets Overseas. GAO-15-700. Washington, D.C.: July 9, 2015. Diplomatic Security: State Department Should Better Manage Risks to Residences and Other Soft Targets Overseas. GAO-15-512SU. Washington, D.C.: June 18, 2015. Diplomatic Security: State Should Enhance Its Management of Transportation-Related Risks to Overseas U.S. Personnel. GAO-17-124. Washington, D.C.: October 4, 2016. Diplomatic Security: State Should Enhance Management of Transportation-Related Risks to Overseas U.S. Personnel. GAO-16-615SU. Washington, D.C.: September 9, 2016. Diplomatic Security: Options for Locating a Consolidated Training Facility. GAO-16-139T. Washington, D.C.: October 8, 2015. Diplomatic Security: Options for Locating a Consolidated Training Facility. GAO-15-808R. Washington, D.C.: September 9, 2015. Countering Overseas Threats: Gaps in State Department Management of Security Training May Increase Risk to U.S. Personnel. GAO-14-360. Washington, D.C.: March 10, 2014. Countering Overseas Threats: Gaps in State Department Management of Security Training May Increase Risk to U.S. Personnel in High-Threat Countries. GAO-14-185SU. Washington, D.C.: February 26, 2014. Diplomatic Security: Expanded Missions and Inadequate Facilities Pose Critical Challenges to Training Efforts. GAO-11-460. Washington, D.C.: June 1, 2011. 
Embassy Evacuations: State Should Take Steps to Improve Emergency Preparedness. GAO-17-714. Washington, D.C.: July 17, 2017. Embassy Evacuations: State Should Take Steps to Improve Emergency Preparedness. GAO-17-560SU. Washington, D.C.: June 28, 2017. Interagency Coordination: DOD and State Need to Clarify DOD Roles and Responsibilities to Protect U.S. Personnel and Facilities Overseas in High-Threat Areas. GAO-15-219C. Washington, D.C.: March 4, 2015. Embassy Evacuations: State Should Take Steps to Improve Emergency Preparedness. GAO-17-714. Washington, D.C.: July 17, 2017. Embassy Evacuations: State Should Take Steps to Improve Emergency Preparedness. GAO-17-560SU. Washington, D.C.: June 28, 2017. Diplomatic Security: State Should Enhance Its Management of Transportation-Related Risks to Overseas U.S. Personnel. GAO-17-124. Washington, D.C.: October 4, 2016. Diplomatic Security: State Should Enhance Management of Transportation-Related Risks to Overseas U.S. Personnel. GAO-16-615SU. Washington, D.C.: September 9, 2016. Combating Terrorism: Steps Taken to Mitigate Threats to Locally Hired Staff, but State Department Could Improve Reporting on Terrorist Threats. GAO-15-458SU. Washington, D.C.: June 17, 2015. Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority. GAO-16-686. Washington, D.C.: August 26, 2016. Information Technology: Federal Agencies Need to Address Aging Legacy Systems. GAO-16-468. Washington, D.C.: May 25, 2016. Information Security: Agencies Need to Improve Controls over Selected High-Impact Systems. GAO-16-501. Washington, D.C.: May 18, 2016. Federal Information Security: Agencies Need to Correct Weaknesses and Fully Implement Security Programs. GAO-15-714. Washington, D.C.: September 29, 2015. Information Security: Agencies Need to Improve Oversight of Contractor Controls. GAO-14-612. Washington, D.C.: August 8, 2014. 
State Department Telecommunications: Information on Vendors and Cyber-Threat Nations. GAO-17-688R. Washington, D.C.: July 27, 2017. Diplomatic Security: Overseas Facilities May Face Greater Risks Due to Gaps in Security-Related Activities, Standards, and Policies. GAO-14-655. Washington, D.C.: June 25, 2014. Diplomatic Security: Overseas Facilities May Face Greater Risks Due to Gaps in Security-Related Activities, Standards, and Policies. GAO-14-380SU. Washington, D.C.: June 5, 2014. In addition to the contact named above, the following individuals made key contributions to this report: Thomas Costa (Assistant Director), Miriam Carroll Fenton (Analyst-in-Charge), Esther Toledo, Mason Calhoun, David Dayton, Neil Doherty, David Hancock, Thomas Johnson, Owen Starlin, and Sally Williamson. The following individuals provided technical assistance and additional support: Joshua Akery, J.P. Avila-Tournut, Jeffrey Baldwin-Bott, Amanda Bartine, John Bauckman, Aniruddha Dasgupta, Mark Dowling, Wayne Emilien, Ian Ferguson, Justin Fisher, Brian Hackney, Brandon Hunt, Guy LoFaro, Michael Rohrback, and Martin Wilson. In addition, each GAO report cited in the preceding enclosures and in appendix VI includes a list of staff who contributed to that product.
Terrorist attacks against U.S. diplomats and personnel overseas have led to increased attention to State's diplomatic security efforts. In this special publication, GAO identifies key issues affecting Diplomatic Security for Congressional oversight. These issues were identified from a body of related GAO work and State and other reports. GAO also interviewed U.S. officials from State and other agencies to obtain their views on key issues, obtain updated information and data, and follow up on actions they have taken on past GAO and other oversight report recommendations. In response to increasing threats to U.S. personnel and facilities at overseas diplomatic posts since 1998, the Department of State (State) has taken a number of steps to enhance its risk management and security efforts. State's Bureau of Diplomatic Security (Diplomatic Security) leads many of these efforts with assistance from other bureaus and U.S. government agencies. Given the ongoing threats and the amount of resources needed to counter them, GAO has identified 11 key issues regarding Diplomatic Security that warrant significant Congressional oversight to monitor cost, progress, and impact:

Diplomatic Security Funding: Diplomatic Security funding has increased considerably in reaction to a number of security incidents overseas and domestically. In fiscal year 2016, total funding for Diplomatic Security operations--which includes its bureau-managed funds as well as other funding such as personnel salaries--was almost $4.8 billion.

Diplomatic Security Staffing Challenges: Diplomatic Security's workforce--including 3,488 direct-hire, 1,989 other U.S. government, and 45,870 contract personnel--continues to grow. However, potential challenges exist regarding the distribution of domestic and overseas positions, posting fully qualified individuals in the assignments with the greatest needs, and ongoing efforts to fill language-designated positions.

Physical Security of U.S. Diplomatic Facilities: Diplomatic Security and the Bureau of Overseas Buildings Operations collaborate to meet safety standards when constructing new embassies and mitigating risks at existing facilities. However, GAO made recommendations to address gaps in State's security-related activities and processes.

Physical Security of Diplomatic Residences and Other Soft Targets: State has taken steps to address residential security vulnerabilities and manage risks at schools and other soft targets overseas. However, GAO recommended actions to address weaknesses in State's efforts.

Security Training Compliance: While State has robust security training requirements, it lacks consistent monitoring and enforcement processes, particularly for its Foreign Affairs Counter Threat training and for security refresher briefings at posts.

Embassy Crisis and Evacuation Preparedness: Gaps in State's implementation and monitoring of crisis and evacuation preparedness could endanger staff assigned to overseas posts and the family members accompanying them. GAO has recommended actions to address these issues.

Department of Defense (DOD) Support to U.S. Diplomatic Missions: Following the Benghazi attacks, DOD increased its support to U.S. diplomatic missions by creating dedicated military forces to respond to crises and expanding the Marine Security Guard program at overseas missions. However, State and DOD reported that they have experienced some logistical and other challenges.

Dissemination of Threat Information: State has processes for communicating threat information to post personnel and U.S. citizens in-country. However, post personnel--including locally employed staff--have not always received important information in a timely manner. GAO has recommended steps State needs to take to address this concern.

Countering Human Intelligence Threats: Foreign intelligence entities from host nations and third parties are motivated to collect information on U.S. operations and intentions. State has established measures to counter the human intelligence threat and works with other U.S. government agencies to identify and assess this threat.

Ensuring Information Security: GAO has designated federal information security as a government-wide high-risk area and has made recommendations to address these issues. State faces evolving threats as well as challenges related to obsolete technology, unclear roles and responsibilities for information security, and oversight of technology contractors.

Status of Recommendations Made in Reports following the Benghazi Attack: In response to the Benghazi attack, State formed interagency teams to evaluate the security at 19 dangerous posts, convened an Accountability Review Board (ARB) to investigate the attack, and established panels to conduct further assessments. As of June 2017, State reported having addressed recommendations as follows: 268 of 287 made by the interagency teams, 26 of 29 by the ARB, and 64 of 75 by the panels. While State has taken steps to close recommendations made in past GAO reports, GAO identified 27 open recommendations from these reports (as of August 2017) that it believes should be given high priority for implementation. Of the 27 priority recommendations, 24 were related to diplomatic security.
Cuba is the largest Caribbean nation, with a population of more than 11 million people and an area of about 111,000 square kilometers (slightly smaller than Pennsylvania). Cuba lies approximately 90 miles south of Key West, Florida. See figure 1 for a map of Cuba. According to the World Bank, Cuba’s gross domestic product (GDP) was estimated to be $77.15 billion in 2013. However, widespread uncertainty exists about the accuracy of data on Cuba’s economy, including GDP figures. Because Cuba is a nonmarket economy in which the government is largely responsible for setting prices and wages, it is challenging to assess the country’s economic performance using typical economic indicators. Cuba also has a dual-currency system that distorts information on the Cuban economy because multiple exchange rates are used internally within the country, which results in the mispricing of various transactions, among other things. In addition, Cuba is not a member of international financial institutions, such as the World Bank and the International Monetary Fund, and is thus not subject to regular reviews of its economy, including its economic and financial data, as is typical of countries that are members of these organizations. Cuba’s key trading partners are Venezuela, the European Union, and China. Cuba is a net importer of goods. According to reporting by the U.S. International Trade Commission (USITC), Cuba imported a total of $9.3 billion in goods in 2014. Cuba relies on imports to meet its energy needs. Cuba also imports almost 80 percent of its food. Although Cuba is a net importer of goods, it is a net exporter of services. According to the USITC, Cuba’s exports of commercial services were $12.3 billion in 2014 compared with service imports of $2.5 billion in that same year, a net surplus of $9.8 billion. Cuba’s largest service exports are medical services and tourism. The Cuban government has sought to attract additional foreign investment in recent years. 
In March 2014, the Cuban government passed an updated law governing foreign investment and, in November 2014, published a list of 246 projects for which it was seeking a total of over $8 billion in foreign investment. In November 2015, the Cuban government published an updated list of 326 projects, for a total of $8.2 billion in foreign investment opportunities. Much of the foreign investment in Cuba is concentrated in the tourism, energy, and mining sectors. For example, Spanish companies have been involved in 19 hotel projects in Cuba since 2003, according to the USITC. As another example, the Canadian firm Sherritt has been involved in a joint venture project in Cuba for more than 20 years, which involves nickel mining as well as oil and power operations. Within 3 years after coming to power in 1959, Fidel Castro consolidated his control over Cuba, establishing a communist state characterized by a one-party political system and a centrally planned economy. Fidel Castro led the country from 1959 until 2006 when he provisionally stepped down due to poor health, and his brother, Raúl Castro, assumed the presidency. Cuba’s legislature officially selected Raúl Castro as President in 2008 and reelected him in 2013. The relationship between the United States and Cuba deteriorated quickly after Fidel Castro came to power as the new Cuban government established an authoritarian communist state and pursued close relations with the Soviet Union. In addition, after coming to power, the Castro government seized $1.9 billion of U.S. property on the island. In the decades since, the relationship between the two countries has been characterized by mutual antagonism and mistrust. Throughout this period, the U.S. government has repeatedly raised concerns about the lack of political and other freedoms in Cuba and has cited the Cuban government for a range of human rights abuses. 
For the first 3 decades after the Castro regime came to power, the Cuban economy was dependent on billions of dollars in annual subsidies from the Soviet Union. However, after the breakup of the Soviet Union in 1991, these subsidies stopped, and Cuba entered a period of significant economic hardship. More recently, the Cuban government has relied heavily on subsidies from Venezuela to support its economy. Among other things, Venezuela has provided Cuba with around 100,000 barrels of oil per day, some of which the Cuban government refines and then sells on the world market to generate hard currency. However, as Venezuela’s political and economic situation has deteriorated, these subsidies have reportedly declined. In July 2016, the Cuban government announced the need to prepare for energy shortages and other economic challenges. The Cuban government controls most sectors of the economy and employs the majority of the Cuban workforce. In the years after the Castro regime came to power, the Cuban government shut down most forms of private sector activity. However, the Cuban government has periodically allowed some private sector activity. For example, in the aftermath of the Soviet Union’s collapse, the Cuban government liberalized some private sector activity to combat the severe economic recession the country faced; however, as the economy stabilized, the Cuban government reversed many of these reforms. Since the 1960s, the United States has maintained an embargo on Cuba through various laws, regulations, and presidential proclamations that restrict trade, travel, and financial transactions. Key legislation related to the embargo includes the following: Trading with the Enemy Act of 1917 (TWEA). TWEA granted the President broad authority to impose embargoes on foreign countries during times of war and was amended in 1933 to also grant this authority during times of a presidentially declared national emergency. 
The International Emergency Economic Powers Act of 1977 amended section 5(b) of TWEA, again limiting the President’s authority to times of war but allowing the President’s continued exercise of his national emergency authority with respect to the ongoing Cuba embargo. This act required that the President determine on an annual basis that maintaining the Cuba embargo is in the national interest of the United States. Foreign Assistance Act of 1961. The Foreign Assistance Act contains provisions barring any assistance to Cuba and authorizing the President to establish and maintain an economic embargo on Cuba. Section 620(a) of the act, codified at 22 U.S.C. § 2370(a), prohibits any U.S. foreign assistance to the “present” government of Cuba and authorizes the President to establish and maintain a total embargo on all trade between the United States and Cuba as a means of carrying out the assistance prohibition. Cuban Democracy Act of 1992 (CDA). The CDA further restricted U.S. trade with Cuba and called on the President to encourage other countries to limit their trade with Cuba as well as their extension of credit and assistance to Cuba. The law permitted U.S. exports of medicine and medical supplies to Cuba, with certain exceptions. However, such exports must be authorized through specific licenses, and the U.S. government must be able to verify through onsite inspection and other appropriate means that the items are used for their intended purposes and for the benefit and use of the Cuban people. The law also restricted trade with Cuba by foreign subsidiaries of U.S. firms and prohibited any vessel unlicensed by the Department of the Treasury (Treasury) from (1) loading or unloading freight in a U.S. port within 180 days after leaving a Cuban port where it engaged in trade of goods or services or (2) entering a U.S. port while carrying goods or passengers to or from Cuba or goods in which Cuba or a Cuban national had an interest. 
Cuban Liberty and Democratic Solidarity Act of 1996 (LIBERTAD). Commonly known as the Helms-Burton Act, LIBERTAD defined and codified the embargo as it was in effect on March 1, 1996. LIBERTAD authorizes the President to suspend the embargo only if he or she determines that a transition Cuban government is in power. Furthermore, LIBERTAD requires the President to terminate the embargo if he or she determines that a democratically elected Cuban government is in power. In addition, the law prohibits U.S. persons, permanent resident aliens, and U.S. agencies from knowingly financing any transactions involving property of U.S. nationals confiscated by the Cuban government; permits U.S. nationals to sue in U.S. courts persons trafficking in such confiscated property (this authority has been suspended by the President since enactment); and provides for denying entry into the United States to aliens determined by the Secretary of State to be involved in such trafficking. Trade Sanctions Reform and Export Enhancement Act of 2000 (TSRA). TSRA prohibits the President from imposing new, unilateral agricultural and medical sanctions against any foreign country, including Cuba, unless approved by a congressional joint resolution, and requires termination of existing unilateral agricultural or medical sanctions unless continued by a congressional joint resolution. In addition, TSRA authorizes, pursuant to a 1-year license and other requirements, the export of agricultural commodities (including food) to Cuba, subject to specific conditions. TSRA also prohibits the U.S. government from providing Cuba with foreign assistance, export assistance, and any credit or guarantees for exports. In addition, TSRA prohibits U.S. 
private financing or payment of agricultural commercial sales to Cuba, except where payment is made with cash in advance, interpreted by Treasury to mean payment before the transfer of title to, and control of, exported agricultural commodities, or where financing is from third-country financial institutions. Finally, TSRA prohibits the licensing of travel to Cuba for tourist activities by persons subject to U.S. jurisdiction. Key regulations related to the embargo include the following: The Cuban Assets Control Regulations (CACR). The CACR, which Treasury issued in 1963 under the President’s broad authority in section 5(b) of TWEA and the Foreign Assistance Act, prohibit persons subject to U.S. jurisdiction from engaging in transactions involving property in which Cuba or a Cuban national has an interest, including transactions related to travel, remittances, humanitarian assistance, and financial services, without authorization from Treasury. The Export Administration Regulations (EAR). The Department of Commerce’s (Commerce) EAR are issued under the authority of the Export Administration Act of 1979 and the International Emergency Economic Powers Act. U.S. exports and reexports to Cuba subject to the EAR must be authorized by Commerce. Applications for licenses for export to Cuba of items subject to the EAR fall mostly under a general policy of denial, although some items are exempt from this policy. Over time, the embargo has been modified through legislation and regulatory amendments, which have alternately eased and tightened aspects of the embargo. For example, as noted above, TSRA’s passage in 2000 loosened prohibitions on the export of U.S. agricultural commodities, including food, to Cuba. In 2004, the Bush administration made regulatory changes to tighten restrictions on travel, remittances, and gift parcels to Cuba. For example, Treasury reduced the permitted frequency of family visits to Cuba from once every 12 months to once every 3 years. 
Subsequently, the Obama administration made regulatory changes to loosen certain embargo restrictions in 2009 and 2011. For example, in September 2009, Treasury removed the previously established restrictions on the frequency and duration of travel to Cuba to visit close relatives. Figure 2 provides a timeline of key events in the U.S.-Cuba relationship, including changes in the embargo, up until December 2014. On December 17, 2014, President Obama announced a major shift in U.S. policy on Cuba intended to increase engagement between the two countries, among other things. Specifically, the administration’s new policy called for establishing diplomatic relations with Cuba, adjusting regulations to more effectively empower the Cuban people, facilitating an expansion of travel under general licenses for the 12 existing categories of travel to Cuba authorized by law, facilitating remittances to Cuba by U.S. persons, authorizing expanded commercial sales/exports from the United States of certain goods and services, authorizing U.S. citizens to import additional goods from Cuba, facilitating authorized financial transactions between the United States and Cuba, initiating new efforts to increase Cubans’ access to communications and their ability to communicate freely, updating the application of Cuba sanctions in third countries, pursuing discussions with the Cuban and Mexican governments to discuss the unresolved maritime boundary in the Gulf of Mexico, initiating a review of Cuba’s designation as a State Sponsor of Terrorism, and addressing Cuba’s participation in the 2015 Summit of the Americas. Since December 2014, the U.S. government has undertaken a number of efforts to implement various aspects of the administration’s policy. For example, after several rounds of negotiation, the United States and Cuba reestablished diplomatic relations on July 20, 2015, and the two countries’ Interests Sections reopened as embassies. 
On October 14, 2016, President Obama issued a presidential policy directive on the normalization of relations between the United States and Cuba, which provides additional details on the administration’s Cuba policy. Among other things, the policy directive describes the administration’s vision for U.S.-Cuba normalization, discusses progress on normalization since December 2014, describes the strategic landscape with respect to Cuba, establishes medium-term objectives for the U.S.-Cuba relationship, and describes the roles and responsibilities of U.S. agencies in implementing the policy. Figure 3 provides a timeline of key events since the policy change in December 2014. A number of U.S. agencies are involved in the implementation of the administration’s Cuba policy. Treasury’s Office of Foreign Assets Control (OFAC) administers the CACR, and Commerce’s Bureau of Industry and Security (BIS) administers the EAR. Among other things, OFAC and BIS are responsible for licensing transactions authorized by the regulations. The Department of State (State) is responsible for establishing foreign policy related to Cuba, leading diplomatic engagement with the Cuban government, implementing certain democracy assistance programs, and promoting educational and cultural exchanges. Other U.S. agencies, including the U.S. Department of Agriculture (USDA), the U.S. Agency for International Development (USAID), and the U.S. Trade Representative (USTR), are also involved in certain activities related to the implementation of the administration’s Cuba policy. USDA communicates with U.S. agricultural producers involved in trade with Cuba and coordinates with the Cuban government on agricultural issues of mutual interest. USAID is charged with implementing certain democracy assistance programs in Cuba. USTR serves as an advisor to other U.S. agencies on issues related to trade with Cuba and engages with the Cuban government in multilateral forums, such as the World Trade Organization. 
In addition, the USITC has reported on issues related to the U.S. embargo on Cuba and trade between the two countries, including in a March 2016 report. The Cuban private sector has grown rapidly since 2008 but remains small compared with other economies and faces various constraints. Although it continues to control most of the economy, the Cuban government has undertaken several reforms in recent years that have created opportunities for Cubans to engage in additional private sector activity. Currently, the Cuban private sector has three primary components: (1) self-employed entrepreneurs such as restaurant owners and taxi drivers, (2) agricultural cooperatives and other private farmers, and (3) nonagricultural cooperatives involved in activities such as construction and financial services. Cuban government data indicate that the authorized private sector has grown rapidly, with 29 percent of the Cuban labor force in the private sector in 2015 compared to 17 percent in 2008. Although the percentage of the Cuban workforce in the private sector has grown, it is still smaller than in comparable countries, according to our analysis of International Labour Organization (ILO) data. In addition, the Cuban private sector is still highly constrained by the Cuban government and faces challenges, including a lack of access to needed inputs. Although the majority of the economy continues to be controlled by the state, the Cuban government has undertaken several reforms in recent years that have created opportunities for Cubans to engage in additional private sector activity. Many reforms have taken place since 2008, when Raúl Castro was elected as head of state by the Cuban National Assembly. Among other things, these reforms have been driven by the Cuban government’s stated goal of reducing the number of workers on the state payroll by 1.8 million. 
The Cuban government has also set the goal of increasing the private sector’s contribution to GDP from approximately 5 percent in 2011 to between 40 and 45 percent by 2017, according to State. Despite these economic reforms, the Cuban government remains ambivalent about the private sector, according to U.S. officials and Cuba experts. For example, U.S. officials and experts we interviewed noted that the Cuban government remains wary of allowing the accumulation of wealth among its citizens and thus wants to limit the ability of any one business to grow too large or become too financially successful. Cuban embassy officials we interviewed stated that the Cuban government believes that some private sector activity is necessary to improve the efficiency of the Cuban economy; however, the officials also noted that the government remains committed to its socialist economic model and that key sectors of the economy will remain state owned. The Cuban private sector currently includes three primary components that are authorized by the Cuban government: (1) self-employed entrepreneurs known as cuentapropistas, (2) agricultural cooperatives and private farmers, and (3) nonagricultural cooperatives. Figure 4 shows examples of private sector activity in Cuba. Cuentapropistas: Cuba has authorized 201 categories of legal self-employment for individuals. However, many of these categories are highly specific (e.g., “Flower Wreath Arranger” and “Children’s Ride Operator”), and most white-collar professions, such as engineers or lawyers, are not among the authorized categories. There were approximately half a million licensed cuentapropistas at the end of 2015, according to Cuban government data. According to one analysis, about 80 percent of licensed cuentapropistas operate their own businesses, while the remaining 20 percent are contract workers for other cuentapropistas. Common cuentapropista activities include restaurants, bed and breakfasts, transportation (see fig. 
5), and construction. Certain forms of self-employment have been authorized in Cuba since 1993; however, the Cuban government has increased the number of authorized categories in recent years. For example, in 2013, the Cuban government increased the number of authorized categories from 181 to the current 201. Agricultural cooperatives and private farmers: There were approximately 5,200 agricultural cooperatives operating as of the end of 2015, according to State. There are three types of agricultural cooperatives in Cuba. All three types of cooperatives are considered part of the private sector; however, their ownership structure and relationship to the state differ. Credit and Services Cooperatives (Cooperativas de Créditos y Servicios): These cooperatives, first formed shortly after the Castro regime came to power, provide credit and other services to the cooperative members and are composed of independent farmers who individually own and farm their land. Agricultural Production Cooperatives (Cooperativas de Producción Agropecuaria): First formed in the 1970s, these cooperatives involve the nonreversible sale of land and equipment by private farmers to the cooperative in exchange for a membership/ownership stake in the cooperative. Basic Units of Cooperative Production (Unidades Básicas de Producción Cooperativa): First formed in the 1990s, these cooperatives were formed from farms previously run by the state. Under this cooperative arrangement, the state owns the land; however, the farmers lease the land from the state, and the cooperative members control production on the farms. There are also some independent private farmers in Cuba who are not associated with a cooperative. According to Cuban government data, the share of agricultural land held by agricultural cooperatives and private farmers was 70 percent in 2015. 
Nonagricultural cooperatives: Nonagricultural cooperatives were first approved in 2013, and 367 were operating as of the end of 2015, according to State. Some nonagricultural cooperatives are “self-initiated” while others were formed as a result of the Cuban government’s decision to privatize state companies. According to one analysis, approximately 75 percent of nonagricultural cooperatives are former state-owned enterprises. Unlike with cuentapropistas, there is no list of permitted occupations for nonagricultural cooperatives. To date, the majority of nonagricultural cooperatives are involved in service rather than production activities. The Cuban government has approved nonagricultural cooperatives in construction, transportation, financial services, and automotive repair, among other areas. Cuba experts and analyses of the Cuban private sector differed as to whether joint ventures, which are partnerships between foreign companies and state enterprises in Cuba, should be considered part of the private sector. According to State, joint ventures in Cuba operate in a limited number of sectors such as hotels, tourism, and mining. Some analyses included joint ventures as part of the private sector; however, some U.S. government officials and experts we interviewed stated that joint ventures should not be considered part of the private sector given the nature of joint venture arrangements in Cuba. Joint ventures are generally required to be majority owned by Cuban state-owned enterprises. In addition, foreign firms that enter into joint ventures in Cuba cannot directly hire Cuban workers and must instead go through a Cuban government staffing agency. These staffing agencies generally take a significant portion of the salary that the foreign firm pays to the workers. In addition to the authorized private sector, there is evidence of significant informal private sector activity in Cuba. 
For example, El Paquete, an electronic bundling of news and entertainment content, is by some estimates the largest private sector business on the island. Although it is not an authorized form of private sector activity, it is well known and tolerated by the Cuban government, according to U.S. officials and Cuba experts. Even private sector businesses that are legally authorized may operate in legal grey areas. For example, owners of legal, private restaurants must frequently resort to the black market to get necessary supplies. There are also legal businesses that stretch the terms of their license. For example, one Cuba expert we interviewed noted that some cell phone repair shops will also offer other services, such as loading offline applications onto cell phones. U.S. government and international institution data on the size of the Cuban private sector are limited. In addition, the methodologies the Cuban government uses to produce data on various measures of the Cuban economy are not readily transparent, and certain key data are not publicly reported. For example, the Cuban government does not publish data on the private sector’s share of Cuba’s GDP. However, the Cuban government does report data on the share of the Cuban workforce in the private sector. Experts we interviewed generally consider official Cuban labor market data to be reliable, and these data have been used in a range of analyses that we reviewed. According to our analysis of Cuban labor force data, the Cuban private sector has grown rapidly since 2008. For example, as shown in figure 6, the number of licensed cuentapropista workers grew from 142,000 in 2008 to approximately 500,000 in 2015, according to Cuban government data. As shown in figure 7, Cuban government data indicate that as of 2015, approximately 29 percent of Cuban workers were employed in the private sector, an increase of approximately 12 percentage points from 2008. 
Some analyses we reviewed and experts we interviewed indicated that at least some of the growth of the Cuban private sector in recent years has been driven by the formalization of previously unauthorized or informal private sector activity, rather than the creation of new employment. Although the size of the Cuban private sector has grown since 2008, there are some indications that this growth has slowed recently. For example, according to reporting by the Economist Intelligence Unit, Cuba’s Ministry of Labor and Social Security stated that the number of licensed cuentapropistas declined in the second half of 2015 after hitting a high in May 2015. However, the Economist Intelligence Unit reported that the number of licenses subsequently recovered to previous levels in the first quarter of 2016. In addition, U.S. officials and experts we interviewed noted that the Cuban government appears to have slowed down or even stopped the approval of new nonagricultural cooperatives. The Cuban government’s plans to shift some employment to the private sector have also not been fully realized. For example, in 2012, the Cuban government announced plans to privatize almost 13,000 state-run eateries and personal service providers; however, as of 2015, only 108 of these businesses had become operational in the private sector, according to State officials. More recently, in October 2016, news outlets reported that the Havana provincial government announced that it was temporarily halting approvals of new licenses for private restaurants in Havana. The U.S. government has not independently analyzed the size and scope of the Cuban private sector; however, some academic studies have developed estimates of the size of the Cuban private sector. These estimates generally found that 25 to 35 percent of the Cuban workforce is in the private sector. There were some differences in the scope of private sector activity included in these estimates. 
For example, some estimates included joint venture employees, while others did not. Some estimates sought to quantify the number of Cubans who work part time or unofficially within the private sector; others did not. For example, one analysis estimated that there are likely between 400,000 and 800,000 Cuban government workers who earn significant private income to supplement their government salaries. Although increasing, the percentage of Cubans working in the private sector remains small compared to other countries. The ILO maintains a database that collects information on employment by institutional sector—public or private—from national statistical agencies. Based on our analysis of these data, the share of Cuba’s workforce in the private sector is smaller than in all 16 other countries—for which data were available—that fall into the same World Bank income category. The percentage of Cuba’s workforce in the private sector was approximately 28 percent in 2014, compared to a median of approximately 83 percent for the other 16 countries (see fig. 8). Most countries in this income group had a substantially larger share of employment in the private sector. Of the 16 comparable countries that we reviewed, Belarus had the next lowest percentage of its workforce in the private sector after Cuba, with 61 percent of its workforce in the private sector. The percentage of the Cuban workforce in the private sector was also smaller than that of Vietnam, the only other communist country for which the ILO had data. As of 2014, approximately 89 percent of Vietnam’s workforce was in the private sector. Although Cuban government reforms in recent years have created additional space for private sector activity, various sources noted that legal private sector activity is still highly constrained and circumscribed by the state. 
For example, in the 2016 Index of Economic Freedom, compiled by The Heritage Foundation and The Wall Street Journal, Cuba ranked 177 out of the 178 countries assessed, with only North Korea rated as less economically free. As described previously, the Cuban government authorizes private sector activity only in certain prescribed areas, with many sectors of the economy exclusively reserved for state-owned enterprises or government ministries. Even within authorized areas, the Cuban private sector faces many limits on its operations and an array of challenges, given the nature of the Cuban economic system. Some experts on Cuba have thus questioned whether it is accurate to consider Cuba’s private sector truly private. Based on analyses we reviewed and interviews we conducted with U.S. officials, Cuba experts, and representatives of the Cuban private sector, we identified a number of challenges that the Cuban private sector faces. These include the following, among others: Lack of access to inputs. According to State and Cuban private sector representatives, the Cuban government monopolizes the country’s wholesale distribution system and significantly limits the Cuban private sector’s access to it. As a result, the Cuban private sector must frequently purchase inputs in retail rather than wholesale markets. In addition, the Cuban private sector frequently experiences challenges obtaining needed inputs for its operations or finds that the inputs it is able to obtain through official distribution channels are not of sufficient quality. These challenges extend to a range of supplies and equipment. For example, Cuban private restaurant owners have difficulty obtaining a range of items, from fresh produce, to coffee makers, to lighting fixtures. One cooperative representative we interviewed stated that private construction workers face challenges obtaining building materials and construction equipment. Inability to directly import or export. 
The Cuban government controls all imports and exports in the country. Thus, the Cuban private sector does not have the ability to directly import or export items. For example, one representative from the Cuban private sector we interviewed noted that her company must go through a specific state trading agency to obtain needed inputs for her business. She said that her overseas suppliers have at times backed out of transactions due to this state trading agency’s delays in arranging procurements. Given these restrictions, many in the private sector travel abroad themselves or rely on other individuals to bring back needed items. Limited access to financing. The Cuban private sector is also challenged by its limited ability to access financing. According to State and USITC reporting and other analyses of the Cuban economy we reviewed, the Cuban government does not allow private banking, and the Cuban private sector does not have access to capital markets to raise funds. As part of reforms announced in 2011, the Cuban government authorized state banks to make loans to private businesses; however, these loans are capped at $400. In addition, the Cuban government prohibits foreign investors from extending loans to private Cuban businesses. Limited legal protection and other legal uncertainties. The Cuban private sector also faces challenges due to limited legal protections and other legal uncertainties. For example, U.S. officials, Cuba experts, and Cuban private sector representatives we interviewed noted that cuentapropistas do not have legal status as companies and are treated as individuals under Cuban law, even if they have multiple employees. The Cuban government also considers nonagricultural cooperatives to be experimental, with approvals for new nonagricultural cooperatives taking place at the highest levels of the state. Unfavorable tax structure. 
According to various sources, the Cuban government maintains an onerous tax system that discourages private sector investment and hiring. For example, while cuentapropistas are allowed to hire workers, their tax rates increase significantly when they hire more than five workers, which creates disincentives for business growth, according to State reporting and other sources. In addition, Cuba’s tax system does not allow cuentapropistas to itemize their expenses and instead sets standard deductions depending on the job category, according to U.S. officials and Cuba experts we interviewed. Lack of access to the Internet and other technology. The Cuban private sector is also challenged by limited access to the Internet, slow Internet connection speeds, and other technology issues. For example, the Brookings Institution has reported that fewer than 5 percent of Cubans have regular access to the Internet. Infrastructure and other capacity issues. Cuba also faces a range of infrastructure and capacity issues that affect the private sector. For example, Cuba has recently implemented power cuts as electricity demand across the island has surged and fuel supplies from Venezuela have reportedly declined. In addition, Cuba lacks refrigerated storage space and refrigerated trucks, which results in frequent food spoilage. Despite these challenges, various analyses of the private sector that we reviewed, as well as experts and U.S. officials we interviewed, noted that many Cubans in the private sector have created dynamic businesses that have flourished. For example, officials from one nonagricultural cooperative reported that earnings for workers had gone from approximately $100 a month to $2,000 a month after transitioning from a state-owned enterprise to a cooperative. U.S. regulatory changes have created some new opportunities in Cuba, but economic engagement is still limited. Since December 2014, the U.S. 
government has made six sets of regulatory changes to ease restrictions on travel, remittances, financial services, and trade with Cuba. These regulatory changes have generated interest among U.S. businesses, and some new commercial activities have occurred, particularly related to tourism. However, a relatively limited number of commercial deals have been completed, and U.S. exports to Cuba have continued to decline. The changes in regulations on travel and remittances are also expected to benefit the Cuban private sector through increased remittances and purchases from U.S. visitors, among other things. The remaining embargo restrictions and Cuban government barriers limit additional economic engagement between the two countries. The U.S. government has made a series of regulatory changes to the CACR and the EAR since the administration announced its new Cuba policy in December 2014. Treasury and Commerce issued the first set of changes in January 2015, followed by additional changes in June and July 2015, September 2015, January 2016, March 2016, and October 2016. These regulatory changes have eased restrictions on travel, remittances, financial services, and trade with Cuba and have also allowed for some limited forms of investment by U.S. companies in Cuba. Key changes include the following: Travel. Treasury has expanded the scope of travel that is allowed under some of the 12 categories of travel authorized by TSRA and has amended the regulations to allow U.S. travelers to use a general license, which requires no advance approval, for all 12 travel categories rather than having to apply for a specific license from Treasury prior to travel. For example, under the revised regulations, U.S. travelers may now travel to Cuba for people-to-people educational activities under a general license and without having to travel under the auspices of an organization that sponsors and organizes such programs. In addition, U.S. 
travelers may now travel under a general license to provide certain types of training to the Cuban private sector. Remittances. Treasury has made regulatory changes to the CACR that removed caps on remittances to Cuban nationals that had been previously set at $500 a quarter. There are now no limits on the amount of remittances given as a donation that can be sent to Cuban nationals. Financial services. Treasury has made a number of revisions to the CACR related to financial services. For example, Treasury has modified the CACR to remove financing restrictions on most types of exports and has modified the definition of “cash in advance,” a requirement for exportation of agricultural products, from “cash before shipment” to “cash before transfer of title and control.” In addition, Treasury has allowed credit and debit cards issued by U.S. banks to be used in Cuba. Treasury has also modified the regulations to allow U.S. banking institutions to open and maintain bank accounts in the United States for Cuban nationals in Cuba to use for authorized transactions. Trade. Commerce has revised the EAR to create a new “Support for the Cuban People” export license exception that authorizes exports (1) to improve living conditions and support independent economic activity in Cuba, (2) to strengthen civil society, (3) to improve the free flow of information among, and with, the Cuban people, and (4) of items sold directly to individuals in Cuba for their personal use or their immediate family’s personal use. In addition, Commerce has made regulatory changes to broaden existing license exceptions available for Cuba. For example, Commerce modified the license exception “Consumer Communications Devices” to allow for the commercial sale, lease, or loan of authorized items; previously, items covered by the exception could only be donated. 
Commerce has also modified its licensing policy to allow for the general approval of export licenses to Cuba for certain items, including telecommunications items and items that will support environmental protection. As part of the regulatory revisions, the U.S. government has also developed a list of items that are allowed for import into the United States from Cuba, if produced by independent Cuban entrepreneurs. According to various agency officials, the U.S. government has sought to refine the regulations over time to reflect input from U.S. businesses and to reflect the realities of the Cuban system. For example, Treasury officials stated that as part of the third set of regulatory revisions, Treasury modified the CACR to allow U.S. businesses to procure legal services in Cuba after hearing from the U.S. business community that to successfully operate in Cuba they needed to be able to obtain such services. U.S. regulatory changes have generated a significant amount of interest and exploratory work among U.S. businesses. An official at the U.S. embassy in Havana noted that the number of U.S. companies participating in a major trade fair in Cuba more than doubled from 2014 to 2015. There have also been several state and local U.S. trade missions to Cuba. For example, the Governor of New York led a trade mission to Cuba in April 2015 that included more than a dozen business leaders from the state. The U.S.-Cuba Trade and Economic Council reported that, as of August 2016, more than 500 senior-level representatives of U.S. companies had visited Cuba since the President’s policy announcement in December 2014. Despite the interest among U.S. businesses, U.S. officials, representatives of business associations, and Cuba experts noted that a relatively limited number of new commercial deals have been completed since December 2014. According to U.S. 
government and other reporting, many of the successful deals completed to date involve activities related to Cuba’s tourism industry. Starwood Hotels signed an agreement to manage three hotels in Cuba in March 2016. Carnival Corporation began offering cruises to Cuba in May 2016. A number of U.S. airlines will be providing regularly scheduled commercial flights to Havana and nine other cities in Cuba. JetBlue completed the first such flight, from Fort Lauderdale, Florida, to Santa Clara, Cuba, in August 2016. Airbnb, a U.S.-based company that allows users to list and book accommodations, began operating in Cuba in April 2015 and has over 4,000 listings. The U.S. government has also highlighted the fact that a number of U.S. telecommunications companies have signed roaming agreements with Cuba’s telecom operator as a key development since the administration’s policy change in December 2014. With these agreements in place, travelers with U.S. cellular providers can now access roaming voice and data services in Cuba. The U.S. government has also reported that some other U.S. businesses have taken steps to pursue new opportunities created by the regulatory changes. The Western Union Company, a U.S.-based payments services firm, announced plans to offer global remittance services to Cuba. Additionally, Stonegate Bank has begun to issue MasterCard debit cards, the first U.S. debit cards to be used in Cuba. Despite the loosening of some embargo restrictions, agency officials and U.S. business representatives stated that the regulatory changes have not created significant new opportunities for agricultural exports, which make up the vast majority of U.S. exports to Cuba and have been authorized since the passage of TSRA in 2000. Driven by declining agricultural exports, U.S. trade with Cuba has decreased since the regulatory changes. U.S. exports of goods and services to Cuba declined from $299 million in 2014 to $180 million in 2015. 
As shown in figure 9, this decline is a continuation of a longer-term downward trend from a high of $712 million in U.S. exports to Cuba in 2008. Over time, Cuba has increasingly shifted its agricultural purchases to the European Union and other countries such as Brazil, Argentina, and Canada. According to U.S. government analyses and officials, these countries are able to offer more favorable credit terms than U.S. producers given embargo-related restrictions on U.S. credit financing of agricultural exports. In addition, U.S. officials noted that the Cuban government is potentially making a political decision to purchase fewer U.S. exports to push for additional U.S. legal and regulatory changes. While overall exports have declined, U.S. businesses have conducted trade using some of the new authorities provided by the regulatory changes. For example, trade data from the U.S. Census Bureau indicate that approximately $800,000 worth of goods had been exported to Cuba under the “Support for the Cuban People” license exception from January 2015, when the license exception was created, to March 2016. However, until the most recent changes in October 2016, the regulatory revisions had involved a relatively narrow set of U.S. goods and services that could be newly exported. U.S. officials acknowledged that this has contributed to a relatively limited number of new commercial deals completed since the administration’s policy change. With the October 2016 regulatory revisions, U.S. exporters are now generally authorized to sell a variety of consumer goods online or directly through other means to individual Cubans for their personal use or the use of their immediate family. 
Even in those areas where the regulatory changes have created new trade opportunities, Commerce officials anticipate that there will likely be lags before any increases in exports are fully realized because exporters need time to become familiar with the new regulations, identify potential trading partners in Cuba, and arrange deals. Commerce export licensing data indicate that there has been an increase in the dollar value of U.S. goods approved for export to Cuba since the administration’s policy change in December 2014. For example, Commerce approved $2.1 billion worth of licenses for exports to Cuba in fiscal year 2015, compared to $1 billion in fiscal year 2014. As of June 30, 2016, Commerce had already approved $2 billion in export licenses to Cuba in fiscal year 2016. Export license applications can serve as an indicator of U.S. businesses’ interest in exporting to Cuba, but not all items licensed for export will ultimately be exported, according to Commerce officials. Authoritative data on travelers from the United States to Cuba are not available; however, Cuban government data indicate that U.S. visitors to Cuba increased by 77 percent from 2014 to 2015. The increase in U.S. visitors to Cuba is expected to benefit the Cuban private sector, which is concentrated in the tourism sector, including private restaurants (see fig. 10), bed and breakfasts, and taxi services. For example, Airbnb has reported that more than 13,000 Americans booked rooms in private Cuban homes from April 2015 through March 2016. These private homes, known as casas particulares, operate as cuentapropistas. Similarly, authoritative data on remittances from the United States to Cuba are also not available; however, estimates produced by outside groups suggest that remittances increased in 2015 as the U.S. government first increased and then eliminated caps on remittances to Cuba in January 2015 and September 2015, respectively. 
For example, estimates produced by the Havana Consulting Group indicate that remittances to Cuba were almost $3.4 billion in 2015, up 7 percent from 2014. A different estimate, produced by the Inter-American Dialogue, suggests that remittances were lower but still increased in 2015. The Inter-American Dialogue estimated that remittances to Cuba were over $1.3 billion in 2015, a 5-percent increase from 2014. Increases in remittances are expected to benefit the Cuban private sector. As part of the regulatory changes, Treasury specifically allowed remittances to be sent to Cuba to support the development of private businesses. Experts and officials we interviewed, as well as analyses we reviewed, indicate that remittances are an important source of capital for the private sector in Cuba. Other U.S. regulatory changes were also intended to benefit the Cuban private sector. For example, as discussed above, the U.S. government has developed a list of items that are allowed for import into the United States from Cuba, if produced by independent Cuban entrepreneurs. U.S. officials noted that they do not have data on the extent to which such imports have taken place, but U.S. officials we interviewed stated that such imports were likely limited to date. One Cuban private sector representative we spoke with noted that, using this authority, her company has made arrangements to sell certain products onboard Carnival Cruise Line ships traveling to Cuba. In addition, in June 2016, Nespresso announced that it would be importing coffee grown by private Cuban farmers into the United States. While the administration has used the President’s executive authority to make the six rounds of regulatory revisions since December 2014, Congress has not made statutory changes to modify or end the embargo. In addition, the Cuban government has not taken steps, specified in current U.S. law, that would authorize the President to suspend or terminate the embargo without congressional action. 
Pursuant to the LIBERTAD Act, the President is authorized to suspend the embargo if he or she determines that a transition Cuban government is in power and is required to terminate the embargo if he or she determines that a democratically elected Cuban government is in power. Consequently, U.S. law still limits U.S. businesses’ ability to engage with the Cuban private sector or pursue other economic opportunities in Cuba. For example, as part of its implementation of the embargo, Commerce maintains a general policy of denial on most exports other than those covered by license exceptions or otherwise specifically identified in the EAR. In addition, most U.S. investment in Cuba continues to be prohibited under the CACR. U.S. law also places a number of other restrictions on U.S. citizens. For example, although U.S. businesses may now offer financing on most types of goods authorized for export to Cuba, credit financing of agricultural exports to Cuba remains prohibited under TSRA. Various sources note that this prohibition significantly limits the competitiveness of U.S. agricultural producers given the generous credit terms other countries, such as Vietnam, provide to Cuba. In addition, although the CACR authorizes transactions incident to travel to Cuba for 12 specified categories, U.S. law still prohibits travel for tourist activities. Cuban government restrictions also affect U.S. businesses’ ability to engage with the Cuban private sector or pursue other economic opportunities in Cuba. The effects of a number of the regulatory changes cannot be fully realized until the Cuban government makes corresponding changes. U.S. officials stated that it has been challenging to get the Cuban government to agree to make such changes. One key Cuban government restriction that U.S. officials identified is that all U.S. exports must go through one of Cuba’s state trading agencies. For example, according to U.S. government reporting and officials, all U.S. 
agricultural exports must go through the Cuban state trading agency Alimport. As a result, U.S. businesses cannot trade directly with the Cuban private sector or other state-run companies. In addition, while the January 2015 regulatory revisions allowed for microfinancing to support the growth of the Cuban private sector, the Cuban government continues to prohibit the private sector’s access to such financing from foreign investors, according to State reporting. The Cuban government also has not granted approvals to U.S. companies seeking to pursue opportunities in Cuba. Cleber, a U.S.-based manufacturer of small farm and light agricultural equipment, received approval from the U.S. government to set up a tractor assembly facility in Cuba. However, after more than a year of negotiation, the Cuban government rejected Cleber’s proposal. U.S. agencies have conducted a range of activities to support U.S. economic engagement with Cuba’s private sector but have limited information on the effects of their efforts. While U.S. regulatory changes have created opportunities for greater economic engagement with Cuba, prohibitions on U.S. assistance, resource constraints, and Cuban government priorities affect U.S. agencies’ ability to support U.S. businesses or engage the Cuban private sector. Within these limitations, a number of U.S. agencies have engaged with the Cuban government, U.S. businesses, and the Cuban private sector to enhance understanding of the regulatory changes and increase opportunities for economic engagement. However, agencies have not taken steps to collect and document key information that will enable them to monitor changes in economic engagement resulting from the President’s initiative. U.S. law prohibits many forms of assistance that might be able to increase economic engagement with Cuba. 
In particular, agencies may not provide export assistance or credit or guarantees for exports to Cuba, and foreign assistance for Cuba is subject to legislative restrictions that prohibit most types of assistance other than democracy assistance. In other countries, U.S. agencies can increase economic engagement through a range of programs that are prohibited with respect to Cuba, such as the ones noted below: Commerce, through its U.S. Commercial Service, offers a range of export assistance services to U.S. businesses seeking to enter markets in other countries. These services include—among others—basic market research, organizing trade missions, and coordinating one-on-one matchmaking meetings with potential business contacts. USDA partners with agricultural trade associations, cooperatives, and other groups to share the costs of overseas marketing to develop commercial export markets for U.S. agricultural goods. Several finance agencies, including the Export-Import Bank of the United States and the Overseas Private Investment Corporation, provide credit and other support to U.S. companies doing business overseas. As part of foreign assistance efforts, State and USAID conduct economic capacity-building projects and develop financing solutions to support private enterprise in developing countries. State officials noted that resource constraints limit the ability of the U.S. embassy in Havana to support the Cuban private sector and U.S. economic engagement with Cuba. Embassy officials noted that their workload has increased substantially while their level of staffing has remained the same since the embassy opened in July 2015, after the restoration of diplomatic relations (see fig. 11 below). In particular, they now coordinate a large volume of official visits to Cuba. State officials said that limited staff and financial resources have affected the embassy’s ability to provide economic reporting and travel outside of Havana to observe private sector activity. 
The embassy operates with 51 U.S. direct-hire staff. According to State officials, this is approximately one-third of the staff at embassies in similarly sized countries, such as the Dominican Republic. The U.S. and Cuban governments have agreed to increase caps on the number of permanent positions at their respective embassies to 76 direct-hire staff. However, the embassy has not received funding for additional staff. State officials said that the embassy’s level of resources affects its ability to accommodate the presence of other U.S. agencies that could provide expertise and support on economic diplomacy. The Department of Homeland Security is currently the only other agency with permanent staff at the embassy. Since the embassy reopened, State has not accepted requests from other U.S. agencies for permanent positions in Havana. In November 2015, USDA submitted a request to State to establish an office at the embassy. USDA stated that a permanent in-country presence at the embassy was necessary for the agency to continue to advance relations with the Cuban Ministry of Agriculture and gain firsthand knowledge of Cuba’s agricultural challenges and opportunities. However, State did not approve the request, citing the embassy’s full workload, aging physical infrastructure with a lack of available workspace, inadequate housing, and the lack of administrative resources needed to support other agencies. In lieu of a permanent presence, USDA places one rotating staff member on temporary assignment at the embassy. The Cuban government’s priorities also affect the ability of U.S. agencies to support the Cuban private sector. Agency officials noted that the Cuban government’s priorities are not aligned with U.S. government objectives in most cases. For example, the United States made regulatory changes to allow U.S. 
firms to export certain categories of items to the Cuban private sector, but the Cuban government has not authorized the private sector to import items directly. Even though the Cuban government has authorized increased private sector activity in some areas, its priority is to increase investment and economic opportunities for state-owned enterprises, according to State officials. State officials said that the Cuban government has opposed U.S. government efforts to target the Cuban private sector as the beneficiary of regulatory changes or other initiatives. They stated that the Cuban government declined a proposal to establish a working group on small business. As an alternative to creating specific programs for Cuban entrepreneurs, which the Cuban government may object to, State officials said that they have promoted existing programs that are available to applicants from several countries. For example, State’s Women’s Entrepreneurship in the Americas program provides training and mentoring to women throughout Latin America and the Caribbean. Since the President’s December 2014 Cuba policy announcement, a number of U.S. agencies have conducted activities to increase economic engagement with the Cuban private sector and expand U.S. economic opportunities in Cuba (see fig. 12). Agencies have engaged (1) the Cuban government, (2) U.S. businesses and other organizations, and (3) the Cuban private sector. More recently, the President’s October 2016 presidential policy directive has provided additional guidance to further specify the roles and responsibilities of the various U.S. agencies involved in the implementation of the administration’s Cuba policy. Engagement with the Cuban government: U.S. agencies’ key activities have included high-level diplomatic events; several rounds of technical discussions; and memoranda of understanding (MOU) covering a range of issues. 
In addition to a presidential visit and six cabinet-level visits to Cuba, there have been three key forums for U.S. engagement with the Cuban government on economic issues: (1) the Bilateral Commission, (2) the U.S.-Cuba Regulatory Dialogue, and (3) the Economic Dialogue. State has conducted five rounds of the Bilateral Commission with Cuba’s Ministry of Foreign Affairs. The commission has covered a range of issues, including cooperation on human rights, regulatory issues, agriculture, telecommunications, and civil aviation. State officials said that this is a key forum for establishing a framework for engagement between the two countries. Commerce, Treasury, and State have conducted three rounds of the U.S.-Cuba Regulatory Dialogue. The purpose of the dialogue is to increase understanding of the economic systems of both countries. U.S. officials have also used the dialogues to encourage the Cuban government to make corresponding changes to maximize the effect of U.S. regulatory revisions. Commerce and State held the inaugural session of the Economic Dialogue with the Cuban government in September 2016 to discuss long-term bilateral engagement on a variety of economic issues. State officials said the two governments agreed to follow up with technical working group meetings on three issues, in particular: (1) renewable energy and energy efficiency, (2) intellectual property rights, and (3) economic cooperation. The U.S. and Cuban governments have also signed six MOUs, including one signed by the Secretary of Agriculture and the Cuban Agriculture Minister on technical cooperation on agriculture and forestry issues. As a result of the MOU, USDA has developed plans to conduct technical exchanges on a variety of topics, including organic food production and plant and animal health. Other MOUs covered issues including public health, marine protected areas, civil aviation, trade and travel security, and maritime navigation. Outreach with U.S. 
businesses and other organizations: U.S. agencies’ key activities have included providing written guidance on the regulatory changes; conducting informational events and conference calls; and responding to individual inquiries from businesses, trade associations, universities, and other organizations interested in Cuba. Since the first set of regulatory changes was announced in January 2015, Treasury and Commerce have published comprehensive fact sheets and answers to frequently asked questions to clarify the changes. To further explain the changes and respond to questions about the regulations, Treasury and Commerce officials have participated in a number of events with U.S. businesses. Commerce hosted or participated in the following events with U.S. businesses between January 2015 and June 2016: 14 conference calls hosted by Commerce with more than 1,700 participants; 9 calls hosted by other agencies with more than 2,500 participants; 92 meetings with individual companies with more than 380 participants; and 29 trade association events with more than 1,260 participants. Treasury officials stated that, in addition to participating in most of the events hosted by Commerce, they had conducted several events specific to financial institutions. Treasury officials also highlighted two conferences on economic sanctions they hosted that included 750 to 1,000 participants each; they also participated in two banking seminars—in Havana and New York—that included representatives from banks in Cuba, the United States, and other countries. Agency officials also reported that they have responded to a high volume of inquiries from businesses, trade associations, universities, and other organizations that are seeking additional information on the regulatory changes. For example, Commerce officials stated that they have responded to almost daily inquiries regarding the regulatory changes since the new Cuba policy was announced in December 2014. Embassy officials stated that they interact frequently with numerous U.S. 
businesses exploring the Cuban market through meetings, briefings, and explanations of the changing Cuban environment. USDA has produced reports on U.S. agricultural trade with Cuba and collected general information from its program partners regarding their business activities with Cuba. USDA announced in March 2016 that its program partners may use industry funds sourced, for example, through its Research and Promotion programs to conduct cooperative research and information exchanges with Cuba; however, among other restrictions, USDA officials said that program partners may not use funds to offer training in Cuba. Outreach with the Cuban private sector: U.S. agencies’ key activities have included meeting with Cuban entrepreneurs, arranging informational meetings for U.S. business delegations, promoting training opportunities, and hosting formal events during official visits to Cuba. Embassy officials noted that they learned about the challenges Cuban entrepreneurs face by visiting their places of business and conducting regular outreach. According to embassy officials, they regularly coordinate meetings between U.S. business delegations seeking to learn about Cuba’s private sector and Cuban entrepreneurs. Several Cuban private sector representatives said they had traveled to the United States to take business classes sponsored by U.S. universities or other organizations, which they had learned about through interactions with U.S. officials. They noted that the embassy had provided information regarding these opportunities and facilitated their communication with U.S. universities or other organizations. According to the owners of one private business, the business administration and marketing skills gained through training at a U.S. university led to improvements in their business operations. U.S. agency officials have met with Cuban private sector representatives during official visits. 
For example, the President participated in an entrepreneurship and opportunity event during his visit to Havana in which Cuban entrepreneurs discussed their experiences and the challenges they face. During the third U.S.-Cuba Regulatory Dialogue in Havana in July 2016, Treasury and Commerce officials responded to questions about U.S. regulatory changes during a meeting with Cuban entrepreneurs. Treasury and Commerce officials noted that Cuban entrepreneurs were interested in how they could pay for U.S. goods and how Cuban entrepreneurs could receive payment for services provided to U.S. firms. State has also created opportunities for a small number of entrepreneurs through its public diplomacy programs, such as the ones noted below. Eight Cuban entrepreneurs were selected to participate in its Young Leaders of the Americas Initiative in fall 2016, which involved internships and skills-building workshops for young leaders from Latin America and the Caribbean, among other activities. The embassy’s Public Affairs Section highlighted entrepreneurship as one of three top priorities included in its grant opportunities for individuals and organizations to conduct programs in Cuba pertaining to the arts, academia, sports, entrepreneurship, technology, education, and youth. State noted that the embassy had received three applications related to entrepreneurship. Embassy officials explained that one challenge is finding a qualified implementing partner that is not connected to the Cuban government, which is a requirement of the grants. State and other agencies have not taken steps to collect and document key information that would enable them to monitor changes in economic engagement, including with the Cuban private sector, resulting from the President’s initiative. Although agencies have communicated frequently with each other regarding their activities, they have not collected or documented key information on (1) the Cuban economy, (2) the effects of U.S. 
regulatory changes, and (3) agency activities. GAO’s Standards for Internal Control in the Federal Government states that agencies should use quality information to evaluate performance in achieving key objectives. Without collecting and documenting information, U.S. agencies risk being unable to monitor and assess changes in economic engagement since the December 2014 policy change. This information could also support agencies’ implementation of the President’s October 2016 directive. The directive stated that the National Security Council will convene interagency meetings to monitor implementation and resolve obstacles to progress on the administration’s goals and objectives. Collecting and documenting key sources of information may be difficult because of the opacity of the Cuban economy and the lack of typical or authoritative sources of information on private economic activity but will nonetheless be critical to understanding changes in U.S.-Cuba economic engagement. Information on the Cuban economy: Agencies have not collected and documented information on the Cuban economy that they would need to monitor changes in U.S.-Cuba economic engagement. Agency officials noted a lack of transparency in how the Cuban government produces data and that there is limited information on the Cuban economy, in part, because Cuba is not a member of—and thus not reporting data to—international financial institutions. As a result, agency officials gather information through meetings with Cuban officials, Cuban entrepreneurs, and Cuba experts; and through monitoring written sources. However, agencies generally have not documented and reported the information they have learned. In a proposal submitted to its Diplomacy Lab—a partnership with researchers from U.S. universities—State noted the need to collect baseline information on the Cuban economy to be able to measure the impact of the easing of bilateral relations. 
The baseline study would have involved analyzing and assessing the validity of existing data sources on the Cuban economy. However, according to State, the proposal did not generate sufficient interest from the Diplomacy Lab to move forward with the study. In addition, the embassy has not documented key information related to the Cuban economy that could benefit other agencies. While recognizing resource constraints at the embassy, U.S. officials from two agencies said that they would benefit from additional economic reporting from the embassy given the general lack of quality information on Cuba’s economy. For example, Commerce officials noted that having more knowledge initially regarding Cuba’s system for importing and exporting could have been informative to earlier rounds of regulatory changes. USDA officials noted that they would benefit from additional information on Cuban rules and regulations related to agriculture. Effects of U.S. regulatory changes: Agencies have not collected data on key activities related to the regulatory changes. State has noted the importance of increased travel and remittances to the U.S. government’s goal of supporting Cuba’s private sector. However, as a result of changes to general licensing for authorized categories of travel, Treasury has limited information regarding U.S. travel to Cuba. Similarly, since caps on remittances have been lifted, Treasury has limited information on U.S. remittances to Cuba. Alternative sources of data on travelers and remittances exist, but U.S. agencies have not determined what additional information, if any, would be required to monitor these regulatory changes or assessed whether existing sources of information would be adequate. State officials highlighted the potential benefits for the Cuban private sector of the regulatory changes allowing U.S. entities to import goods and services from independent Cuban entrepreneurs. 
However, State does not have data or documentation on the extent to which goods or services produced by Cuban entrepreneurs have been imported into the United States using this authority but noted anecdotal evidence suggesting that the authority has been used. Even though information regarding the economic effects of the regulatory changes is limited, Treasury officials stated that the easing of restrictions on travel and commerce was a goal of the President that was achieved through the regulatory changes. State and Treasury officials also said that some agencies that might otherwise monitor the economic effects of the regulatory changes have not been as involved because of the legal prohibitions on export assistance. Information on agency activities: Agencies generally have not documented the results of key forums for U.S.-Cuba economic relations. For example, agencies have not produced comprehensive summaries of the Bilateral Commission meetings or the Economic Dialogue. Although Commerce has summarized key points and next steps from their meetings during the U.S.-Cuba Regulatory Dialogue, these documents summarized the proceedings for internal department purposes. Officials stated that relevant updates are communicated through interagency meetings chaired by the National Security Council and frequent informal communication. However, officials could benefit from additional documentation of key meetings and events. For example, officials from the embassy said that they are not informed regarding some details of discussions when bilateral meetings occur in Washington, D.C. USDA officials also noted that on some occasions they have been asked to provide input in preparation for bilateral events but were not briefed afterward regarding the results of the events. In addition, agencies have not documented key takeaways or challenges identified as a result of their outreach with U.S. businesses and the Cuban private sector. 
For example, Treasury and Commerce have not documented information they have learned from U.S. businesses and financial institutions during outreach events. Officials from both agencies noted that they have used information gained from events to make decisions about regulatory changes and shared information with relevant stakeholders during interagency meetings. However, without written documentation of information learned, stakeholders may not be able to monitor the effectiveness of the outreach efforts. For example, they may not be able to determine whether U.S. banking institutions are aware that they may process third-country commercial transactions related to Cuba. Similarly, embassy officials noted that outreach with the Cuban private sector has been a top priority since the restoration of diplomatic relations, but they have not documented information learned from these efforts. Without documentation, it may be unclear whether Cuban entrepreneurs have been able to take advantage of U.S. regulatory changes—for example, whether they have been able to open bank accounts in the United States to receive payment for products or services provided to U.S. clients. The embassy developed an Integrated Country Strategy (ICS) in March 2016 that establishes goals for engagement with the Cuban private sector and identifies indicators for monitoring progress toward these goals. These indicators involve a range of measures related to the Cuban economy and trade. However, embassy officials said that they have not yet started to collect the information needed to assess progress using these indicators. In addition, for several of the indicators, the embassy may need to work with other U.S. agencies to obtain the necessary information; however, officials from other agencies we interviewed stated that they had not been consulted regarding the ICS. President Obama’s December 2014 policy announcement marked a significant shift in U.S. policy toward Cuba. 
After more than 50 years of U.S. policy designed to isolate Cuba’s communist government, the President called for a new strategy of engagement. In particular, in setting out his policy, the President stated that the U.S. government would seek to support Cuba’s nascent private sector and create new openings for U.S. businesses to engage in Cuba. Since the policy change, agencies have conducted a range of activities to support increased economic engagement. However, there are still significant limits on such engagement. U.S. law prohibits many forms of assistance to Cuba that might be used to support the Cuban private sector or U.S. businesses that want to pursue economic opportunities in Cuba. In addition, agency officials stated that resource constraints at the U.S. embassy in Havana and Cuban government priorities affect the ability of the U.S. government to support the Cuban private sector. U.S. agencies have noted the importance of having quality information to support their efforts, particularly given the opacity of the Cuban economy and a lack of authoritative data sources. However, U.S. agencies have not taken steps to collect or document key information that could be used to monitor changes in economic engagement, including with the private sector, and address obstacles to progress. Without taking steps to collect and document information, agencies are hampered in their efforts to target their activities, assist future administrations in making decisions regarding Cuba, and inform congressional debate related to the embargo. To ensure that all relevant U.S. agencies have information on the effect of changes in U.S. policy related to Cuba, we recommend that the Secretary of State, in consultation with Commerce, Treasury, USDA, and other relevant agencies, take steps to identify and begin to collect the information that would allow them to monitor changes in economic engagement, including with the Cuban private sector. 
We provided a draft of the report to Commerce, State, Treasury, USITC, USAID, USDA, and USTR for review and comment. Commerce, State, Treasury, and USAID provided technical comments, which we incorporated as appropriate. State also provided written comments, which are reproduced in appendix II. In its written comments, State concurred with our recommendation. USITC, USDA, and USTR did not provide any comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretaries of Agriculture, Commerce, State, and Treasury, as well as the USAID Administrator, the Chairman of the U.S. International Trade Commission, and the United States Trade Representative. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III. The objectives of this review were to examine what is known about (1) the size and scope of the Cuban private sector, (2) the effect of changes to U.S. legal and regulatory restrictions related to Cuba on the Cuban private sector and U.S. businesses, and (3) the extent to which the U.S. government planned and implemented activities designed to increase U.S. engagement with the Cuban private sector and expand U.S. economic opportunities in Cuba. To determine what is known about the Cuban private sector, we analyzed U.S. government documentation describing the Cuban private sector. In addition, we conducted interviews with U.S. 
officials from the Departments of Agriculture (USDA), Commerce (Commerce), State (State), and the Treasury (Treasury); as well as the U.S. Agency for International Development (USAID), the U.S. International Trade Commission (USITC), and the U.S. Trade Representative (USTR) to learn more about the size and composition of the Cuban private sector, how it has changed over time, and key challenges it faces. We also conducted a literature review to identify relevant studies on the Cuban private sector completed by academics, think tanks, and other relevant organizations. To further assess the size of the Cuban private sector and how it has changed over time, we analyzed labor force data for 2008 through 2015 published by the Cuban government’s National Statistics Office (Oficina Nacional de Estadísticas de Cuba). To assess the reliability of Cuban employment data, we interviewed U.S. officials and Cuba experts who were familiar with the data. In addition, we examined how the data had been used in a number of other assessments of the Cuban economy, including any relevant limitations that these assessments identified. We determined that the data were sufficiently reliable for the purposes of this report. Using International Labour Organization (ILO) data on employment by institutional sector that it has collected from national statistical agencies, we also compared the relative size of Cuba’s public and private sectors to 16 other countries that fell into the same World Bank income category. To assess the reliability of ILO data, we obtained written responses from the ILO regarding its process for compiling data, how it defined employment categories, and other relevant information. We determined that the data were sufficiently reliable for the purposes of this report. To gather further information on the Cuban private sector, we conducted fieldwork in Havana, Cuba, in July 2016. During our fieldwork, we interviewed officials from the U.S. 
embassy in Havana, representatives from the Cuban private sector, representatives from Cuban organizations that provide training and other support to the Cuban private sector, and representatives from selected foreign embassies that have established commercial ties with Cuba. In addition, we conducted observations of a range of private Cuban businesses. As part of our fieldwork, we requested meetings with several Cuban ministries and academics at the University of Havana; however, the Cuban government denied our request for all of these meetings. After the completion of our trip, we were able to meet with officials from the Cuban embassy in Washington, D.C. In July 2016, we also attended the annual conference of the Association for the Study of the Cuban Economy in Miami, Florida, to obtain further information from a range of U.S. and Cuban experts on the status of the Cuban economy, recent and planned economic reforms in Cuba, and the size and composition of the Cuban private sector. The information on foreign law in this report is not the product of GAO’s original analysis, but is derived from interviews and secondary sources. To determine what is known about the effect of changes to U.S. legal and regulatory restrictions related to Cuba on the Cuban private sector and U.S. businesses, we reviewed relevant statutes related to the U.S. embargo on Cuba, including the Trading with the Enemy Act of 1917, the Foreign Assistance Act of 1961, the Cuba Democracy Act of 1992, the Cuban Liberty and Democratic Solidarity (LIBERTAD) Act of 1996, and the Trade Sanctions Reform and Export Enhancement Act of 2000. We also reviewed the two principal sets of regulations pertaining to the embargo: the Cuban Assets Control Regulations (CACR) and the Export Administration Regulations (EAR). In doing so, we assessed what changes U.S. agencies have made to the CACR and the EAR since December 2014, when the Obama administration announced the new U.S. policy on Cuba. 
To obtain further information on the six sets of regulatory changes that U.S. agencies have made since December 2014, we reviewed documents produced by Commerce and Treasury discussing the regulatory changes, including fact sheets, frequently asked questions documents, and briefings to U.S. companies. In addition, we analyzed USITC, USDA, and other U.S. government reports and documentation to gather information on the results of the regulatory changes since December 2014 and on how remaining statutory and regulatory restrictions affect U.S. businesses’ ability to engage with the Cuban private sector or pursue other economic opportunities in Cuba. To gain further information on the changes to the CACR and the EAR, we interviewed U.S. officials from Commerce, State, Treasury, USDA, USITC, and USTR, as well as officials from the embassy. We also interviewed a nongeneralizable sample of U.S. business association officials, nonfederal Cuba experts, and Cuban private sector representatives to obtain additional perspectives on the effects of the U.S. regulatory changes and how remaining U.S. and Cuban restrictions affect economic engagement between the two countries. In addition, we analyzed data on trade between the United States and Cuba from Commerce’s Trade Policy Information System. To assess the reliability of these data, we reviewed Commerce documentation on the Trade Policy Information System, an interface for accessing U.S. Census data on U.S. imports and exports, and prior GAO work using U.S. Census data. We determined that the data were sufficiently reliable for the purposes of this report. Finally, we analyzed Commerce licensing data to assess any trends in license applications to pursue authorized transactions related to Cuba under the EAR. To assess the reliability of these data, we interviewed Commerce officials and reviewed Commerce documentation on its licensing database. 
We determined that the data were sufficiently reliable for the purposes of this report. To determine the extent to which the U.S. government has planned and implemented activities to increase U.S. engagement with the Cuban private sector and expand U.S. economic opportunities in Cuba, we interviewed headquarters officials from State, Commerce, Treasury, USDA, USAID, and USTR. We also interviewed officials from the embassy and USDA’s Caribbean Basin Agricultural Trade Office in Miami, Florida. We discussed with officials activities they have conducted related to Cuba, their interpretations of key laws related to the embargo, and challenges they experience in conducting activities. We also interviewed a nongeneralizable sample of U.S. business association officials, Cuba experts, and Cuban private sector representatives to obtain their perspectives on U.S. activities related to Cuba. In assessing agencies’ efforts to monitor changes in economic engagement, including with the Cuban private sector, we compared their actions with GAO’s Standards for Internal Control in the Federal Government. Principle 13 of these standards states that agencies should use quality information to achieve their objectives. To examine the extent to which agencies have collected and documented information, we submitted questions and received written responses regarding agencies’ activities related to Cuba and the extent to which, if at all, they had documented their engagement with the Cuban government, U.S. businesses, and the Cuban private sector. We reviewed the embassy’s Integrated Country Strategy for Cuba and discussed with State officials the extent to which the embassy received input from other agencies regarding their plan. We also reviewed publicly available agency documents summarizing the results of the changes in the administration’s Cuba policy. 
We conducted this performance audit from January 2016 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Adam Cowles (Assistant Director), Ryan Vaughan (Analyst-in-Charge), Justin Gordinas, Michael Hoffman, Mark Dowling, Lynn Cothern, Jill Lacey, Neil Doherty, and Lilia Chaidez made key contributions to this report.
In the more than 50 years since it established an embargo on Cuba, the U.S. government has pursued a policy designed to isolate Cuba's communist regime. In December 2014, the President announced a significant change in U.S. policy. Since then, the U.S. government has restored diplomatic relations with Cuba and modified some aspects of the U.S. embargo. The Cuban government has also implemented economic reforms in recent years to allow for certain private sector activity. While much of Cuba's economy is still state-controlled and the U.S. embargo on Cuba remains in place, developments in recent years have created new opportunities for U.S. economic engagement with Cuba. This report examines what is known about (1) the size and scope of the Cuban private sector, (2) the effect of changes to U.S. legal and regulatory restrictions on the Cuban private sector and U.S. businesses, and (3) the extent to which the U.S. government has planned and implemented activities to increase U.S. engagement with the Cuban private sector and expand U.S. economic opportunities in Cuba. GAO analyzed U.S. government and other assessments of the Cuban private sector, analyzed Cuban government data, interviewed U.S. federal and nonfederal Cuba experts, and conducted fieldwork in Cuba. The Cuban private sector has grown rapidly since 2008 but remains small compared with other economies and faces various constraints. The Cuban private sector includes (1) self-employed entrepreneurs, (2) agricultural cooperatives and other private farmers, and (3) nonagricultural cooperatives. Cuban government data indicate that the percentage of the Cuban workforce in the private sector has grown from 17 percent in 2008 to 29 percent in 2015. However, the Cuban private sector is smaller than in 16 comparable countries GAO analyzed. It is also still highly constrained by the Cuban government and faces challenges, including a lack of access to needed supplies and equipment. U.S. 
regulatory changes have created some new opportunities in Cuba, but economic engagement is still limited. The U.S. government has made six sets of regulatory changes since December 2014 to ease restrictions on travel, remittances, financial services, and trade with Cuba. For example, the Department of Commerce created a new export license exemption to facilitate U.S. exports that support the Cuban people, including the private sector. The regulatory changes have generated U.S. business interest; however, relatively few commercial deals have been completed. In addition, U.S. trade with Cuba has decreased, driven by declining agricultural exports, which have been legal since 2000. Changes in remittance and travel regulations are expected to benefit the Cuban private sector through increased capital and purchases from U.S. visitors. Although the regulatory changes have created some new opportunities for U.S. businesses and the Cuban private sector, embargo restrictions and Cuban government barriers continue to limit U.S.-Cuba economic engagement. U.S. agencies have conducted a range of activities to support U.S. policy changes; however, embargo restrictions, resource constraints, and Cuban government priorities affect their ability to support U.S. businesses or engage the Cuban private sector. Within these limitations, the Department of State (State) and other U.S. agencies have engaged with the Cuban government, U.S. businesses, and the Cuban private sector. Among other things, they have established memoranda of understanding with the Cuban government, hosted events with Cuban entrepreneurs, and promoted training opportunities. However, U.S. agencies have not collected and documented key information on the Cuban economy, the effects of regulatory changes, and agency activities, in accordance with federal standards for internal control. 
Without collecting and documenting information, agencies risk being unable to monitor and assess changes over time in economic engagement with Cuba, including with the private sector. GAO recommends that State, in consultation with key agencies, take steps to identify and collect information to monitor changes in economic engagement resulting from the shift in U.S. policy. State concurred with the recommendation.
The Army’s vision for the 21st century mandates a land force that can operate in joint, combined, and multinational formations to perform a variety of missions, ranging from humanitarian assistance and disaster relief to major theater wars. The Army’s vision also requires that it be capable of putting a combat force anywhere in the world within 96 hours. To meet these objectives, the Army states that it must transform into a more deployable and strategically responsive force. This transformation process also dictates that the Army reengineer its logistics processes to increase responsiveness to its combat units and to provide the spare parts needed to maintain equipment readiness. In recent years, Congress has provided increased operations and maintenance funding for DOD to enable military units to purchase spare parts from the supply system as needed. For example, during fiscal years 1999-2002, Congress provided supplemental funding totaling $1.5 billion, of which the Army received $170 million in 1999, $25 million in 2001, and $200 million in 2002 to address spare parts shortages that were adversely affecting readiness. The Army now projects that it will spend over $7 billion during fiscal years 2003-05 to purchase spare parts for its combat and support systems. The Army Chief of Staff’s list of programs that need more funding indicates that the Army needs an additional $415 million to sustain the forces in fiscal year 2003 and $263 million to sustain them in fiscal year 2004 and, according to an Army official, to support Operations Enduring Freedom and Iraqi Freedom. A portion of these amounts would be used to purchase spare parts, but the Army did not provide a breakout of how the funds will be allocated. In July 2001, we reported that spare parts shortages in the Army were adversely affecting operations, maintenance, and personnel. 
For example, we reported that safety concerns and the lack of spare parts in 1999 prevented the Chinook and Apache helicopters from meeting their mission-capable goals. To compensate for the lack of spare parts, maintenance personnel used parts cannibalized from other equipment, an inefficient practice that doubles the time needed for a single maintenance effort. We also reported that the Army had major initiatives under way to improve the availability of spare parts as part of an overall strategy to revolutionize its logistics processes. The initiatives included improving demand forecasts for spare parts, increasing the visibility and access to spare parts Armywide, and reducing the time it takes to receive parts after they have been ordered. At that time, we did not assess the extent to which the initiatives might mitigate spare parts shortages. DOD is also concerned about the adverse impact that spare parts shortages have on the readiness of weapon systems. In an August 2002 report on its inventory management practices, DOD stated a desire to improve supply management accountability by linking investments in spare parts to readiness results in order to ensure that resources are focused on optimal readiness gains. DOD noted that the models it uses to determine inventory purchases are generally biased toward the purchase of low-cost items with high demands instead of the items that would improve readiness the most. The report recommended that the services improve their ability to make inventory investment decisions based on weapon system readiness. It also recommended that the services’ requests for funds to increase inventory investments be justified based on the corresponding increase in weapon system readiness. The Army’s current strategic plan provides strategic goals, objectives, milestones, and performance measures for force transformation efforts. 
However, it does not address how the service expects to mitigate critical spare parts shortages that degrade equipment readiness. As shown in figure 1, the Army published two plans during 2000 that were subsumed into a single plan in April 2001. These plans provided guidance for transforming the Army’s logistics to support forces that will be more agile and responsive. The Army’s Strategic Logistics Plan, published in May 2000, was designed to implement the guidance in the Army Chief of Staff’s vision for its forces in the 21st century. This plan outlined the major logistical requirements for achieving a joint, combined, or multinational force that can be used for a variety of missions, ranging from humanitarian assistance to major theater wars. For example, a major goal of the plan was to achieve total asset visibility, which was intended to give inventory managers information on the location, quantity, condition, and movement of parts worldwide. Total asset visibility would therefore allow managers to access and redistribute parts in the Army’s inventory to meet immediate spare parts requirements. In March 2000, DOD issued the Defense Reform Initiative 54, which required each military service to submit an annual logistics transformation plan. The Army’s effort was published in July 2000 as the Army Logistics Transformation Plan. The purpose of this plan was to document, on an annual basis, the planned actions and related resources for implementing the Army Strategic Logistics Plan. Generally, the logistics transformation plan outlined the interrelated activities necessary to support DOD’s four intermediate objectives: (1) establish customer wait time as a supply performance measure; (2) adopt a priority system that provides assets to the commander by the required delivery date; (3) achieve accurate total asset visibility of existing spare parts; and (4) field a Web-based system that provides seamless, interoperable, real-time logistics information. 
In April 2001, the Army published its Transformation Campaign Plan, an all-encompassing document that serves as a mechanism for integrating and synchronizing the necessary actions to move the Army from its present posture to a future force that will be more strategically deployable and responsive. The plan contains specific goals and objectives to provide logistical support to deploy and sustain its forces across a full spectrum of operations, and it incorporates the criteria for an effective strategy contained in GPRA. Furthermore, according to Army officials, the Army monitors the progress of its efforts to ensure that logistics decisions, goals, and milestones complement and support the entire transformation process. For example, one strategic goal contained in the plan requires the Army to be able to deploy a combat brigade in 96 hours. The plan dictates that the Army measure its ability to deploy combat brigades by employing major decision points at which senior leaders will evaluate progress and decide whether adjustments need to be made to the original combat brigade deployment strategy. However, there are no such strategic goals, objectives, or performance measures in this Army plan relating to monitoring and resolving critical spare parts shortages. As shown in table 1, the plan contains 14 lines of operation—or broad responsibilities—that describe closely related activities designed to meet specific transformation objectives by established milestones. Logistics requirements are addressed by line 9 in the plan, “Deploying and Sustaining the Force.” Specifically, this line of operation addresses how to transform Army support elements to make the service more strategically responsive and reduce the cost for logistics without reducing war-fighting capability. The Army’s key logistics initiatives were designed to improve internal business processes, but not specifically to mitigate critical spare parts shortages. 
Its six ongoing servicewide initiatives are primarily focused on improving logistics business processes in the areas of (1) procurement and repair of spare parts, (2) inventory management, and (3) supply operations, thereby improving supply availability. However, we could not determine the extent to which they have reduced critical spare parts shortages. The Army recently started a separate, non-Armywide readiness enhancement initiative that includes an effort to mitigate critical spare parts shortages. The Army’s six major initiatives are expected to improve overall logistical support for its units by focusing on improving logistics processes in order to be more responsive and effective in meeting customer needs. Table 2 summarizes the Army’s initiatives by focus area along with the expected improvements to logistics operations. The Army’s Partnership, Recapitalization, and National Maintenance Program initiatives are intended to improve the parts supply process, reduce demand through modernization of major weapon systems, and provide uniform repair standards. The expected improvements are being measured in a variety of ways, but none measure or track increases in supply availability and readiness rates. Without such measures, we could not determine the extent to which the initiatives have significantly reduced critical spare parts shortages. The Army is forming partnerships with manufacturers to provide spare parts and technical assistance directly to the applicable maintenance depot in order to improve depot-level repair of selected weapon systems and to improve the depot’s performance in supplying repaired parts. The Army has formed partnership agreements with General Electric Aircraft Engines, Sikorsky Aircraft Corporation, Boeing, Parker-Hannifin, Honeywell, Rolls Royce, and Bell Helicopters. 
Some of these companies have agreed to provide spare parts and technical assistance directly to the Corpus Christi Army Depot, where depot-level repair is performed for the Apache and Chinook helicopters. According to an Army official, these agreements are beneficial for the Army as well as the industry partners. The Army improves repair operations and saves money by obtaining hard-to-get, sole-source parts and technical assistance for a negotiated cost, and the industry partner is able to keep production lines open by relying on steady demands from the Army. The Army official said that the partnership initiatives have resulted in significant improvements to its depot repair operation. For example, the average elapsed time before the engine in the Apache and Blackhawk helicopters would fail has improved from about 400 hours to about 1,140 hours. Moreover, the repair-cycle time for components in the partnership program has decreased from 360 to 95 days, thereby decreasing the demand for spare parts by providing units with more reliable equipment and achieving more efficient supply performance. The Army’s Recapitalization Program is expected to return 17 selected legacy weapon systems to like-new condition by rebuilding and upgrading them at maintenance depots over time as funds become available. Specifically, the Recapitalization Program is intended to (1) extend the service life of the equipment; (2) reduce operating and support costs; (3) improve reliability, maintainability, safety, and efficiency; and (4) enhance capabilities. The Army began recapitalizing a limited number of the weapon systems in fiscal year 2002, with full-scale operation beginning in fiscal year 2003 (see app. I for a list of systems). In fiscal year 2003, the Army fully funded the initial spare parts requirements of the Recapitalization Program, investing at least $419.7 million of its operations and maintenance funding to run the program. 
An Army official said that about $200 million was taken from the Recapitalization Program to help with the Iraq war, but the program will be reimbursed from the supplemental appropriation. According to Army officials, recapitalizing Army weapon systems will initially increase the demand for spare parts because new parts will be used for equipment that is cycled through the rebuilding and upgrading process. However, in the long term, the like-new equipment should be more reliable and the demand for spare parts should decrease. The National Maintenance Program is expected to establish, by fiscal year 2005, a single national standard for the repair of equipment components and spare parts. The program’s overhaul standard is generally higher than the variety of standards held by individual repair units, and consists of restoring components and spare parts to a nearly like-new condition. This condition includes the restoration of the part’s original appearance, performance, and life expectancy. The National Maintenance Program is intended to help sustain the weapon systems that have undergone overhauls and rebuilds through the Army’s Recapitalization Program. In fiscal years 2001 and 2002, the Army obligated $70 million and $16 million, respectively, for the development of maintenance standards and program support. The Army has completed overhaul standards for 521 items and is expected to complete standards for the remaining 272 items by fiscal year 2005. The expected benefit of the National Maintenance Program is that a single higher repair standard for components and spare parts will enhance weapon system readiness and reduce the demand for spare parts. The Army is improving inventory management through its Single Stock Fund and Logistics Modernization Program initiatives, which are intended to provide better visibility over spare parts in the inventory, improved spare parts requirements determination, and an enhanced inventory distribution process. 
Like the procurement and repair initiatives discussed above, these initiatives do not measure progress in reducing critical spare parts shortages that affect readiness. In response to a recommendation in our 1990 report, the Army approved a business process reengineering initiative called the Single Stock Fund in November 1997. The Single Stock Fund is aimed at improving inventory management by (1) providing worldwide visibility and access to spare parts down to the installation level, (2) consolidating separate national and installation level inventories into a single system, and (3) integrating logistics automated information systems and financial automated information systems. The Single Stock Fund streamlines and, where needed, eliminates multiple financial transactions that have previously caused numerous inefficiencies in duplicate automated legacy systems. The visibility of worldwide supply items allows managers to calculate worldwide spare parts requirements and increases the volume of inventory that is available for redistribution to meet priority readiness requirements. For example, the Secretary of the Army testified in 2003 before the Senate Armed Services Committee that from May 2000 through November 2002, the Single Stock Fund made it possible to redistribute inventory valued at $758 million. He further stated that the Single Stock Fund reduced customer wait time by an average of 18.5 percent. The Logistics Modernization Program is aimed at improving inventory management by modernizing the Army’s 30-year-old national and retail logistics automated business processes and practices. The Logistics Modernization Program is intended to provide an automated system with real-time capabilities for managing wholesale and retail inventories by modernizing and integrating about 30 legacy logistics databases. The program includes about 47 new forecasting methodologies to enable managers to better forecast demands for spare parts. 
The Logistics Modernization Program’s integrated automated systems should reduce supply-cycle time and provide managers with the ability to better support customers by tracking spare parts requisitions from the time the requisition is submitted until the customer receives the part. Moreover, the program is to work in tandem with the Single Stock Fund to provide worldwide visibility of supply assets in real time. The Army Materiel Command plans to roll out the Logistics Modernization Program over the next several years, with the first phase of implementation scheduled in early 2003. The program’s measures of success include reducing supply-cycle time, but not supply availability and equipment readiness. The Army is also trying to improve its supply operations and reduce the time it takes to deliver spare parts to customers through the Distribution Management initiative. Distribution Management is an Armywide initiative established in 1995 to improve supply operations by developing a faster, more flexible, and efficient logistics pipeline. The initiative’s overall goal is to eliminate the unnecessary steps in the logistics pipeline that delay the flow of parts through the supply system. Distribution Management currently uses two teams—the Distribution Process Improvement Team and the Repair Cycle Process Improvement Team—to monitor progress and spearhead continuous improvements within their respective areas of responsibility. However, the extent to which supply availability has been improved is not clear because neither team tracks this as a measure of success. The Distribution Process Improvement Team promotes initiatives to improve the Army’s inventory distribution processes, including customer response, inventory planning, warehouse management, transportation, and supply. For example, the team initiated dollar-cost banding, a new stock determination algorithm that has improved inventory performance. 
Traditionally, Army units have used a “one-size-fits-all” approach for determining whether or not to stock a particular spare part. Consequently, an item not currently stocked would need nine requests in the prior year to be stocked on the shelf, regardless of its criticality to equipment readiness. This criterion was applied equally to a 10-cent screw and to a $500,000 tank engine. The dollar-cost banding approach, however, allowed inventory managers to stock a mission-critical item with only three requests, rather than nine. The Army has credited this concept with decreasing customer wait time and increasing equipment readiness. The Repair Cycle Process Improvement Team strives to improve the Army’s maintenance processes through such initiatives as the equipment downtime analyzer, a computer system that links supply and maintenance performance to equipment readiness. The analyzer examines equipment maintenance operations and the supply system to identify problem areas as well as the functions that are working well in the maintenance process. This capability enables managers to quickly diagnose the root of the problems and to develop solutions to help maximize the future effectiveness of the maintenance process. For example, in one case, the apparent reason for a tank not being mission ready for 18 days was that the maintenance personnel were waiting for the supply system to provide a part. The equipment downtime analyzer revealed the following: (1) because the supply system initially provided the wrong part, a second part had to be ordered; (2) because maintenance personnel did not initially realize that the part was needed, a third part was ordered late; and (3) maintenance personnel finally decided, on day 18, to stop waiting for the part to be delivered by the supply system and took action to obtain it from another tank that was not mission ready in order to complete the maintenance process. 
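The dollar-cost banding decision rule described above can be sketched in a few lines. This is an illustrative simplification based only on the thresholds cited in this report (nine prior-year demands under the legacy criterion, three for a mission-critical item under banding); the function name, parameters, and logic are hypothetical and do not represent the Army's actual algorithm, which also weighs item cost.

```python
# Hypothetical sketch of the stocking decision described in the report.
# Under the legacy "one-size-fits-all" rule, an item needed nine demands
# in the prior year to be stocked, whether it was a 10-cent screw or a
# $500,000 tank engine. Dollar-cost banding, as characterized here, lowers
# the threshold to three for mission-critical items. All names and
# thresholds are illustrative assumptions.

def should_stock(requests_last_year: int, mission_critical: bool,
                 use_dollar_cost_banding: bool = True) -> bool:
    """Return True if the item qualifies for shelf stockage."""
    if use_dollar_cost_banding and mission_critical:
        return requests_last_year >= 3  # relaxed criterion for critical items
    return requests_last_year >= 9      # legacy one-size-fits-all criterion
```

For example, a mission-critical tank engine with four prior-year demands would be stocked under banding (`should_stock(4, mission_critical=True)`) but not under the legacy rule.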
Although the Army is generally meeting or exceeding its overall supply performance goal of having parts available 85 percent of the time when they are requested, the Army continues to experience critical spare parts shortages that affect equipment readiness. For example, in a July 2001 report on Army spare parts shortages, we identified 90 components or assemblies for the Apache, Blackhawk, and Chinook helicopters for which the Army was experiencing critical spare parts shortages. The Army began a new initiative, separate and apart from the Armywide initiatives, to take management action on individual critical spare parts shortages. However, because it is not a part of the Armywide initiatives, it is not clear how it will be effectively integrated with them to maximize the mitigation of critical spare parts shortages and improve readiness. The new Army initiative to address spare parts shortages that are most essential to equipment readiness, entitled the “Top 25 Readiness Drivers,” began in October 2002. For each of its 18 major combat systems, the Army, on an ongoing basis, has been identifying the top 25 components or spare parts that are key to the systems’ readiness. Of the total 450 spare parts the Army had identified as critical to equipment readiness in February 2003, 291, or 65 percent, were stocked below the required level. Twenty-nine percent, or 132, of these parts were in the Army’s lowest inventory category—those for which there is less than a 1½-month supply. Major commands report the inventory status of these spare parts to the Army Materiel Command, which in turn presents a consolidated report to the Army Deputy Chief of Staff for Logistics every 2 weeks. A review group headed by the Deputy Chief of Staff for Logistics initiates possible actions that can be taken to mitigate the most severe spare parts shortages among the top spare parts or components. 
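The biweekly roll-up described above — counting critical parts stocked below their required level and those with less than a 1½-month supply — could be sketched as a simple aggregation. The data structure, function name, and the months-of-supply calculation are assumptions for illustration; the Army's actual reporting format is not specified in this report.

```python
# Illustrative sketch of the "Top 25 Readiness Drivers" status roll-up:
# for each critical part, compare on-hand stock with the required level,
# and flag parts whose on-hand quantity covers less than 1.5 months of
# demand (the report's lowest inventory category). Field layout and
# thresholds are hypothetical.

def summarize(parts):
    """parts: iterable of (on_hand, required_level, monthly_demand) tuples.

    Returns (count below required level, count with < 1.5-month supply).
    """
    below_required = 0
    lowest_category = 0
    for on_hand, required, monthly_demand in parts:
        if on_hand < required:
            below_required += 1
        if monthly_demand > 0 and on_hand / monthly_demand < 1.5:
            lowest_category += 1
    return below_required, lowest_category

# Three sample parts: two are below required level, two hold less than
# a 1.5-month supply.
sample = [(10, 40, 20), (50, 40, 10), (5, 30, 60)]
```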
This new Army initiative is a step in the right direction to address critical spare parts shortages; however, the extent to which it will mitigate those shortages and improve equipment readiness remains unclear. The initiative’s effectiveness may be limited because its efforts and results are not linked to or coordinated with the goals and metrics of the Army’s other initiatives as part of an overall approach to mitigating critical spare parts shortages in the future. While the Army has the means to link funding to a corresponding level of readiness and reports this information in budget justification documents (see app. II), it does not report to decisionmakers such as Congress how additional funding requests for spare parts might affect readiness. The Office of the Secretary of Defense has recommended that the services provide such information when requesting additional funds in the future. The Army has reported that its models can estimate the impact of investments in spare parts on supply availability. However, because various other factors such as maintenance capacity and training requirements affect equipment status, the models can only estimate the impact of the additional investment on weapon system readiness. The Army Materiel Systems Analysis Activity uses the Supply Performance Analyzer Model and the Selected Essential-Item Stockage for Availability Method Model to determine the investment needed to reach a weapon system’s desired supply availability rate. Information from these models has been supplied to individual units to assist in inventory investment decisions. In addition, the Army used an outside consultant to analyze the impact additional investment in spare parts would have on readiness. 
For example, to support a briefing to the Army Vice Chief of Staff in March 2001, the Logistics Management Institute completed an analysis for the Army showing that an additional $331 million for spare parts would increase the mission-capable rate for the Apache and Blackhawk helicopters by 2.6 percent. According to Army officials, the correlation between additional investments in spare parts and readiness is not exact because other factors such as maintenance capacity and training requirements impact readiness. Despite having the means to determine how additional funding might affect readiness, the Army does not provide such analyses to Congress as part of its funding requests. For example, in the justification for the fiscal year 2002 budget, the Army requested and received $250 million to purchase additional spare parts. Moreover, the Army sent correspondence to the House Committee on Armed Services showing that an additional $675 million was needed for spare parts during fiscal year 2002. However, in neither case did the Army provide analysis to Congress showing how the additional funding might affect readiness. The June 2002 Financial Management Regulations provided a template for reporting the funds to be spent on spare parts by weapon system as part of the budget submission. The benefit of reporting such a link was cited in an August 2002 Office of the Secretary of Defense study that recommended that future requests for additional funds to increase spare parts inventories be justified in budget documents submitted to Congress based on the corresponding increase in weapon systems readiness. The Army’s Transformation Campaign Plan serves as a mechanism to transform the Army’s forces from its present posture to a more strategically deployable and responsive force. The plan prescribes specific goals and milestones to support this transformation process, but it lacks specific focus on mitigating spare parts shortages. 
In addition, the Armywide initiatives to improve the procurement and repair of spare parts, inventory management, and supply operations do not focus on mitigating critical spare parts shortages. Without a strategy or Armywide initiatives focused on the mitigation of critical spare parts shortages and their impacts on equipment readiness, the Army cannot ensure that it has appropriately addressed shortages in those parts that would give it the greatest readiness return. Furthermore, while some of the Army’s logistics initiatives might increase the availability of spare parts in general, the lack of specific and effective measures of performance will limit the Army’s ability to ascertain progress in mitigating spare parts shortages that are critical to equipment readiness. Finally, the Army has the means to determine how funding might affect parts availability and equipment readiness as part of its stewardship and accountability for funds, but it has not provided this information to Congress when requesting additional funding. Without such information linking additional spare parts funding to readiness and providing assurance that investments are based on the greatest readiness returns, Congress cannot determine how best to prioritize and allocate future funding. We recommend that the Secretary of Defense direct the Secretary of the Army to (1) modify or supplement the Transformation Campaign Plan or the Armywide logistics initiatives to include a focus on mitigating critical spare parts shortages, with goals, objectives, milestones, and quantifiable performance measures such as supply availability and readiness-related outcomes, and (2) implement the Office of the Secretary of Defense recommendation to report, as part of budget requests, the impact of additional spare parts funding on equipment readiness, with specific milestones for completion. 
In written comments on a draft of this report, DOD generally concurred with the intent of both recommendations, but not with the specific actions we recommended. DOD’s written comments are reprinted in their entirety in appendix III. In concurring with the intent of our first recommendation, DOD expressed concern that because spare parts shortages are a symptom of imperfect supply management processes, its improvement plans must focus on improving these processes rather than on the symptoms. According to DOD, the Army’s Transformation Campaign Plan correctly focuses on transforming the Army’s forces and equipment from its present posture to a more strategically deployable and responsive objective force. Furthermore, DOD also stated that the Armywide logistics initiatives correctly focus on improving procurement, repair of spare parts, inventory management, and supply operations. DOD also noted that it has taken, or is taking, several actions. The “Top 25 Readiness Drivers” initiative, which addresses specific stock numbers that affect its major weapon systems, has been added to the metrics in the Army’s Strategic Readiness System. Milestones for logistics initiatives would be added to the Army’s Transformation Campaign Plan. Also, spare parts shortages will be tracked in the Strategic Readiness System and logistics initiatives will be tracked in the Transformation Campaign Plan. Therefore, DOD does not agree that the Army needs to modify its Transformation Campaign Plan or the Armywide logistics initiatives to focus on spare parts shortages. We do not believe that these actions alone are sufficient to meet our recommendation. We endorse the Army’s efforts to add related metrics to its Strategic Readiness System and milestones for its logistics initiatives to the Transformation Campaign Plan. 
Further, our report recognizes that the Army’s plan focuses on improving the Army’s force transformation efforts and that improving logistics processes is part of the solution to mitigating spare parts shortages. However, the intent of our recommendation was for the Army to include in its Transformation Campaign Plan or servicewide initiatives a focus on mitigating critical spare parts shortages. As our report clearly points out, without a focus on mitigating critical spare parts shortages with goals, objectives, and milestones included in the strategic plan or Armywide initiatives, we believe there is an increased likelihood that the Army’s progress will be limited because its efforts may be ineffective or duplicative in mitigating spare parts shortages that are critical to equipment readiness. Therefore, we believe implementation of our recommended actions is necessary to ensure improved readiness for legacy and future weapon systems. In concurring with the intent of our second recommendation, DOD stated that the Army would begin implementing the recommendation by providing mission-capable rates during the upcoming mid-year budget review consistent with the June 2002 updated budget exhibit in the Financial Management Regulation. DOD also states that the Army will fully comply with the August 2002 inventory management study reporting recommendation when the required data become available. We support the Army’s effort to report mission-capable rates for its weapon systems. However, we are concerned that the Army has not set a deadline for fully implementing the recommendation. Providing this valuable information to Congress in a timely manner is an important step in placing a priority on efforts needed to mitigate spare parts shortages as part of the Army’s overall stewardship of funds and accountability for making spare parts investment decisions that provide a good readiness return. 
We have therefore modified our second recommendation to include a provision that the Army establish milestones for fully implementing the recommendation from the August 2002 inventory management report. To determine whether the Army’s strategic plans address mitigating spare parts shortages, we obtained and analyzed Army planning documents that pertained to spare parts or logistics. We focused our analysis on whether these strategic plans addressed spare parts shortages and included the performance plan guidelines identified in GPRA. We interviewed officials in the Office of the Army Deputy Chief of Staff for Logistics, and the Army Transformation Office to clarify the content and linkage of the various strategic plans. To determine the likelihood that Army initiatives will achieve their intended results and contribute to the mitigation of spare parts shortages to improve readiness, we obtained and analyzed service documentation and prior GAO reports on major management challenges and program risks and on the Army’s major initiatives that relate to spare parts or supply support. We focused our analysis on whether the initiatives addressed spare parts shortages and the need for quantifiable and measurable performance targets as identified in GPRA. We also interviewed officials in the Supply Policy Division, Army Deputy Chief of Staff for Logistics; Army Materiel Command; Army Aviation and Missile Command; Army Tank and Automotive Command; and Combined Arms Support Command. We obtained and analyzed Army data pertaining to spare parts availability, spare parts back ordered, and specific spare parts that are affecting equipment readiness. To determine the extent to which the Army identifies how additional investments in spare parts affect supply support and readiness, we obtained and analyzed documentation on the Army’s needs for additional funding to purchase spare parts. 
We analyzed the Army’s budget justification for the funding needed for spare parts for the years 2004 and 2005. We obtained the results of prior analyses showing how additional funding might affect readiness. However, we did not independently validate or verify the accuracy of the Army’s models that show the relationship between funding, supply performance, and readiness. We also visited and interviewed officials at the Army Materiel Systems Analysis Activity and considered DOD’s recommendations in its August 2002 Inventory Management Report. We performed our review from August 2002 through March 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, and other interested congressional committees and parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8365 if you or your staff have any questions concerning this report. Major contributors to this report are included in appendix IV. In addition to those named above, Robert L. Coleman, Alfonso Q. Garcia, Susan K. Woodward, Robert K. Wild, Cheryl A. Weissman, Barry L. Shillito, and Charles W. Perdue also made significant contributions to this report.
Prior reports and studies have identified major risks in the Department of Defense's (DOD) management, funding, and reporting of spare parts spending programs. Spare parts shortages adversely affect the U.S. Army's operations and can compromise the readiness of weapon systems. To address these issues, Congress has fully funded DOD's requests for spare parts spending and in some instances increased funding for additional spare parts. Yet, the Army continues to experience spare parts shortages. Congress requested that GAO evaluate (1) the Army's strategic plans for reducing spare parts shortages, (2) the likelihood that key initiatives will reduce such shortages, and (3) the Army's capability to identify the impact on readiness of increased investments for spare parts. The Army's logistics strategic plan provides strategic goals, objectives, and milestones for force transformation efforts, but does not specifically address the mitigation of critical spare parts shortages. The Army's Transformation Campaign Plan, published in April 2001, serves as a mechanism to move the Army from its present posture to a more strategically deployable and responsive force. The plan prescribes specific goals and milestones to support the transformation process. However, it lacks objectives and performance measures it could use to show progress in mitigating critical spare parts shortages. The Army's six servicewide logistics initiatives are aimed at enhancing readiness by improving internal business processes that would increase supply availability. However, they were not designed to mitigate spare parts shortages. These processes include those that acquire, repair, and distribute spare parts. Recognizing that the Armywide initiatives were not designed to specifically focus on mitigating critical shortages, the Army recently started a new initiative to address individual spare parts shortages that affect key weapon systems readiness. 
However, this initiative is not part of the Armywide logistics improvement efforts, and therefore it is not coordinated with other initiatives and its results are not linked with the overall goals and performance measures. Absent this coordination and linkage, any systemic problems that the initiative identifies may not be elevated to the Armywide initiatives for resolution, and its benefits may be limited to improving the availability of only a few parts. The Army has the means to link funding to weapon system readiness, and reports this in its budget justification documents, but it does not report to Congress how additional investments in spare parts would increase readiness. The Army Materiel Systems Analysis Activity can use models to indicate the investment needed to reach a desired level of supply availability, along with the possible corresponding increase in readiness, and it has provided such information to Army units. Additionally, the Army has used consultants to project the impact of additional funding on the readiness of specific weapon systems and provided this to the Army Vice Chief of Staff. For example, the Logistics Management Institute projected that an additional $331 million investment in spare parts would increase the overall readiness of the Apache and Blackhawk helicopters by approximately 2.6 percent.
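As an illustration of the kind of funding-to-readiness relationship such models capture, the following is a purely hypothetical toy curve (not the AMSAA or Logistics Management Institute model; `max_gain` and `scale` are made-up calibration parameters) chosen so that a $331 million investment yields roughly the 2.6-point readiness gain cited above:

```python
import math

def readiness_gain(investment_millions, max_gain=5.0, scale=451.0):
    """Hypothetical saturating curve: readiness-rate gain (percentage
    points) from a spare-parts investment, with diminishing returns.
    max_gain and scale are illustrative calibration values, not Army data."""
    return max_gain * (1.0 - math.exp(-investment_millions / scale))

print(round(readiness_gain(331), 1))  # about 2.6 percentage points
```

A saturating form is a natural sketch here because supply availability, and hence readiness, cannot rise without bound as spending increases; each additional dollar buys progressively smaller gains.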
Over 87 million children in the United States participate in activities provided by child- and youth-serving organizations each year. Many of these organizations have formalized and structured environments, such as schools, which children are required to attend. Others are voluntary, extracurricular activities, such as clubs and sports activities. Known instances of child abuse in these out-of-home settings, although a relatively small percentage of child abuse overall, have drawn public attention and generated parental alarm about the safety of children in such settings. In 1996, the Department of Justice issued a report that examines the most serious crimes against children. For example, the report provides analyses of more than 35,000 cases of child murder that occurred between 1976 and 1994. Criminal history records checks are one of several methods of predicting the suitability of individuals seeking paid or volunteer positions with organizations that interact with children. National fingerprint checks can help identify criminal histories of individuals convicted of a crime anywhere in the United States who are seeking a volunteer or paid position in any state. The checks of criminal history records for civil (noncriminal justice) purposes have long been a part of the FBI’s workload. Public Law 92-544, which was enacted in 1972 and preceded NCPA, authorizes the FBI to exchange identification records with officials of state and local governments for purposes of licensing and employment—if such exchanges are also authorized by a state statute that has been approved by the U.S. Attorney General. NCPA did not give the states any new access to national fingerprint-based background checks and did not require the states to pass laws. Rather, NCPA highlighted the need for such background checks and encouraged the states to pass appropriate legislation.
Thus, under NCPA, background checks must be handled in accordance with the requirements of Public Law 92-544. For example, each state that wants the FBI to conduct national criminal history records checks of child care or youth service workers must have in place a law defining what categories of jobs or positions require the background checks. It is left up to each state to decide how broadly to extend the background check requirement. But, whatever the scope, there must be a state law requiring fingerprinting of the employee or volunteer and allowing the FBI to conduct criminal history background checks of persons in or applying for the specified categories of jobs or positions. Further, NCPA specifies that the criminal records search must be based upon fingerprints. Thus, each request for a criminal history background search must be accompanied by a set of 10-print fingerprint cards. These submissions must be made by (and the results returned to) a designated governmental agency, such as a state Department of Education, Department of Social Services, or a state public safety or police department. NCPA requires that these designated agencies be responsible for determining whether the provider has been convicted of, or is under pending indictment for, a crime that bears upon the provider’s fitness to have responsibility for the safety and well-being of children. However, the act does not provide a specific list of disqualifying offenses; rather, each state must make these determinations. When enacted in 1993, NCPA specified that fees collected by the FBI and authorized state agencies, respectively, for fingerprint-based background checks of volunteers with a qualified entity could not exceed the actual cost. 
The provision was amended in 1994 to specify that the fee for these volunteers could not exceed $18 or the actual cost, whichever is less. Also, the act specifies that the states shall ensure that fees to nonprofit entities for fingerprint-based background checks do not discourage volunteers from participating in child care programs. According to the FBI, advances in electronic communications, expanding legislative mandates, and increased sophistication of law enforcement technology are expected to double the number of all types of criminal history information requests by the end of the century. IAFIS, which has been under development since the early 1990s, is being designed to provide more efficient identification services by, among other means, eliminating the need to transport and process paper fingerprint cards. According to the FBI, 37 of the 50 states have enacted legislation authorizing use of national fingerprint-based checks of criminal history records for purposes of checking applicants for paid or volunteer positions involving interaction with children. The 5 states we selected for review are among these 37. Applicable statutes in the five states vary considerably in scope or coverage (see app. II). For example, Tennessee’s statute covers a broad range of positions or work settings involving interaction with children, while Virginia’s statutes cover only selected school districts and juvenile residential facilities. Also, some statutes cover new applicants only, while other statutes cover both current employees and new applicants. The statutes also differ regarding whether national checks are required or permitted. For example, the Florida child care-related statutes shown in appendix II require national checks, whereas Tennessee’s statutes permit but do not require national checks.
Three of the five states (California, Tennessee, and Texas) have authority to request national checks of volunteers at nonprofit youth-serving organizations, such as the Boy Scouts of America and Big Brothers/Big Sisters of America. Even so, these states have made limited use of national fingerprint-based checks for such volunteers. For example, in California, although all nonprofit youth-serving organizations have authority to request national checks, only 12 checks had been requested from January through June 1996. In Tennessee, only two such checks were requested in 1995. In Texas, four nonprofit youth-serving organizations are authorized to request national checks. From August 1, 1995, through July 17, 1996, a total of 98 national checks were requested, all by one local affiliate of the Big Brothers/Big Sisters of America. Officials at most of the nonprofit youth-serving organizations we contacted suggested several reasons why the use of national checks of volunteers has been limited. One reason suggested was that the states’ statutes permit rather than require such checks. The officials commented that the fact that state statutes permit rather than require national fingerprint-based checks of volunteers may derive from concerns about the fees for such checks. According to these officials, the use of national background checks may also have been limited because the FBI’s response or turnaround time can be weeks or months, which may be unacceptable for many organizations that use volunteers for seasonal or part-time positions. In Texas, for instance, officials at the Volunteer Center of Dallas County told us that state name-based searches generally meet their clients’ needs because the fee ($4) is reasonable and the results are available in a week or less. Thus, Center officials told us that they do not plan to push for legislation requiring national fingerprint checks.
Another reason for the limited use of fingerprint-based background checks of volunteers may be that some groups are unaware of their authorization to request them. For example, two of the youth-serving organizations that we contacted in California were not aware that they are allowed to request national fingerprint-based background checks. A complete check of criminal history records has both FBI and state agency components. At the time of our review, the FBI’s fee for national fingerprint-based background checks was $18 for volunteers and $24 for all others. The fee amount for volunteers equates to the FBI’s reported costs. That is, according to expenditure and workload data provided us by the FBI, the Bureau projected that actual costs would average $18 for each fingerprint-based background search of criminal history records during fiscal year 1996. Of the five states we studied, only California had recently (in 1996) calculated its actual costs ($32.62) for conducting a fingerprint-based check of state records. This reported cost figure was considerably higher than the NCPA-imposed fee cap for volunteers of $18. However, California was not charging a fee for fingerprint-based checks of volunteers at nonprofit youth-serving organizations. Florida Department of Law Enforcement officials told us that the state does not perform fingerprint-based checks of criminal history records for purposes of licensing and employment. These officials explained that the state’s computer system for fingerprint searches was not sufficient to handle such requests. Thus, our questions regarding the fees for and the actual costs of state fingerprint checks were not applicable to Florida. By state statute, Tennessee’s fee structure matches that of the FBI; thus, the state’s fee is $18 for volunteers and $24 for all others. Tennessee Bureau of Investigation officials told us that given this statutory basis for setting fees, the state has not attempted to calculate its actual costs.
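The NCPA fee provision for volunteers reduces to a simple minimum of the statutory cap and the agency's actual cost. A minimal sketch, for illustration only:

```python
def volunteer_fee_cap(actual_cost, statutory_cap=18.00):
    """Maximum fee a state agency could charge a volunteer under NCPA as
    amended in 1994: the lesser of the $18 statutory cap or the agency's
    actual cost of conducting the fingerprint-based check."""
    return min(statutory_cap, actual_cost)

# California's reported 1996 actual cost was $32.62, so a volunteer fee
# there (if one were charged) would be capped at the statutory $18.
print(volunteer_fee_cap(32.62))
# Hypothetical agency whose actual cost falls below the cap: actual cost binds.
print(volunteer_fee_cap(11.50))
```

This is why knowledge of actual costs matters: an agency that does not know its cost per check cannot tell which of the two limits applies.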
Texas’ fee is $15 for fingerprint checks of applicants, whether volunteers or nonvolunteers. This fee amount, according to Texas Department of Public Safety officials, was first established in 1990 on the basis of a study that calculated actual costs totaling $11.42 per applicant. However, the study recommended a fee of $15 to ensure that Texas was consistent with other states’ fees for similar services. Virginia’s fee is $13 for fingerprint checks of applicants, whether volunteers or nonvolunteers. Virginia Department of State Police officials told us that this fee amount has been in effect for several years and, at the time of implementation, was set to match the FBI’s then-current fee. Without knowledge of actual costs, states that charge fees cannot ensure compliance with federal law. Specifically, as amended in 1994, NCPA provides that fees collected by authorized state agencies for fingerprint-based background checks of volunteers may not exceed $18 or the actual cost, whichever is less. In the states we studied, because nonprofit youth-serving organizations had requested relatively few national fingerprint-based checks on volunteers, the applicable statutes and related fees do not appear to have negatively affected volunteerism. However, officials at the various nonprofit organizations we contacted were concerned about state and FBI fees. Many of these officials commented that the fees were too high and, thus, if state laws were changed to require fingerprint checks, the number of volunteers and/or the scope of program services probably would be reduced. On the basis of discussions with officials at various national and local nonprofit entities, we identified only two studies—completed in 1994 and 1995, respectively—that had attempted to assess the potential effects of background check fees on volunteerism. Both studies were conducted or sponsored by the Boy Scouts of America.
The respondents to both studies generally endorsed the concept that adult volunteers should be required to have a background check, but the respondents also indicated that personal cost was a factor influencing their willingness to maintain their volunteer status. Due to sampling and other methodological limitations, however, neither study can be used to draw conclusions about the overall scouting volunteer population. Also, the reported results are speculative because reactions were solicited regarding fees not actually in place. A minority view was presented by officials at 2 of the 20 nonprofit organizations we contacted in the study states (see table I.1 in app. I). These officials—who represented entities located in California—commented that the current fees for national checks were reasonable and easily could be borne by applicant volunteers. Here again, however, these views are speculative because, at the time of our review, neither of the two groups had requested any national fingerprint-based background checks of volunteers. In the opinion of officials at the organizations we contacted, the authority to request national fingerprint-based checks is useful irrespective of the hit rates. These officials emphasized that although it is not quantifiable, the deterrent effect of the prospect of national background checks is significant—and, indeed, is a factor perhaps more important than any other aspect of such checks. Where applicable, for example, experienced officials told us of instances where individuals reconsidered their interest or withdrew their applications after learning that criminal history records would be checked. These officials acknowledged that such background checks may also deter a few qualified applicants who object to such checks due to privacy or other concerns. 
On balance, however, the officials said that the deterrent effect of national background checks was largely positive, that is, unsuitable applicants were being deterred from applying for child care-related positions. Further, officials at most of the organizations we contacted said that national fingerprint-based checks can be an important supplement to traditional screening tools, such as personal interviews, reference queries or follow-ups, and checks of local and state records. According to these officials, in screening applicants, child care entities should not rely solely upon checks of criminal history records—whether national, state, or local—because such records may be incomplete or even nonexistent for many unsuitable applicants. On the other hand, national fingerprint-based background checks may be the only effective way to readily identify the potentially worst abusers of children, that is, the pedophiles who change their names and move from state to state to continue their sexually perverse patterns of behavior. Further, national checks can identify out-of-state criminal histories involving certain offenses that, although not directly involving child abuse, may nonetheless be important in considering an applicant’s suitability. These offenses include, for example, offenses involving drug possession or trafficking, assault or other violent acts, and theft—and even the offense of driving while intoxicated, which may have particular relevance in checking prospective applicants for positions involving transportation of children. By focusing on selected job positions, organizations, or local jurisdictions within each state, we were able to identify situations clearly showing the usefulness of national fingerprint-based checks. For example: An individual moved from Texas to California and obtained a teaching position in a special education program.
In conducting a national background check in 1996, as requested by the California Commission on Teacher Credentialing, the FBI identified records showing that the individual had been convicted of sexual battery (rape) in Florida. In one school district in Florida, national fingerprint checks of noninstructional staff hired in 1995 resulted in the firing of at least seven individuals. The search of criminal history records showed that each individual had been convicted for a serious offense, such as drug possession or trafficking or aggravated battery. Fingerprint searches of prospective foster parents in Tennessee during the period October 1995 through May 1996 showed that 120 (or 9.3 percent) of 1,293 applicants had criminal felony records. Of the 120 criminal history records, 58 involved out-of-state records, which were not identifiable based solely upon a search of Tennessee records. In Texas, a local nonprofit youth-serving organization requested a total of 98 national fingerprint-based checks from August 1, 1995, through July 17, 1996. One applicant was rejected as a volunteer, in part, because the criminal history records showed a drug possession conviction. In Virginia, from July 1993 through June 1996, one county requested approximately 3,800 state and national fingerprint checks on new-hire school employees. A total of 111 individuals were subsequently fired on the basis that they had lied on their applications (claiming no criminal conviction). Appendix IV presents more details about these and related examples. However, due to an absence of reporting requirements, we were unable to obtain comprehensive statistics on the use and results of national fingerprint background checks requested by applicable groups within the five states. In 1993, the FBI estimated that IAFIS development would extend into fiscal year 1998, with costs totaling about $520.5 million. 
In October 1995, the FBI revised its schedule and cost estimates, projecting that completion of system development would slip 18 months (to about June 1999) and that costs would increase by over 20 percent (to between $630 million and $640 million). In a March 1996 status report submitted to the Senate Appropriations Committee, the FBI acknowledged that problems with various components prompted a decision (in February 1996) to adopt a new approach for developing and deploying IAFIS, which may lead to further revision of schedule and cost estimates. The FBI’s next status report left its schedule and cost estimates unchanged. However, in December 1996, in commenting on a draft of our report, FBI officials told us that the IAFIS completion date had been revised to July 1999. The new approach for developing and deploying IAFIS—reflecting the February 1996 decision mentioned above—called for the incremental availability of certain functions earlier in the process, rather than offering all IAFIS services at the final completion date. This incremental approach consists of six distinct segments or “builds,” with sequentially targeted completion dates (see app. V). For purposes of NCPA-related national fingerprint-based background checks, initial state participation in IAFIS is targeted for October 1998. At that time, according to the FBI, a “small number” of other federal and state users will be selected to implement IAFIS capabilities on a trial basis, which would provide the FBI an opportunity to test the system in an operational environment before accepting all other users in July 1999. State officials in California, Florida, Tennessee, Texas, and Virginia told us that they are aware of the equipment and software specifications and compatibility criteria necessary for interfacing with IAFIS and that their respective states plan to use the system.
California and Florida plan to electronically process applicant fingerprint-based background checks when the FBI allows selected states to test this process, currently scheduled for October 1998. Tennessee and Virginia plan to interface with IAFIS whenever the system is available, which may be the system’s planned final completion date of July 1999. Texas plans to interface with IAFIS in 2000 after making necessary equipment purchases. On December 6, 1996, we met with officials from the Department of Justice, including the Senior Counsel to the Director, Executive Office of the United States Attorneys, and representatives from the FBI’s Criminal Justice Information Services to obtain comments on a draft of this report. Agency officials commented that the conclusions reached in the report are reasonable and that the report is sound and consistent with Department of Justice studies, reports, and other information on the subject of fingerprint-based background checks. Also, in addition to suggesting that the discussion of certain background topics be expanded, the officials provided technical comments and clarifications. We have incorporated these suggestions, comments, and clarifications where appropriate in this report. We are sending copies of this report to the Senate Committee on the Judiciary; the Chairman and the Ranking Minority Member, Subcommittee on Crime, House Committee on the Judiciary; the Attorney General; the Director, FBI; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix VI. If you have any questions about this report, please contact me on (202) 512-8777. By letter dated January 3, 1996, the then-Chairman of the Subcommittee on Youth Violence, Senate Committee on the Judiciary, requested that we review certain implementation issues under the National Child Protection Act of 1993 (NCPA) (P.L. 
103-209), as amended by section 320928 of the Violent Crime Control and Law Enforcement Act of 1994 (P.L. 103-322). On the basis of the requester’s specific interest, we focused our work on children, even though NCPA’s provisions also apply to workers who have responsibility for the safety and well-being of the elderly and the disabled. As agreed with the requester, we conducted work in five states (California, Florida, Tennessee, Texas, and Virginia) to address the following questions: To what extent have selected states enacted statutes authorizing national background checks of child care providers? Also, what fees are charged for background checks of volunteers, and how do these fees compare with the actual costs in these states? What effects have these states’ laws and related fees had on volunteerism? For instance, have the laws and fees discouraged volunteers from participating in child care programs at nonprofit entities? Have selected state agencies and other organizations found national checks a useful screening tool? More specifically, for selected job or position categories in selected jurisdictions, how often have fingerprint-based background checks identified individuals with criminal histories? What is the status of the Integrated Automated Fingerprint Identification System (IAFIS) being developed by the Federal Bureau of Investigation (FBI), and what are the selected states’ plans for using the system when it becomes available? Initially, to obtain a broad understanding of NCPA requirements and implementation, state laws authorizing background criminal history checks, and available statistics and related information regarding child abuse, we contacted relevant public and private organizations that could provide national perspectives. At the federal level, for example, the FBI is responsible not only for developing IAFIS but also for reviewing and approving state laws that authorize national fingerprint-based searches of criminal history records.
Another federal agency we contacted is the Department of Health and Human Service’s National Center on Child Abuse and Neglect. The Center administers the National Clearinghouse on Child Abuse and Neglect Information. A January 1995 report (Effective Screening of Child Care and Youth Service Workers) by the American Bar Association Center on Children and the Law also provided useful overview perspectives. Among other information, for example, the report presented the results of a national survey of the screening practices of approximately 3,800 child- and youth-serving agencies. Further, we contacted the National Collaboration for Youth, which is an affinity group of the National Assembly of National Voluntary Health and Social Welfare Organizations. The Collaboration has published guidance entitled Principles for Model State Legislation Implementing the National Child Protection Act (January 15, 1995). To obtain more detailed information about specific states’ automated fingerprint systems and statutory criminal history check provisions, as well as the views and concerns of organizations that recruit or use volunteers, we contacted relevant public agencies and at least three nonprofit entities in each of five judgmentally selected states—California, Florida, Tennessee, Texas, and Virginia. Generally, these selections were among the states suggested to us by officials at the public and private organizations mentioned above, that is, knowledgeable officials with national perspectives. Among other considerations, we selected states to reflect a range of (1) laws authorizing background checks and (2) experiences with automated fingerprint services. Also, in addition to geographical coverage, some specific factors we considered in selecting these five states are as follows: California leads all states in number of youth under age 18, according to U.S. Bureau of the Census data. 
Florida, according to Census Bureau data, is the fourth most populous state in terms of youth under age 18. Also, Florida law requires that instructional and noninstructional personnel hired to fill positions involving direct contact with students shall, upon employment, file a complete set of fingerprints. Tennessee was of specific interest to the requester. Under Tennessee law, effective January 1994, all persons applying for work (as a paid employee or as a volunteer) with children at a child welfare agency or with a religious, charitable, scientific, educational, athletic, or youth-serving organization may be required to (1) submit a fingerprint sample for criminal history background checks, or (2) attend a comprehensive youth protection training program, or (3) both submit a fingerprint sample and attend training. Texas, according to Census Bureau data, is the second most populous state in terms of youth under age 18. Also, the Volunteer Center of Dallas County (which recruits volunteers for more than 100 nonprofit entities located in the North Texas area) is the largest such centralized referral agency in the nation. According to the Virginia Department of State Police, as of 1996, 42 of the state’s 135 public school district boards are required by state law to have school employee applicants undergo a national fingerprint check. In each of these states, we contacted the public agency responsible for criminal history records and/or fingerprint identification services to determine automation status and plans for connecting with IAFIS. Also, to determine trends in the number of reported child abuse cases, we contacted each state’s applicable social services agency. Further, in these 5 states, we contacted a total of 20 nonprofit entities—at least 3 in each state.
Generally, we tried to ensure that we selected a variety of nonprofit entities on the basis of such factors as size (including some large, nationally affiliated entities as well as some smaller, independent local entities); gender of the youth served (boys, girls, or both); and functions (e.g., sporting, educational, and religious activities). Table I.1 lists all of the public (federal and state) and private organizations we contacted. Also, further details about the scope and methodology of our work regarding each of the objectives are presented in separate sections below.

Department of Health and Human Services:
—National Center on Child Abuse and Neglect
Department of Justice:
—Bureau of Justice Statistics
—Criminal Division, Child Exploitation and Obscenity Section
—FBI, Criminal Justice Information Services Division
—Office of Juvenile Justice and Delinquency Prevention
American Bar Association Center on Children and the Law (Washington, DC)
Big Brothers/Big Sisters of America (Philadelphia, PA)
Boy Scouts of America (Irving, TX)
National Assembly of National Voluntary Health and Social Welfare Organizations (Washington, DC)
National Association of State Directors for Teacher Education and Certification (Seattle, WA)
National Center for Missing and Exploited Children (Arlington, VA)
National Center for the Prosecution of Child Abuse (Alexandria, VA)
National Collaboration for Youth (Washington, DC)
National Committee to Prevent Child Abuse (Chicago, IL)
National Conference of State Legislatures (Denver, CO)
SEARCH Group, Inc. (Sacramento, CA)

California public organizations (Sacramento):
California Commission on Teacher Credentialing
Department of Justice
—Bureau of Criminal Identification and Information
Department of Social Services
—Adoptions Services Bureau
—Community Care Licensing Division, Criminal Records Clearance Bureau
Children’s Receiving Home of Sacramento
Sacramento Court Appointed Special Advocate Program
Sacramento Student Buddy Program

Florida public organizations (Tallahassee):
Bureau of Teacher Certification
Commission on Community Service
Department of Health and Rehabilitative Services, Division of Children and Family Services
—District Two
—Florida Abuse Registry
Department of Law Enforcement, Division of Criminal Justice Information Systems
Leon County School District
American Red Cross, Capital Area Chapter (Tallahassee)
Boy Scouts of America, South Florida Council (Miami)
East Hill Baptist Church (Tallahassee)
Florida Recreation and Park Association, Inc. (Tallahassee)
Volunteer Center of Tallahassee, Sponsored by the United Way of the Big Bend (Tallahassee)

Tennessee public organizations (Nashville):
Tennessee Bureau of Investigation
—Information Systems Division
—Records and Identification Unit
Department of Human Services
Boy Scouts of America, Middle Tennessee Council (Nashville)
Buddies of Nashville (an affiliate of Big Brothers/Big Sisters)
Volunteer Center of Nashville

Texas public organizations (Austin):
Department of Protective and Regulatory Services
Texas Education Agency
Department of Public Safety
Big Brothers/Big Sisters of America (Austin)
North Texas State Soccer Association (Carrollton)
St. Elizabeth Ann Seton Catholic Church (Plano)
Tejas Girl Scout Council, Inc. (Dallas)
Volunteer Center of Dallas County (Dallas)
YMCA of Metropolitan Dallas (Dallas)

Virginia public organizations:
Chesterfield County Public Schools (Chesterfield County)
Department of Social Services (Richmond)
—Child Abuse and Neglect Information Systems Division
—Foster Care and Adoptions
—Office of Volunteerism
Department of State Police, Criminal Records Division (Richmond)
Department of Youth and Family Services, Background Investigations Unit (Richmond)
Henrico County Public Schools (Henrico County)

To identify applicable legislation, we contacted the FBI’s Criminal Justice Information Services Division, which is responsible for approving state statutes that authorize child care-related organizations to request national fingerprint-based background checks. The FBI provided us its list of applicable state statutes, which were approved as of March 1996. In reference to the five study states, we verified the accuracy and completeness of the FBI’s list by contacting appropriate officials in each state and by reviewing each statute. Also, in reviewing the statutes, we looked for similarities and differences in terms of the various positions or work settings covered and whether national background checks were mandatory or simply permitted. Further, we contacted the FBI and applicable law enforcement agencies in the five selected states to determine their fee policies and amounts. During these contacts, we inquired whether the respective jurisdiction’s fees for background checks differentiated, for example, between for-profit and nonprofit entities and between paid employees and volunteers. Further, at the FBI and in the five state jurisdictions, we inquired about the availability of records, studies, or formulas showing how fees compare to the actual costs of conducting a background check. We reviewed available data on actual costs, but the scope of our work did not constitute a financial audit of costs.
To identify whether any nonprofit entities have studied or self-reported on the effects of criminal history background check laws and the related fees applicable to volunteers at their organizations, we interviewed officials at (1) the National Collaboration for Youth; (2) the headquarters of two member organizations of the Collaboration, i.e., Boy Scouts of America and Big Brothers/Big Sisters of America; and (3) at least three nonprofit entities in each of the five selected states. Where applicable and where available data permitted, we reviewed the scope and methodology of the studies identified by these contacts. To supplement the findings of any available studies regarding whether fees discourage volunteers from participating in child care programs, we obtained opinions, anecdotes, and other pertinent information from officials at the various national and local nonprofit entities contacted. The interview data—opinions and related information—are neither projectable to nor representative of all nonprofit entities in the respective states. Regarding the usefulness of national fingerprint-based background checks of applicants for positions involving interaction with children, we obtained both quantitative data (e.g., number of applicants disqualified on the basis of criminal histories) and qualitative data (e.g., opinions offered by experienced managers responsible for personnel decisions at various organizations). In so doing, we first reviewed the five study states' criminal history background check laws, which are approved by the FBI and authorize national fingerprint-based background checks for paid or volunteer positions involving child care. Within each of the five states, for selected jobs, work settings, or jurisdictions, we obtained available data on the number of national fingerprint checks requested from the FBI and, in turn, the number of "hits" based on criminal histories.
Further, to determine whether these criminal history records were used or considered in actual personnel decisions, we followed up by contacting one or more applicable organizations at the local level. For example, if we were able to obtain hit data for schoolteachers in a given state, we contacted one or more local school districts to determine how many applicants were denied employment on the basis of the fingerprint-based background check. To obtain additional information about the usefulness of national fingerprint-based background checks of child care workers, we discussed the merits and problems of such checks with applicable public agency officials in each of the five selected states, as well as with officials of various national and local nonprofit organizations (see table I.1). Because our work covered only certain child care positions and locations within selected states, our findings may not be representative of statewide conditions in the respective state. Further, the findings cannot be projected to other states with similar positions because, among other reasons, state laws vary as to what constitutes a disqualifying crime. In determining the status of IAFIS, we focused on the FBI's implementation schedule by reviewing available planning documents and status reports prepared by the Bureau. We did not undertake a detailed systems review; that is, we did not evaluate the technical merits of the design configurations or the performance objectives. Similarly, in contacting relevant agencies in five states, we did not undertake detailed systems reviews. Rather, our primary inquiries involved the extent to which each state planned to participate in IAFIS. Thus, for example, we inquired whether each state's existing or planned automated fingerprint identification system was, or would be, compatible with the standards necessary to connect or interface with IAFIS.
Further, because effective background checks depend upon the availability of reliable records, we obtained information about the status of the five states’ efforts to automate their criminal history records, including final dispositions of cases. To obtain this status information, we contacted the applicable state agencies responsible for managing criminal history records, and we also reviewed the results of the most recent biennial survey conducted by the Department of Justice’s Bureau of Justice Statistics and SEARCH Group, Inc. On December 6, 1996, we met with officials from the Department of Justice, including the Senior Counsel to the Director, Executive Office of the United States Attorneys, and representatives from the FBI’s Criminal Justice Information Services to obtain comments on a draft of this report. Agency officials commented that the conclusions reached in the report are reasonable. The officials suggested that the discussion of certain background topics be expanded, and provided technical comments and clarifications. We have incorporated these suggestions, comments, and clarifications where appropriate into the report. The authority for the FBI to conduct criminal record checks for civil (noncriminal justice) licensing or employment purposes is based upon Public Law 92-544, enacted in 1972. Pursuant to the 1972 act, the FBI is authorized to exchange identification records with officials of state and local governments for purposes of licensing and employment if authorized by a state statute that has been approved by the U.S. Attorney General. The Access Integrity Unit within the FBI’s Criminal Justice Information Services Division is responsible for reviewing state statutes to determine if the statutes meet the applicable standards. The current standards used by the FBI in approving state statutes have been established by a series of memoranda issued by the Office of Legal Counsel, Department of Justice. 
Among other things, a state’s statutes must (1) specify the categories of jobs or positions covered; (2) require fingerprinting of the employee, licensee, or volunteer; and (3) authorize the use of FBI records for checking criminal history records of the applicant. NCPA did not give the states any new access to national fingerprint-based background checks and did not mandate the states to pass laws. Rather, NCPA highlighted the need for such checks of criminal history records and encouraged the states to pass appropriate legislation. According to Access Integrity Unit officials, as of March 1996, a total of 37 states had enacted one or more child care-related laws meeting the requisite criteria for the FBI to conduct fingerprint-based national checks of criminal history records. The following sections and tables summarize the applicable child care-related criminal history background check statutes for each of the states covered in our review—California, Florida, Tennessee, Texas, and Virginia. According to California officials, as early as the 1970s, California statutes authorized national fingerprint-based criminal history background checks for selected child care groups. Since then, as summarized in table II.1, California laws have been enacted or amended to either require or permit background checks on many categories of persons (including volunteers in some instances) applying to work with or provide care for children in California. Although several of the California statutes do not specifically refer to a national check, the statutes either require or permit a state background check. According to California Department of Justice officials, the state statutes under which agencies submit applicant fingerprints for national background checks have all been previously approved by the FBI. 
However, recognizing that some of these statutes need to be revised to meet current federal standards, the officials commented substantially as follows: Over the years, the requirements or standards for access to FBI criminal history record information have evolved from a series of memoranda issued by the U.S. Department of Justice's Office of Legal Counsel. Therefore, not all of California's previously approved statutes meet the current requirements or standards. However, because California was granted prior authorization, the FBI has indicated that it will accept all fingerprint submission categories that were previously approved. The California Department of Justice plans to advise relevant licensing and employment agencies that certain state statutes need to be revised to meet current standards for FBI access. According to statistics provided by the California Department of Justice, from July 1995 through June 1996, California requested 147,791 national fingerprint checks for applicant background checks, of which 50,434 were for peace officers and criminal justice employees. A California Department of Justice official told us that the majority of the remaining 97,357 national checks were submitted under child care-related statutes. For example, the California Commission on Teacher Credentialing requested 27,564 national checks for applicants from July 1995 through June 1996. As table II.2 shows, Florida statutes call for mandatory rather than permissive checks and cover a range of positions or work settings dealing with children. The statutes cover personnel in most child care-related settings, except youth-serving organizations. In calendar year 1995, the Florida Department of Law Enforcement received 270,435 requests for national fingerprint-based checks of noncriminal justice applicants.
Department officials, however, were unable to quantify exactly how many of these requests involved persons working in, or applying for, positions in child care-related settings. Officials at the Florida Department of Health and Rehabilitative Services said they requested around 100,000 national checks in 1995, and officials at the Florida Department of Education said they requested around 25,000 that year. Tennessee statutes authorizing national fingerprint-based criminal history background checks became effective January 1, 1994. As table II.3 shows, the laws cover a broad range of positions or work settings involving interaction with children in Tennessee. The statutes permit rather than require national checks and apply to new applicants only; that is, the statutes do not cover persons who were already in paid or volunteer positions as of January 1, 1994. Under these statutes, Tennessee agencies and organizations requested 1,522 national fingerprint checks for child care provider applicants during calendar year 1995. Only 2 of the 1,522 checks were for volunteer applicants. All the child care-related provisions of Texas law permit rather than require national checks. As table II.4 shows, many child care-related organizations in Texas are authorized to request national fingerprint-based background checks. However, in response to our inquiries, officials at the Texas Department of Public Safety said they were aware of very few child care-related national checks. Similarly, officials at the Texas Education Agency said they did not know if any school districts in the state had requested such checks. Officials at the Texas Department of Protective and Regulatory Services said the Department requested 1,195 national fingerprint-based checks in calendar year 1995, primarily on applicants providing child care.
The state’s general statutory authority for requesting the FBI to conduct national fingerprint checks is Texas Code Annotated 411.087. This statute permits authorized entities to obtain criminal history record information maintained by the FBI. At the time of our review, Virginia statutes authorizing national fingerprint-based criminal history background checks of persons interacting with children in Virginia were limited to public schools and juvenile residential facilities, as table II.5 shows. The statutes are mandatory rather than permissive and apply only to persons who accept a paid or volunteer position, as applicable, after the effective date of the respective statute. According to Virginia Department of State Police officials, in calendar year 1995, approximately 10,000 national fingerprint-based criminal history background checks were conducted for the state’s public schools, and approximately 3,000 national checks were conducted for the state’s juvenile residential facilities. Under NCPA, the fees collected by the FBI and authorized state agencies, respectively, for fingerprint-based records checks of volunteers with a qualified entity may not exceed $18 or the actual cost, whichever is less. Also, the act specifies that the states shall establish fee systems (for fingerprint background checks) that ensure that fees to nonprofit entities for background checks do not discourage volunteers from participating in child care programs. At the time of our review, the FBI’s fee for national fingerprint-based criminal history checks was $18 for volunteers and $24 for all others. Before NCPA was amended in 1994, the FBI’s user fee policy was to charge $24 for processing each applicant’s fingerprint card. Table III.1 shows FBI fees for conducting fingerprint-based searches of criminal history records since October 1989. 
According to expenditure and workload data provided to us by the FBI, the Bureau's costs were projected to average $18 for each fingerprint-based background search of criminal history records during fiscal year 1996. As table III.2 shows, this average included a handling charge of $2. Also, a surcharge of $6 was applied for each set of fingerprints processed for nonvolunteers. A complete check of criminal history records has both FBI and state agency components. Thus, in addition to the national check, four of the five states we studied (California, Tennessee, Texas, and Virginia) performed a fingerprint-based search of state records. Table III.3 shows selected states' fees for conducting fingerprint-based checks of individuals seeking paid or volunteer positions with organizations serving children. As table III.4 shows, California's fees for fingerprint checks of child care provider applicants ranged from $0 to $52, depending on the type of organization or agency involved and the speed of processing required.

Table III.4: California's Fees for Conducting Fingerprint-Based Checks of Child Care Providers

State fingerprint checks are free for all employees and volunteers at nonprofit youth service organizations and human resource agencies covered under California Penal Code section 11105.3. These organizations include Boy Scouts and Girl Scouts, sports leagues, nanny services, YMCAs, YWCAs, and newspapers (youth carrier supervisors). Volunteers at other nonprofit entities, including those licensed by the California Department of Social Services, can also get free state fingerprint checks under section 11105.3. However, information disseminated from these checks is restricted to arrests resulting in conviction (and arrests pending adjudication) for sex crimes, drug crimes, or crimes of violence.
This limited dissemination does not permit a volunteer to perform the duties of a paid employee. Before being allowed to perform such duties, the individual would be required to have a more comprehensive $52 state check through the Department of Social Services, during which all arrest and conviction information is obtained. Employees and volunteers at for-profit youth-serving organizations and human resource agencies pay a $32 fee, which is California’s standard fingerprint processing fee. This fee is based on the California Department of Justice’s reported costs for processing applicant fingerprints. Department officials told us the processing costs per applicant averaged $32.62, which consisted of $13.84 in direct processing costs, $11.30 for file improvements, and $7.48 for workload enhancements. According to California Department of Justice officials, the maintenance costs for applicant processing are high because the Department retains most applicant fingerprint cards and, thus, is able to later notify applicable organizations and entities of any subsequent arrests of the individuals. The $42 fee consists of the standard $32 fee plus an extra $10 for expedited service (guaranteed turnaround in 17 working days). According to California Department of Justice officials, the extra $10 supports additional staff dedicated solely to processing requests for expedited service. The $52 fee applies to employees and volunteers not exempt from fees at facilities licensed by the California Department of Social Services’ Community Care Licensing Division. This fee consists of the standard $32 fee, plus $10 for expedited service and an additional $10 to help subsidize state checks done for the Division’s fee-exempt providers (foster care, family day care, and residential child care and day care facilities with six or fewer children). 
According to statistics provided by the California Department of Justice, about one-half of the approximately 118,000 state checks done for the Department of Social Services during July 1995 through June 1996 were fee-exempt. Florida Department of Law Enforcement officials told us that the state does not perform fingerprint-based checks of criminal history records for purposes of employment and licensing. These officials explained that the state's computer system for fingerprint searches did not have the capacity to handle such applicant checks, whether for paid employees or for volunteers. Thus, our questions regarding the fees for and the actual costs of fingerprint checks were not applicable to Florida. However, for these noncriminal justice purposes, the state was performing name-based background checks and charging a fee of $15 for most applicants. The only exception was the $8 fee charged for applicant background checks required by the Florida Department of Health and Rehabilitative Services. For calendar year 1995, the Florida Department of Law Enforcement performed a total of 1,134,013 name-based searches for employment and licensing and in response to public requests. Further, 535,941 (or 47 percent) of the total requests for criminal history searches were accompanied by fingerprint cards, which the Department forwarded to the FBI for national checks. The state's records were insufficiently detailed for us to determine how many of these national checks involved child care positions. However, many of the requests for national checks were submitted by various state agencies, such as the Florida Department of Banking and Finance and the Florida Department of Insurance, that have no child care responsibilities. One exception is the Florida Department of Health and Rehabilitative Services, which is responsible for licensing and certifying facilities for the care of children. During 1995, this department requested over 100,000 national background checks.
By state statute, Tennessee's fee structure mirrors that of the FBI, which currently is $18 for volunteers and $24 for all others. Tennessee Bureau of Investigation officials told us that given this statutory basis for setting fees, the state has not attempted to calculate its actual costs. Texas' fee is $15 for fingerprint checks of applicants, whether volunteers or nonvolunteers. This fee amount, according to Texas Department of Public Safety officials, was established in 1990 on the basis of a study that calculated actual costs totaling $11.42 per applicant. According to the study, this total consisted of employee costs ($5.81); supervisory costs ($1.46); utility, materials, and supply costs ($3.65); and data entry costs ($0.50). Even though these costs for fingerprint searches totaled $11.42 per applicant, the study recommended a fee of $15. This amount, according to the study, would ensure that Texas fees were consistent with other states' fees for similar services. Texas officials told us that the fee was later changed to $17.25 as part of a general, across-the-board increase of 15 percent in all the state's applicable fees. The officials explained, however, that the state legislature "rolled back" the fee to $15 in 1996. In calendar year 1995, the Texas Department of Public Safety conducted 115,398 fingerprint-based searches of applicants. Of this total, the Department forwarded 15,287 requests to the FBI for national searches. Of these national searches, 1,195 involved applicants for positions involving interaction with children. Virginia's fee is $13 for fingerprint checks of applicants, whether volunteers or nonvolunteers. Virginia Department of State Police officials told us that this fee amount has been in effect for several years and, at the time of implementation, was set to match the FBI's then-current fee. The officials said that the state has not calculated or analyzed the actual costs of conducting fingerprint checks.
On the other hand, the officials noted that in 1993, the Department did analyze the costs for conducting name-based checks. At that time, according to these officials, the Department’s actual costs for a name-based check averaged $14.48 per applicant. The specific uses and results of fingerprint-based background checks were difficult to quantify in many cases because there were no reporting requirements and statistics were not routinely kept. Starting with the FBI, we tried to obtain the number of fingerprint-based criminal history checks relating to NCPA. The FBI performed 1,834,369 fingerprint-based criminal history checks in fiscal year 1995 for civil nonfederal applicants. However, the FBI was unable to disaggregate that figure to identify how many checks relating to NCPA were performed for child care purposes or volunteer organizations. For each of the five study states, we had similar difficulties obtaining comprehensive statistics on the use and results of national fingerprint-based checks for NCPA purposes. However, by focusing on selected job positions, organizations, or local jurisdictions within each state, we were able to identify situations clearly showing the usefulness of national fingerprint-based checks. According to California officials, state and FBI fingerprint-based background checks are requested on all individuals applying for credentials to work in California public schools, including new teachers, counselors, and administrators. The officials indicated that state checks have been conducted since 1951, and FBI checks since at least the 1970s. The California Commission on Teacher Credentialing is responsible for requesting the checks, reviewing the results, and determining whether applicants are or are not qualified. A California statute contains a list of offenses (e.g., drug-related and sexual assault offenses) that result in mandatory denials or revocation of credentials. 
In addition, a California statute provides the Commission with discretionary authority to deny credentials to any applicant who is guilty of the offenses listed therein. The Commission performs this function centrally for all of the state's 7,818 public schools (as of October 1994). These schools had a total of 213,389 full-time credentialed employees for the period July 1994 through June 1995. Applicants pay $32 for the California Department of Justice state check and $24 for the national FBI check. These fees are part of the total fee to obtain California credentials, and applicants are not reimbursed. According to Commission officials, obtaining background check results from the FBI takes approximately 4 months. For the period July 1995 through June 1996, the California Commission on Teacher Credentialing requested 27,564 state and FBI background checks. Of these total checks, 540 criminal history reports ("rap sheets") were received from the California Department of Justice, and 66 rap sheets were received from the FBI via the California Department of Justice. From July 1995 through June 1996, a total of 45 initial applicants were denied credentials for various reasons. Commission officials estimated that the fingerprint-based background checks were the basis for about 95 percent of all denials. The officials added that in one or two cases each year, the background checks result in an automatic denial of certification. The Commission did not have detailed statistics on the number of hits that resulted from FBI checks after the search of California's records found no criminal histories. Commission officials told us that national checks are a key component in protecting the safety of schoolchildren and are worth doing even if they reveal only a few criminal histories not contained in California's records.
The officials provided the following example of the usefulness of national checks: In 1996, an individual with a lifetime teacher certification from Texas noted on his California application that he had never been convicted of a felony. However, the FBI check requested by the Commission showed that the applicant had been convicted of sexual battery (rape) in Florida. This offense occurred before he was credentialed in Texas, which does not do national background checks as part of the teacher certification process. The California Commission might not have learned about this crime if not for the FBI check. The individual was teaching a special education program in California for 6 months before the results of the FBI check were received. There was no indication that any children were abused. The employee was dismissed. In addition to applicant background checks, if a credential holder is subsequently arrested or convicted of a crime, the California Department of Justice sends a "subsequent arrest notice" to the Commission. For the period July 1995 through June 1996, these notices resulted in 53 mandatory revocations, i.e., the individuals were convicted of a specified criminal offense involving drugs or sex. Also, the Commission imposed "interim suspensions" on another 39 individuals; these suspensions are required by California law when an individual is criminally charged with a specified sex offense or pleads "no contest" to specified serious criminal offenses. In calendar year 1995, the Florida Department of Law Enforcement forwarded to the FBI 270,435 requests for national fingerprint-based checks of noncriminal justice applicants. The Department's statistics did not show which agencies or entities in the state requested these checks. We focused our work in Florida on teachers and other school employees. Regarding applicants for certified instructional positions in Florida schools, state law does not specify disqualifying crimes.
However, one of the requirements for qualification is good moral character. Florida Bureau of Teacher Certification officials told us that requesting fingerprint-based background checks, reviewing the results, and determining whether the applicants are or are not qualified are their responsibilities. The officials added that the Bureau performs this function centrally for all of the state's 67 public school districts, which had a total of 132,080 teachers in 1995. Florida Bureau of Teacher Certification officials disqualified a total of 56 applicants for the 1995 school year (July 1, 1995, through June 30, 1996). However, the officials noted that even though the Bureau may disqualify as many as 100 applicants a year on the basis of criminal history records, most of these decisions are reversed on appeal. More specifically, these officials commented substantially as follows: The Florida Bureau of Teacher Certification has requested a total of about 25,000 national fingerprint-based checks annually in recent years. Checks conducted on applicants during school year 1995 resulted in identification of 1,079 individuals with criminal history records. Of these total hits, the Bureau determined that 56 individuals should not be certified to teach, and the Bureau provided each of these individuals a written notification of disqualification. After receiving such notification, 37 of the 56 individuals appealed to a centralized review board, which reversed all but 5 of the Bureau's noncertification decisions. Thus, after the appeals process, the remaining number of adverse personnel actions based upon criminal histories was 24 (i.e., the 5 unsuccessful appellants, plus the 19 applicants who did not appeal their disqualification notifications). In making its decisions, the review board considered the date of the offense, the severity of the offense, and any rehabilitation measures the applicant had taken (e.g., drug abuse counseling and treatment).
The review board does not view offenses such as petty theft or bad check writing as serious offenses. Similar to the Florida provision relating to instructional positions, Florida law also does not specify disqualifying crimes regarding applicants for noninstructional positions in the state's schools. But one of the requirements for qualification is good moral character. Each school district is responsible for determining whether the applicants or hirees are or are not qualified. We obtained available information from one school district in Florida—Leon County School District. In calendar year 1995, the district had a total of 5,653 teachers and noninstructional personnel, of whom 1,260 were newly hired. The district requested national fingerprint-based background checks on the 1,260 new personnel. In response to our inquiries, the district's personnel office could not readily disaggregate the total number of new hires into teacher and noninstructional staff categories. However, office staff did provide the following information: Of the fingerprint-based background checks requested in 1995 for noninstructional personnel, about 40 percent resulted in identification of criminal records. Of these staff, about 100 had a criminal history serious enough for the district to send each individual a letter asking for explanations about the crimes. After the Affirmative Action Director received full explanations and documentation from the employees, about 10 people were fired because of their criminal history records. Of those fired, about four or five appealed those decisions, and, as a result, two or three were reinstated. In summary, seven or eight noninstructional personnel remained dismissed on the basis of the criminal history record checks. The records showed that each individual had been convicted of a serious offense, such as drug possession or trafficking or aggravated battery, within the previous 7 years.
Under a Tennessee law, which took effect in January 1994 (see app. II), child welfare agencies can require state and national fingerprint-based background checks on all persons applying to work with children. In October 1995, the Tennessee Department of Human Services started requiring such checks on prospective foster care parents and social services employees who will be working with children. The Department did not plan to check foster care parents who were already in the system as of October 1995. The Department pays the $24 state fee and the $24 FBI fee. State check results are received in less than a month, and FBI results are received in 6 weeks to 2 months. From October 1995 through May 1996, the Tennessee Department of Human Services requested 1,293 state and FBI fingerprint checks for foster care applicants and social services employees. Of the 1,293 checks, 120 (or 9.3 percent) showed felony criminal records. Felony records included enticing a child to enter a house for immoral purposes, accessory to murder, aggravated assault with weapons on a family, smuggling drugs, delivery of drugs, receiving and concealing stolen property, and grand larceny. The national check identified 58 of the 120 felony records that had not been found via the state check. In one case, the FBI check revealed that a new foster care parent for four children had served a 3-year prison sentence in Alabama for enticing a child to enter a house for immoral purposes. As a result of the background check, the Tennessee Department of Human Services immediately removed the four children from the convicted felon's home. Department officials told us there was no evidence that any of the children had been abused. In another case, a foster care applicant told Department officials he was in the FBI's witness protection program but did not disclose why. The FBI check revealed that the applicant had been arrested in another state for accessory to murder.
Therefore, the foster child already placed in his home was removed. Under Texas law (see app. II), Big Brothers/Big Sisters of America is one of four nonprofit organizations specifically authorized to request national fingerprint-based background checks through the Texas Department of Protective and Regulatory Services. We selected Big Brothers/Big Sisters because it is the only nonprofit organization of the four authorized that requested national fingerprint-based criminal history checks. Of the 21 Big Brothers/Big Sisters affiliates in Texas, only 1 affiliate requested national fingerprint-based checks. In response to our inquiries, affiliate representatives commented substantially as follows: The affiliate requested a total of 98 national fingerprint-based checks during the period August 1, 1995, through July 17, 1996. Of this total, two applicants were found to have criminal records. The affiliate still accepted one of these individuals as a volunteer because the criminal history record involved an incident (theft under $20) that occurred about 22 years ago, and the other indicators in the screening process (e.g., interviews and references) showed no concerns. The other applicant was rejected as a volunteer because the criminal history record showed a drug possession conviction about 6 years ago; also, during the interview and screening process, the applicant exhibited behavior that raised some concerns. Under Virginia law (see app. II), 42 of the state’s 135 school boards are required to request state and national fingerprint-based background checks of applicants as a condition of employment. Under the law, the school boards must take into account charges or convictions of specified crimes. In calendar year 1995, approximately 10,000 state and national checks were conducted. The background checks are not part of a centralized credentialing process. 
Rather, each school board is responsible for requesting the checks, reviewing the results, and determining whether applicants are or are not qualified. The results are not shared with other counties. Therefore, a teacher moving from one county to another would require new checks, and a substitute teacher working in multiple counties would require multiple checks. We contacted personnel offices in 2 of the 42 school districts— Chesterfield and Henrico counties—to obtain views on the usefulness of state and FBI background checks. According to responsible officials, since the background checks have begun, over 100 individuals in these 2 school districts have been fired on the basis of their criminal history records. The officials commented that the background checks have revealed only two individuals with criminal records involving child abuse. In one case, for example, the criminal history record showed that the individual set fire to a house with children inside. Officials from both counties told us the checks are definitely a deterrent. One of the officials added that the checks would still be worth the cost even if they revealed no criminal records. Another official told us the checks are worth the cost if only one child is saved from abuse. In response to our inquiries, school personnel officials in Chesterfield County commented substantially as follows: With 55 schools, approximately 50,000 students, and about 5,800 employees (not including substitute teachers and volunteers), Chesterfield County is one of the largest public school districts in Virginia. The county has been conducting state and FBI fingerprint checks since July 1990. These checks cover all new full-time and part-time hires (teachers, janitors, food service workers, etc.) and rehires who have not been employed by the school district for more than 2 years. Substitute teachers and volunteers are not checked at this school district. 
Employees who were on board in July 1990 were not checked, and employees are not periodically rechecked. Chesterfield County pays the $13 state fee and the $24 FBI fee. State check results are received in about 2 weeks, and FBI results are received in about 2 months. From July 1, 1995, through June 5, 1996, Chesterfield County requested about 675 to 700 state and FBI checks, of which 32 (or about 4.6 percent) found criminal records. Although not specifically quantifiable, the majority of these criminal records involved Virginia offenses. The number of hits resulting from FBI checks (i.e., hits after the state’s checks found no criminal histories) has been relatively small. Since July 1990, the total in this category has been five or six hits. Of these, two or three cases resulted in the employee’s being dismissed. In one case, for example, a custodial worker in a Chesterfield County public school was found to have an outstanding fugitive warrant in Maryland for a traffic violation. After this was learned, the police were notified and the employee was dismissed. Also, in about six cases each year, employees are dismissed because the state or FBI check revealed that the employee falsified the application. Most of these employees worked in the custodial area, which raised a concern about theft, since these employees had unsupervised access to equipment and supplies. In response to our inquiries, school personnel officials in Henrico County commented substantially as follows: With 58 schools, over 38,000 students, and about 4,200 full-time employees, Henrico County is one of the 10 largest public school districts in Virginia. The county has been requesting state and FBI fingerprint background checks since July 1993. These checks cover all new full-time, part-time, and temporary employees (teachers, substitute teachers, janitors, food service workers, etc.) and rehires who have not been employed by the school district for more than 2 years. 
Employees who were on board in July 1993 were not checked, and employees are not periodically rechecked. County officials want to do state and national checks on all volunteers, but the school board historically has not wanted the checks. Henrico County pays the $13 state fee and the $24 FBI fee. State check results are received in about 2 to 3 weeks, and FBI results are received in about 4 weeks to 2 months. From July 1993 through June 1996, Henrico County requested approximately 3,800 state and FBI checks (about 1,200 a year) on new hires. Of this total, 137 (or 3.6 percent) resulted in identification of applicants with criminal records. The majority of these hits involved Virginia criminal records. As a result of these hits, 111 of the 137 new hires were fired. The other 26 new employees were not fired because the individuals (1) showed that information in the criminal history records was inaccurate or (2) had acknowledged their criminal history in completing the application form. The 111 firings were justified on the basis that the individuals had lied on their applications (claiming no criminal conviction) and not because of the nature of their criminal records. Ten or fewer of these 111 employees had criminal records identified by the FBI, following a state check showing no records. The FBI describes IAFIS as being a large, technologically complex system that will support the exchange of criminal history information among federal, state, and local agencies using a variety of media, standard formats, and communication protocols. Presently, fingerprint checks are initiated through the submission of criminal or civil 10-print fingerprint cards. During fiscal year 1995, the FBI received and processed over 9 million fingerprint cards submitted by federal, state, and local criminal justice organizations for criminal and applicant purposes. For many users, the development of IAFIS should eliminate the need to transport and process paper fingerprint cards. 
Fingerprints are to be captured electronically at booking stations or other locations and transmitted through a high-speed telecommunications network to an applicable state agency and the FBI for processing. Also, the FBI’s present inventory of criminal fingerprint cards is to be electronically scanned, converted into digital images, and stored in an IAFIS database to facilitate on-line retrieval. To meet the goal of providing computerized criminal history and identification services, IAFIS is designed to have three major subsystems or components: The Interstate Identification Index is an existing federal-state cooperative system for exchanging criminal history records. The Index contains federal criminal history files and also provides access to state-level centralized repositories of criminal history records. With the development of IAFIS, some or all of the Index’s hardware and software is to be replaced. A new Identification Tasking and Networking subsystem is to provide the workstations, workflow control, internal telecommunications, and image files necessary to support “paperless” processing. A new Automated Fingerprint Identification System is to provide fingerprint searching capabilities. The System is to first digitize the fingerprint image (if not already digitized, as it is when received from a scanning device). Then, in processing the digitized image, searchable fingerprint characteristics are to be extracted (e.g., ridge-ending locations and orientations). In a background check, the appropriate subfile of fingerprints is to be searched for the applicable characteristics. A resulting candidate list of file fingerprints (the most probable matches) is to be generated and provided to a fingerprint examiner, who decides which (if any) of the candidates represents a positive identification. 
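The candidate-list step just described can be illustrated with a toy matcher. The following is a minimal sketch, assuming each print is reduced to a set of opaque feature codes; the encoding and scoring are illustrative only and do not represent IAFIS's actual matching algorithm:

```python
# Toy candidate-list generation in the spirit of the AFIS step described above.
# Real systems extract ridge-ending locations and orientations; here each print
# is reduced to a set of opaque feature codes purely for illustration.

def candidate_list(search_print, subfile, top_k=3):
    """Rank file prints by shared features and return the most probable matches.

    As in IAFIS, the final identification decision is left to a human examiner.
    """
    scores = {fid: len(search_print & features) for fid, features in subfile.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [(fid, scores[fid]) for fid in ranked[:top_k] if scores[fid] > 0]

subfile = {
    "file-001": {"A", "B", "C", "D"},
    "file-002": {"A", "X", "Y"},
    "file-003": {"Q", "R"},
}
print(candidate_list({"A", "B", "C"}, subfile))  # file-001 ranks first (3 shared features)
```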
By “integrating” these three components—an upgraded Interstate Identification Index capability, a new Identification Tasking and Networking subsystem, and a new Automated Fingerprint Identification System—IAFIS is to provide a more efficient interface for state and local users. In 1996, in response to concerns about cost increases and schedule slippages, the FBI adopted a new approach for developing and deploying IAFIS. This approach, as shown in table V.1, involves six separate segments, or “builds.” The build dates will not be finalized until the completion of negotiations with the various development contractors.

Build A: Developed and deployed an automated capability to conduct searches from fingerprints found at a crime scene against a 200,000-record database.

Build B: Increase the searchable database in build A to 500,000 files.

Build C: Provide a limited fingerprint search capability (about 10 percent of eventual capacity) and a stand-alone fingerprint image repository.

Build D: Integrate a high-volume fingerprint scanning capability and the capability to compare images on a computer screen. This build is intended to decrease fingerprint card processing time and decrease retrieval time for candidate matches.

Build E: Allow some selected remote users to search the IAFIS database and retrieve images.

Build F: Complete the development. Also add several new services, including the storage and retrieval of mug shots.

Although the FBI does not have a schedule specifically showing when the states, or which states, will use IAFIS for applicant criminal history check purposes, table V.1 does show that state participation in the system is not to begin until “build E,” which is scheduled to be on-line in October 1998. At that time, according to the FBI, a “small number” of other federal and state users are to be selected to implement IAFIS capabilities on a trial basis. 
FBI officials told us that build E would provide an opportunity for checking the system in an operational environment before the remaining users are accepted in build F. For nearly a century, the criminal justice community has used fingerprint identification. Over the last 2 decades, manual fingerprint processing has given way to increased use of automation. Today, many states and cities have some form of automated fingerprint identification system. Thus, in designing IAFIS, the FBI was very cognizant that the “connectivity” of the integrated system with the state and local law enforcement community would be a challenge. To prepare for this challenge, the FBI worked with the National Institute of Standards and Technology to hold a series of workshops nationwide during 1990 and 1991. These forums were attended by officials from federal, state, and local law enforcement agencies and by representatives from all of the major vendors for automated and live-scan fingerprint equipment. The resulting national standards for the transmission of fingerprint data have been approved by the American National Standards Institute. Among other purposes, these standards were to provide a basis for state and local law enforcement officials to begin planning to ensure that their agencies had the capability to participate in the new federal system. However, it is important to note that no agency is required to participate in IAFIS. Each state can decide the extent to which it wants to be “connected” to and compatible with IAFIS. That is, each state must decide for itself what equipment and system changes or upgrades are needed (if any), desirable, and affordable. Recognizing that the various states are at different stages of automation with respect to fingerprint identification services, the FBI is planning to accommodate different levels of participation in IAFIS—ranging from minimal to full participation. 
At the minimal end, for example, some states may decide to continue using the U.S. Postal Service to transmit paper fingerprint cards. For this reason, as noted earlier, IAFIS will have a “Fingerprint Image Capture System” that will allow the FBI to scan and digitize data from these cards. The fuller levels of participation will be dependent upon the states’ already having or later acquiring (1) standards-compatible equipment and/or (2) special purpose computer programs (“controllers”) to provide format conversions. Because effective background checks depend upon the availability of reliable records, we obtained information about the status of the five states’ efforts to automate their criminal history records. As of the time of our review, the most recent biennial survey (conducted by SEARCH Group, Inc.) provided a report of each state’s status as of the end of calendar year 1995. At that time, as table V.2 shows for the five states covered in our review, all five had fully automated the master name index, and three of the five had fully automated the arrest records. Also, as further shown in table V.2, even though Virginia had the lowest percentage of automated arrest records among the five selected states, Virginia also had the highest percentage of automated records for arrests within the past 5 years that had final dispositions (e.g., dismissals, acquittals, or convictions) recorded. Table V.2: Overview of Selected States’ Criminal History Records Systems (as of December 31, 1995) Major contributors: Danny R. Burton, Assistant Director, Administration of Justice Issues; Jeanne M. Barger, Evaluator-in-Charge; R. Eric Erdman, Evaluator; Donna B. Svoboda, Evaluator. 
Pursuant to a congressional request, GAO reviewed certain implementation issues under the National Child Protection Act of 1993, focusing on: (1) the extent to which selected states have enacted statutes authorizing national background checks of child care providers, the fees charged for background checks of volunteers, and how these fees compare with the actual costs in these states; (2) the effects these states' laws and related fees had on volunteerism; (3) whether selected state agencies and other organizations found national background checks a useful screening tool, and how often fingerprint-based background checks identified individuals with criminal histories; and (4) the status of the Integrated Automated Fingerprint Identification System (IAFIS) being developed by the Federal Bureau of Investigation (FBI), and the selected states' plans for using the system when it becomes available. GAO found that: (1) although there are considerable differences in scope or coverage, each of the five study states has enacted statutes authorizing national fingerprint-based background checks regarding paid and, or volunteer positions at various types of child care-related organizations; (2) three of the five states, California, Tennessee, and Texas, have authority to request national checks of volunteers at nonprofit youth-serving organizations; (3) however, these states do not require that national checks be done, and few checks have been requested; (4) a complete check of criminal history records has both FBI and state agency components; (5) the FBI's fee for national fingerprint-based background checks of volunteer applicants is $18; (6) the FBI projected that its costs for a national check would average $18 in 1996; (7) state laws and related fees did not appear to have negatively affected volunteerism at the various nonprofit youth-serving organizations GAO contacted, since applicable statutes permitted rather than required fingerprint-based background checks, and few 
had been requested; (8) officials at the various organizations GAO contacted said that national checks are or could be a useful tool that should supplement rather than supplant other important screening practices; (9) these officials told GAO that they believe the prospect of being subjected to a national background check deters an indeterminate but significant number of individuals with unacceptable criminal histories from even applying for certain positions; (10) for selected job positions, organizations, or local jurisdictions in the five study states, GAO found that national checks detected some applicants with criminal histories who may not have been detected by less comprehensive practices, including state background checks; (11) according to the FBI, in October 1998 IAFIS is scheduled to be available to a few selected states, for the purposes of conducting national fingerprint checks of applicants, with all other states that have appropriate technology coming online by July 1999; and (12) once IAFIS is fully implemented, the FBI expects that the processing time for national fingerprint checks will be reduced from 7 weeks (not including mailing time) under current processes to about 24 hours.
FTA’s primary source of funding for new fixed-guideway projects or extensions to existing fixed-guideway systems is the Capital Investment Grant program, which is a discretionary program funded from annual appropriations rather than the Highway Trust Fund. Over the past 10 fiscal years, FTA has provided states, cities, and other localities with almost $18 billion in federal funding to plan and build new projects through this program. Projects eligible to compete for federal funding under the Capital Investment Grant program include:

Commuter rail—systems that operate along electric- or diesel-propelled railways and provide train service for local, short-distance trips between a central city and adjacent suburbs.

Heavy rail—systems that operate on electric railways with high-volume traffic capacity and are characterized by separated right-of-way, sophisticated signaling, high platform loading, and high-speed, rapid-acceleration rail cars operating singly or in multi-car trains on fixed rails.

Light rail—systems that operate on electric railways with light-volume traffic capacity and are characterized by shared or exclusive rights-of-way, low or high platform loading, single- or double-car trains, and overhead electric lines that power rail vehicles.

Streetcars—systems that are similar to light rail but distinguishable because they are usually smaller and designed for shorter routes, more frequent stops, and lower travel speeds.

Bus rapid transit—systems in which the majority operates in a separated right-of-way during peak periods and that include features emulating the services provided by rail transit, such as defined stations, traffic signal priority, short-headway bidirectional service for a substantial part of weekdays and weekend days, pre-board ticketing, platform-level boarding, and separate branding. Fixed-guideway bus rapid transit systems may include portions of service that are non-fixed guideway. 
In addition, bus rapid transit can also include corridor-based bus rapid transit projects, which have characteristics similar to fixed-guideway systems but in which the majority of the project does not operate in a separated right-of-way dedicated for public transportation use during peak periods.

Ferries—systems comprised of vessels that operate over a body of water and are generally steam or diesel powered.

These projects are designed and implemented by project sponsors, which are usually local transit agencies, often in coordination with local metropolitan-planning organizations. Within the Capital Investment Grant program, project sponsors have typically applied for funding as either a New Starts or a Small Starts project. Under MAP-21, New Starts projects include new fixed-guideway projects, extensions to fixed-guideway projects, and fixed-guideway bus rapid transit projects that have a total capital cost of $250 million or greater or a Capital Investment Grant program contribution of $75 million or greater. Small Starts projects include new fixed-guideway projects, extensions to fixed-guideway projects, and both fixed-guideway and corridor-based bus rapid transit projects that have a total net capital cost of less than $250 million and a Capital Investment Grant program contribution of less than $75 million. Prior to the enactment of MAP-21, the Capital Investment Grant program was governed by statutory provisions put in place under SAFETEA-LU. MAP-21, which was enacted in July 2012, made numerous changes to the program. For example, MAP-21 reduced the number of phases in the process that projects must follow to be eligible for and receive federal funding. Under SAFETEA-LU, project sponsors were required to identify the transportation needs of a specific corridor and evaluate a range of alternatives to address locally identified problems in that corridor during what was called the alternatives analysis phase. 
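The dollar thresholds above reduce to a simple decision rule. A minimal sketch follows (the thresholds and category names come from the report; the function itself is illustrative and ignores details such as the "net" capital cost qualifier):

```python
# MAP-21 funding-category thresholds as described above (illustrative helper).

def funding_category(total_capital_cost, cig_contribution):
    """Classify a project as New Starts or Small Starts under the MAP-21 thresholds."""
    if total_capital_cost >= 250_000_000 or cig_contribution >= 75_000_000:
        return "New Starts"
    return "Small Starts"

print(funding_category(300_000_000, 50_000_000))  # New Starts (cost at/above $250 million)
print(funding_category(200_000_000, 40_000_000))  # Small Starts (below both thresholds)
```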
To complete this phase, project sponsors selected a locally preferred alternative to be advanced for further development after costs, benefits, and impacts of each alternative were analyzed. However, under MAP-21 the process relies on the review of alternatives performed during the metropolitan transportation planning and the National Environmental Policy Act of 1969 (NEPA) environmental review processes. In addition, MAP-21 created a new category of eligible projects called Core Capacity Improvement projects, which are substantial corridor-based capital investments in existing fixed-guideway systems that increase the capacity of a corridor by at least 10 percent in a corridor that is at or above capacity today or is expected to be within 5 years. Core Capacity Improvement projects can include expanding system platforms, the acquisition of real property, rights-of-way, and rolling stock associated with increasing capacity, among other things, and cannot include elements to improve general station facilities, parking, or elements designed to maintain a state of good repair. Under MAP-21, any project that fits the definition of a new fixed-guideway project or an extension to an existing fixed-guideway system is eligible to compete for federal funding under the Capital Investment Grant program. Once a project sponsor decides to seek Capital Investment Grant program funding it submits an application to FTA consisting of information on the proposed project, such as a description of the transportation problem the project is seeking to address, among other requirements. If accepted into the program, the process that project sponsors must follow varies depending on whether the project is a New Starts, Small Starts, or Core Capacity Improvement project (see fig. 1). New Starts and Core Capacity Improvement projects. 
New Starts and Core Capacity Improvement projects must complete two phases in the development process to be eligible for a Construction grant agreement—Project Development and Engineering. During the Project Development phase, among other requirements, the Secretary must determine that the project has been selected as the locally preferred alternative at the end of the environmental review process. Under MAP-21 changes to the Capital Investment Grant program, New Starts and Core Capacity Improvement projects have 2 years after the day in which they enter into Project Development to complete the activities required to obtain a project rating by FTA, a process that is discussed further below. If approved to advance into the second phase of the development process—Engineering—project sponsors must, among other things, develop a firm and reliable cost, scope, and schedule for the project and obtain all non-Capital Investment Grant program funding commitments. Small Starts projects. Small Starts projects complete a similar but more streamlined process that requires project sponsors to complete only one phase—Project Development—to be eligible for a Construction grant agreement. During this phase, the Secretary must also determine that the project has been adopted as the locally preferred alternative and the project sponsor must complete the environmental review process. To complete Project Development, project sponsors must develop a firm and reliable cost, scope, and schedule for the project and obtain all non-Capital Investment Grant program funding commitments, among other things. Before FTA can recommend a project to Congress for funding, it is required by law to rate the project by using a number of criteria designed to provide important information about project merit. 
While New Starts and Small Starts project justification criteria have changed over time, there are currently six criteria: mobility improvements, environmental benefits, cost-effectiveness, economic development, land use, and congestion relief. In contrast, the project justification criteria for Core Capacity Improvement projects are mobility improvements, environmental benefits, cost-effectiveness, economic development, congestion relief, and existing capacity needs of a corridor. FTA is also required to evaluate and rate the local financial commitment to the project and the project sponsor’s ability to operate the project and continue to operate the existing transit system. FTA must rate each individual criterion on a five-point scale: low, medium-low, medium, medium-high, or high. As we have previously reported, FTA combines a summary project justification rating, based on the ratings of the six criteria, with a summary local financial commitment rating to arrive at a project’s overall rating, as shown in figure 2. To advance through the development process and be eligible for funding, proposed projects must receive at least a medium overall project rating (which requires at least a medium rating for both the summary project justification and the summary local financial commitment). In order to recommend a project for a grant agreement in the President’s budget, FTA considers the evaluation and rating of the project under the specified criteria, the availability of Capital Investment Grant program funds, and the readiness of the proposed project. Projects that compete for Capital Investment Grant program funding are formally overseen by FTA with the help of contractors, who assist FTA with oversight of planning, construction, and financing of projects throughout the development process. 
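The rating roll-up described above, under which a project advances only if both summary ratings are at least medium, can be sketched as follows; the numeric encoding of the five-point scale is an assumption made for illustration:

```python
# Roll-up of the two summary ratings described above (illustrative sketch).
# Encoding the five-point scale as list positions is an assumption, not FTA's method.

SCALE = ["low", "medium-low", "medium", "medium-high", "high"]

def advances(project_justification, local_financial_commitment):
    """A project advances only if both summary ratings are at least medium."""
    medium = SCALE.index("medium")
    return (SCALE.index(project_justification) >= medium
            and SCALE.index(local_financial_commitment) >= medium)

print(advances("medium-high", "medium"))  # True: both at or above medium
print(advances("high", "medium-low"))     # False: financial commitment below medium
```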
FTA and its contractors evaluate each project’s risk, scope, cost, schedule, financial plan, and project management plan, as well as the project sponsor’s technical capacity and capability, before recommending a project for funding. Throughout the development process, project sponsors submit periodic updates to FTA on different aspects of their projects, such as on project cost, schedule, projected ridership, and the financing of the projects. FTA maintains its headquarters in Washington, D.C., with 10 regional offices throughout the continental United States, to assist with project oversight. As mentioned previously, this report focuses on the statutory, regulatory, and other FTA requirements applicable to the Capital Investment Grant program under MAP-21. In December 2015, the FAST Act was enacted. In addition to significantly altering or repealing some of the MAP-21 requirements, the FAST Act also made other changes to the Capital Investment Grant program’s processes. According to FTA officials, some of those key changes include: (1) raising the dollar threshold for eligibility for New Starts and Small Starts projects, (2) increasing the number of projects eligible for funding by allowing joint public transportation and intercity passenger rail service and commuter rail projects to be eligible for funding, and (3) eliminating a requirement that corridor-based bus rapid transit projects must provide weekend service to be eligible for funding. We plan to examine FTA’s implementation of the FAST Act in future work on the Capital Investment Grant program. FTA has made progress implementing most of the key changes MAP-21 made to the Capital Investment Grant program. 
As shown in table 1, FTA has issued policy guidance outlining the new review and evaluation process and criteria for New Starts, Small Starts, and Core Capacity Improvement projects and also provided project sponsors with instructions on how they can request to pre-qualify for a satisfactory rating based on the characteristics of their project, otherwise known as warrants. However, FTA has not completed the rulemaking required to fully implement the MAP-21 changes or fully addressed all requirements, such as the requirement to establish an evaluation and rating process for programs of interrelated projects, all of which we discuss below. FTA officials told us they are working toward addressing the remaining requirements. FTA has promulgated new rules for the Capital Investment Grant program but plans to initiate the rulemaking necessary to fully implement the changes MAP-21 made to the program in the future. Specifically, MAP-21 required FTA to issue rules establishing an evaluation and rating process for new fixed-guideway capital projects as well as Core Capacity Improvement projects. In January 2013, FTA issued a final rule establishing a new regulatory framework for the evaluation and rating of New Starts and Small Starts projects. FTA initiated this rulemaking—by issuing a Notice of Proposed Rulemaking—prior to the enactment of MAP-21, and FTA’s final rule covers portions of the evaluation and rating requirements for New Starts and Small Starts projects that MAP-21 did not significantly change. According to FTA, future rulemaking will cover new items included in MAP-21 that have not yet been the subject of the rulemaking process, such as the evaluation and rating process for Core Capacity Improvement projects and the revised processes for New Starts and Small Starts projects. FTA officials told us they plan to address the remaining requirements of MAP-21 and now the FAST Act in future rulemaking. 
They noted that they still have to review the changes the FAST Act made to the Capital Investment Grant program and that factors outside of their control could delay their efforts. FTA provided project sponsors with updated policy guidance for the Capital Investment Grant program in both 2013 and 2015 and plans to update its policy guidance again in 2017. MAP-21 required FTA to issue policy guidance specifying the review and evaluation process and criteria for new fixed-guideway capital projects and Core Capacity Improvement projects and issue updated guidance each time FTA makes significant changes to the rating process and criteria, but not less frequently than once every 2 years. Concurrent with the January 2013 issuance of the final rule, FTA solicited public comment on its proposed policy guidance for New Starts and Small Starts projects and, in August 2013, issued policy guidance covering the evaluation and rating process for New Starts and Small Starts projects. In April 2015, FTA again solicited public comment on its proposed policy guidance for the evaluation and rating process for Core Capacity Improvement projects along with other topics not included in FTA’s August 2013 guidance, such as the new congestion relief criterion and the ways in which projects can qualify for warrants. Subsequently, FTA issued updated policy guidance for the program in August 2015. FTA has stated its August 2015 guidance will serve as a guide for running the Capital Investment Grant program until it completes the rulemaking to fully implement the MAP-21 changes and now the requirements of the FAST Act. 
In addition to covering the evaluation and rating process for Core Capacity Improvement projects, FTA’s August 2015 policy guidance also:

- Set a deadline for project development: MAP-21 specified that New Starts and Core Capacity Improvement projects have 2 years after the day on which they enter into Project Development to complete the activities required to obtain a project rating by FTA. In addressing this requirement, FTA’s policy guidance encourages project sponsors to begin planning early, noting that project sponsors may wish to conduct early work, such as initiating the environmental review process, prior to requesting entry into Project Development.

- Implemented a new congestion relief criterion: MAP-21 added congestion relief as a project justification criterion for projects while removing operating efficiencies as a criterion, and under FTA’s policy guidance, congestion relief is calculated based on the number of new weekday linked transit trips that are projected to result from a project’s implementation.

- Utilized the new definition of bus rapid transit as set out in MAP-21: According to an FTA official, the new definition of bus rapid transit represented a significant change because it impacts funding eligibility. For example, the new definition required eligible bus rapid transit projects to have short headway bi-directional service for a substantial part of weekdays and weekend days, which was not the case under SAFETEA-LU. FTA, in turn, defined the interval of time required for service during peak periods and during other times of the day and made other related determinations.

FTA officials told us they anticipated soliciting public comment on FTA’s policy guidance again later this year or in 2017 in order to meet the MAP-21 requirement that FTA issue new guidance no less than every 2 years. FTA plans to address the programs of interrelated projects provisions of MAP-21 through future rulemaking and policy guidance updates.
FTA officials told us that before they could begin working to address these provisions, they first needed to establish the evaluation and rating process for Core Capacity Improvement projects because a Core Capacity Improvement project could be one of the interrelated projects. FTA’s August 2015 policy guidance covers the evaluation and rating process for Core Capacity Improvement projects; however, officials also said that some aspects of the law related to programs of interrelated projects were unclear, making them difficult to implement. For example, MAP-21 did not specify which evaluation criteria FTA should use to rate programs of interrelated projects that include more than one type of project. At the time of our review, FTA was working with Congress to address these issues and, in December 2015, the FAST Act was enacted, which officials told us provided the clarification they sought. FTA officials told us they plan to address these provisions in future rulemaking and policy guidance updates; however, they had no firm date for when these provisions would be implemented and noted it would take time. Figure 3 shows an illustrative example of a proposed program of interrelated projects consisting of two Core Capacity Improvement projects and one Small Starts project in Dallas, Texas. FTA is finalizing the development of a tool that will help officials determine the level of review required of project sponsors based on a number of risk factors, such as the total cost and complexity of a proposed project and the project sponsor’s in-house technical capacity and capability. According to FTA officials, this tool, once complete, will address the MAP-21 requirement that FTA use an expedited technical-capacity review process for project sponsors under certain circumstances.
Specifically, the expedited review would be used for project sponsors that have recently and successfully completed a project that achieved budget, cost, and ridership outcomes consistent with or better than projections and that has demonstrated continued staff expertise and other resources necessary to implement a new project. FTA officials estimated that the development of this tool would be completed over the next few months. At the time of our review, FTA had provided project sponsors with instructions on how to request the use of warrants; however, it was too early to tell the extent to which FTA will be able to make greater use of warrants. Warrants are ways that proposed projects can pre-qualify for a satisfactory rating on a given criterion based on the characteristics of a project or the project corridor as long as the Capital Investment Grant program’s share of the project does not exceed $100 million or 50 percent of the project’s cost and the applicant certifies that its existing public transportation system is in a state of good repair. For example, New Starts projects can qualify for an automatic rating of medium for some criteria as long as the total capital cost of the proposed project and the number of existing weekday transit trips in the corridor meet certain eligibility criteria, among other things. FTA’s August 2015 policy guidance specified the parameters that FTA will use to determine if projects are eligible for warrants and provided project sponsors with instructions on how to request the use of warrants. FTA officials told us that for the most recent rating cycle—which is also the first rating cycle in which FTA allowed the use of expanded warrants—three project sponsors requested warrants and FTA determined two were eligible. According to FTA officials, it will take several rating cycles and feedback from project sponsors before the officials will have enough information to assess the effect of expanded warrants.
The selected project sponsors we contacted were generally supportive of the changes MAP-21 made to the Capital Investment Grant program and of FTA’s implementation of the changes. However, the project sponsors also told us they were concerned about the potential impact some of the changes—such as locking in funding at entry into Engineering and requiring New Starts and Core Capacity Improvement projects to complete Project Development within 2 years—might have on project sponsors. In addition, while the number of projects in the Capital Investment Grant program has increased by about 70 percent since 2012, project sponsors also told us it was too early to tell the extent to which the MAP-21 changes will help expedite projects through the program. A prevalent theme from our discussions with representatives from 13 project sponsors was that they generally support the changes MAP-21 made to the Capital Investment Grant program, such as: (1) streamlining the project development process, (2) establishing Core Capacity Improvement projects as a new category of eligible projects, (3) instituting a 2-year requirement for New Starts and Core Capacity Improvement projects to complete Project Development, and (4) revising the evaluation and rating process. Representatives from 9 of the 13 project sponsors we interviewed told us that the changes streamlined the project development process by decreasing the number of time-consuming reviews FTA undertakes or by eliminating what these representatives considered to be burdensome requirements, such as the alternatives analysis requirement under SAFETEA-LU. According to the representatives we interviewed, streamlining should help expedite projects through the program because fewer FTA reviews decrease the amount of work project sponsors need to perform prior to submitting information to FTA for review.
According to APTA representatives, the elimination of the alternatives analysis requirement was a particularly positive development for project sponsors because project sponsors devoted significant resources to analyzing alternatives prior to requesting entry into the Capital Investment Grant program. One of the MAP-21 changes that some project sponsors indicated they were supportive of is the addition of Core Capacity Improvement projects. Representatives from one project sponsor said the addition of these projects is a positive development because these projects give project sponsors options to increase the capacity of a system as ridership increases, while two others noted that the addition of Core Capacity Improvement projects expands project eligibility for projects that would likely not have rated favorably under New Starts criteria. According to representatives from one of these project sponsors, these projects expand eligibility because Core Capacity Improvement projects are designed to increase the capacity of existing corridors, not add extensions to an existing system. Figures 4 and 5 provide information on the two Core Capacity Improvement projects we visited for this review—Dallas Area Rapid Transit’s (DART) platform extensions project and Metropolitan Transportation Authority’s (MTA) power improvements project in New York City. Representatives from 6 of 13 project sponsors also indicated that they were generally supportive of the MAP-21 requirement that New Starts and Core Capacity Improvement projects complete Project Development within 2 years. Representatives from two project sponsors told us that requiring project sponsors to complete more work, such as initiating the environmental review process, prior to entering Project Development should help expedite a project’s progress through the program because completing this work decreases the amount of work project sponsors need to complete while in Project Development.
Further, representatives from one project sponsor indicated that this change should also deter project sponsors that do not yet have defined projects from entering the program. However, as discussed below, most of the project sponsors also raised some concerns about the 2-year completion deadline. In addition, representatives from 12 of the 13 project sponsors told us that they were generally supportive of the changes MAP-21 made to the evaluation and rating process. Representatives from one project sponsor noted, for example, that the MAP-21 changes have greatly simplified and streamlined the review process and made it more transparent. Representatives from another project sponsor also noted that the changes required FTA to implement more evaluative measures that take into account improvements that benefit existing riders, such as measures designed to reduce travel time, rather than focusing solely on the addition of new riders. However, project sponsors also raised some concerns regarding certain aspects of the MAP-21 changes. Representatives from 11 of the 13 project sponsors told us that requiring New Starts and Core Capacity Improvement projects to complete Project Development activities within 2 years could pose a challenge for project sponsors—for example, increasing project sponsors’ costs because project sponsors may have to perform more work prior to entering Project Development. These representatives noted that such work is not eligible for pre-award authority under MAP-21. In FTA’s August 2015 policy guidance, FTA acknowledged that it may be challenging for certain proposed projects to complete Project Development within 2 years. However, FTA also noted that the intent of the MAP-21 changes was to help projects make quick progress and not linger in the program, and FTA encouraged project sponsors to perform whatever work they feel necessary prior to requesting entry into Project Development.
Representatives from 5 of the 13 project sponsors indicated that locking in Capital Investment Grant program funding at entry into the Engineering phase could be too early in the development process and could pose a challenge because some projects may have yet to develop realistic cost and schedule estimates. According to these representatives, locking in funding at entry into Engineering increases the risk of escalating costs to project sponsors—costs which project sponsors would be responsible for—and is a change from SAFETEA-LU, under which funding was locked in prior to a project being recommended for a grant agreement. APTA representatives told us that some projects may spend more time in Project Development as a result of this change, in order to help ensure that project sponsors develop more mature cost estimates before locking in funding. According to FTA, project sponsors, not the federal government, should bear the risk of cost overruns once a project enters Engineering. FTA officials noted that the project sponsor determines when to proceed to Engineering and thus is responsible for ensuring that a project’s cost estimates are supported by sufficient engineering and design work. Representatives from 10 of the 13 project sponsors voiced various concerns regarding some of the changes MAP-21 made to the evaluation and rating process. For example, representatives from 4 project sponsors told us that they thought the ridership measure of the new congestion relief criterion appeared biased toward more mature regions with legacy transit ridership compared to fast-growing regions with emerging transit ridership, or, according to representatives from one of these project sponsors, toward modes that transport a greater number of passengers, such as light rail projects.
FTA has acknowledged limitations with the ridership measure and noted it intends to continue to refine the congestion relief measure over time with input from the transit industry and experience gained through its implementation of the MAP-21 changes. Representatives from 11 of 13 project sponsors indicated that they were generally satisfied with FTA’s implementation of the MAP-21 changes. For example, representatives from four project sponsors said that FTA has made a good effort to listen to and incorporate many of the recommendations offered by project sponsors. APTA representatives similarly told us that FTA has done an excellent job engaging the transit industry in trying to streamline the Capital Investment Grant program. In addition, representatives from 11 of 13 project sponsors said that they were generally supportive of the policy guidance FTA has issued since the enactment of MAP-21. For example, representatives from five project sponsors said FTA’s policy guidance has been comprehensive and useful in explaining how FTA will implement the MAP-21 changes and describing what FTA expects of project sponsors. Furthermore, representatives from all 13 project sponsors said that FTA has continued to provide support to project sponsors prior to entry into Project Development, such as during the application process, as well as throughout the program, as it has worked to implement the MAP-21 changes. For example, project sponsors noted that FTA continues to provide checklists, roadmaps, and technical assistance, in addition to its policy guidance updates and reporting instructions. FTA officials noted that they provide ongoing technical assistance on a routine basis during each of their conversations with project sponsors. Although project sponsors were generally satisfied with FTA’s efforts thus far, they pointed out that not all MAP-21 changes, such as the programs of interrelated projects provisions, have been implemented yet. 
In addition, representatives from 9 of the 13 project sponsors told us they thought it took a long time for FTA to issue some of its policy guidance. FTA officials noted that by law they are required to issue new policy guidance for the Capital Investment Grant program no less than every 2 years and emphasized that by law they are also required to invite and respond to public comment on their guidance via the Federal Register—requirements that are time-consuming to comply with. Representatives from 11 of the 13 project sponsors also offered various suggestions regarding how FTA could enhance the support it provides project sponsors, such as by providing checklists for different types of projects (e.g., design-build, operate-maintain, or public-private partnerships) or by increasing the number of training opportunities it provides project sponsors. Since 2012, the total number of projects in the Capital Investment Grant program has increased by 70 percent, from 37 projects as of February 2012 to 63 projects as of February 2016, as shown in figure 6. FTA officials, selected project sponsors, and representatives from APTA largely attributed this growth to the fact that under MAP-21, FTA is no longer required to rate proposed projects prior to their entry into the Capital Investment Grant program. While FTA officials told us they view increased participation in the program as an opportunity to help improve public transit in communities across the country, they also said such growth presents challenges, noting that FTA’s resources to review and evaluate projects have largely remained flat over the last several years. Further, they noted that participation in the program by Small Starts projects is increasing—since 2012, Small Starts projects, as a percentage of the total number of projects, increased from about 24 percent to more than 50 percent—and that Small Starts project sponsors typically have little experience constructing major capital projects.
Consequently, FTA often provides those project sponsors with greater levels of technical assistance and support. FTA officials told us they have requested additional funding from Congress to address these challenges. They also noted that absent being given additional resources, they cannot spend as much time providing technical assistance or evaluating projects. While the number of projects in the Capital Investment Grant program has increased since the enactment of MAP-21, we found that limited data were available to assess whether projects were progressing through the program more quickly compared to under SAFETEA-LU. For example, at the time of our review only 4 projects had approached the 2-year deadline to complete Project Development. According to FTA officials, 3 of these projects completed the activities required to obtain a project rating from FTA before their 2-year deadlines passed while the fourth requested to postpone entry into Engineering to complete additional design work and address local funding issues. Representatives from 8 of the 13 project sponsors we spoke with and representatives from APTA also felt that it was too early to tell the extent to which the MAP-21 changes will help expedite projects through the program. For example, among other things, representatives from these project sponsors told us that while MAP-21 consolidated the number of phases in the development process it was not yet apparent to them how this might affect their projects since they perceived they would still have to complete the same amount of work. In discussing this issue with FTA, officials emphasized that projects were not far enough along for FTA to determine whether the MAP-21 changes are expediting projects through the program. We provided a draft of this report to DOT for review and comment.
In its comments, which we have reproduced in appendix II, DOT noted that it is committed to continuing its efforts to improve the Capital Investment Grant program while ensuring that project evaluations provide important information to decision makers. DOT also provided technical comments that we incorporated where appropriate. We are sending copies of this report to interested congressional committees and the Secretary of the Department of Transportation. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or GoldsteinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report discusses: (1) the Federal Transit Administration’s (FTA) progress in implementing changes the Moving Ahead for Progress in the 21st Century Act (MAP-21) made to the Capital Investment Grant program and (2) how selected project sponsors view the MAP-21 changes and FTA’s implementation of those changes. We focused our work on selected statutory requirements contained in MAP-21 that were not significantly altered or repealed by the Fixing America’s Surface Transportation Act (FAST Act). To address our objectives, we reviewed the relevant provisions of MAP-21, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), and the FAST Act. We also reviewed FTA’s policy guidance; other pertinent FTA documents related to the program, such as FTA’s annual reports to Congress; and our body of work on FTA’s Capital Investment Grant program. In addition, we interviewed FTA officials, representatives of the American Public Transportation Association, and selected project sponsors.
Specifically, we interviewed representatives from 13 project sponsors representing 17 of 52 projects participating in the program as of February 2015 and conducted a content analysis of the interviews with project sponsors to identify and summarize themes that emerged during our discussions. The information obtained from our interviews with project sponsors is not generalizable to all project sponsors but provides insight into project sponsors’ views of the MAP-21 changes thus far. We also visited New York City and Dallas, Texas, to tour the sites of two proposed Core Capacity Improvement projects. The project sponsors we contacted and the locations we visited were selected based on a number of factors, the primary factor being previous project experience in FTA’s Capital Investment Grant program under SAFETEA-LU, which provided a basis to compare changes made by MAP-21. These project sponsors represent 7 New Starts projects, 8 Small Starts projects, and 2 Core Capacity Improvement projects, as well as different rail modes (heavy rail, light rail, commuter rail) and both bus rapid transit and streetcar projects, as shown in table 2. We conducted this performance audit from July 2015 through April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, key contributors to this report included Brandon Haller (Assistant Director), Andrew Burton, Geoffrey Hamilton, Wesley A. Johnson, Delwen Jones, Hannah Laufe, Malika Rice, and Andrew Stavisky.

Public Transportation: Multiple Factors Influence Extent of Transit-Oriented Development. GAO-15-70. Washington, D.C.: November 18, 2014.
Public Transit: Length of Development Process, Cost Estimates, and Ridership Forecasts for Capital-Investment Grant Projects. GAO-14-472. Washington, D.C.: May 30, 2014.

Public Transit: Funding for New Starts and Small Starts Projects, October 2004 through June 2012. GAO-13-40. Washington, D.C.: November 14, 2012.

Bus Rapid Transit: Projects Improve Transit Service and Can Contribute to Economic Development. GAO-12-811. Washington, D.C.: July 25, 2012.

Public Transportation: Requirements for Smaller Capital Projects Generally Seen as Less Burdensome. GAO-11-778. Washington, D.C.: August 2, 2011.

Public Transportation: Use of Contractors Is Generally Enhancing Transit Project Oversight, and FTA is Taking Actions to Address Some Stakeholder Concerns. GAO-10-909. Washington, D.C.: September 14, 2010.

Public Transportation: Federal Project Approval Process Remains a Barrier to Greater Private Sector Role and DOT Could Enhance Efforts to Assist Project Sponsors. GAO-10-19. Washington, D.C.: October 29, 2009.

Public Transportation: Better Data Needed to Assess Length of New Starts Process, and Options Exist to Expedite Project Development. GAO-09-784. Washington, D.C.: August 6, 2009.

Public Transportation: New Starts Program Challenges and Preliminary Observations on Expediting Project Development. GAO-09-763T. Washington, D.C.: June 3, 2009.

Public Transportation: Improvements Are Needed to More Fully Assess Predicted Impacts of New Starts Projects. GAO-08-844. Washington, D.C.: July 25, 2008.

Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007.

Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. Washington, D.C.: August 30, 2006.

Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006.

Public Transportation: Opportunities Exist to Improve the Communication and Transparency of Changes Made to the New Starts Program. GAO-05-674. Washington, D.C.: June 28, 2005.

Mass Transit: FTA Needs to Better Define and Assess Impact of Certain Policies on New Starts Program. GAO-04-748. Washington, D.C.: June 25, 2004.

Mass Transit: FTA Needs to Provide Clear Information and Additional Guidance on the New Starts Ratings Process. GAO-03-701. Washington, D.C.: June 23, 2003.

Mass Transit: FTA’s New Starts Commitments for Fiscal Year 2003. GAO-02-603. Washington, D.C.: April 30, 2002.

Mass Transit: FTA Could Relieve New Starts Program Funding Constraints. GAO-01-987. Washington, D.C.: August 15, 2001.

Mass Transit: Implementation of FTA’s New Starts Evaluation Process and FY 2001 Funding Proposals. GAO/RCED-00-149. Washington, D.C.: April 28, 2000.
FTA's Capital Investment Grant program provides roughly $2 billion in appropriated funds each year to help states, cities, and localities plan and build new fixed-guideway transit systems or extensions to existing systems. Under this program, project sponsors—usually local transit agencies—have typically applied for their projects to receive federal funding as either a New Starts or a Small Starts project. In 2012, MAP-21 created a new category of eligible projects called Core Capacity Improvement projects and also revised the process proposed projects must follow to be eligible for and receive federal funding. MAP-21 included a provision for GAO to biennially review FTA's and the Department of Transportation's implementation of this program. This report discusses: (1) FTA's progress in implementing changes to the program required by MAP-21 and (2) how selected project sponsors view the MAP-21 changes and FTA's implementation of those changes. To conduct this review, GAO reviewed the relevant provisions of pertinent laws and FTA's policy guidance, interviewed FTA officials and representatives from 13 project sponsors representing 17 of 52 projects participating in the program, and visited the sites of two Core Capacity Improvement projects. Project sponsors and locations visited were selected based on previous experience in the program, among other things. In written comments, DOT emphasized its commitment to improve and streamline the Capital Investment Grant program. The Federal Transit Administration (FTA) has implemented most of the key changes the Moving Ahead for Progress in the 21st Century Act (MAP-21) made to the Capital Investment Grant program, which helps fund investments in new public transit systems or extensions to existing systems. Projects funded under this program fall into different categories, depending on the total project's cost and the amount of federal funding requested.
For example, under MAP-21, New Starts projects had capital costs that were $250 million or greater while Small Starts projects had capital costs that were less than $250 million. As required by MAP-21, FTA has issued guidance outlining the new review and evaluation process for New Starts and Small Starts projects—as well as Core Capacity Improvement projects, a new category of eligible projects that MAP-21 created and that are designed to increase the capacity of an existing system. In addition, FTA has informed project sponsors how they can pre-qualify for a satisfactory rating based on the characteristics of their projects. FTA officials said they plan to address the remaining requirements, such as completing the rulemaking to fully implement the MAP-21 provisions, over the next 2 years. The 13 project sponsors GAO contacted—representing 7 New Starts projects, 8 Small Starts projects, and 2 Core Capacity Improvement projects—were generally supportive of the changes MAP-21 made to the Capital Investment Grant program, as well as FTA's implementation of the changes. Representatives from 9 of 13 project sponsors indicated that the MAP-21 changes streamlined the Capital Investment Grant program's project development process, such as by reducing the number of time-consuming FTA reviews. Of the three project sponsors that indicated they had an opinion on the addition of Core Capacity Improvement projects as a new category of projects, all were supportive, with representatives from one noting, for example, that this change gives them options to increase the capacity of existing systems as ridership increases. Such projects could include lengthening rail platforms to accommodate additional train cars or to reduce platform overcrowding.
Also, representatives from 11 of 13 project sponsors supported FTA's implementation efforts, noting, for example, that FTA has taken steps to listen to and incorporate many of the recommendations offered by project sponsors in implementing the MAP-21 changes. While project sponsors raised some concerns about the potential impact certain changes—such as limiting the amount of time New Starts and Core Capacity Improvement projects can spend in Project Development—might have on project sponsors in the future, they also acknowledged that not all the MAP-21 changes have been implemented yet. While participation in the program has increased substantially—by 70 percent—since the enactment of MAP-21, both project sponsors and FTA officials pointed out that it is too early to tell what impact the changes will ultimately have on the Capital Investment Grant program—including if the changes will help expedite projects through the program.
Generally, employers can provide health coverage in two ways. They can purchase coverage from health insurers, such as local Blue Cross and Blue Shield plans; other private insurance carriers; or managed care plans, such as health maintenance organizations. Alternatively, they can self-fund their plans—that is, they assume the risk associated with paying directly for at least some of their employees’ health care costs—and typically contract with an insurer or other company to administer benefits and process claims. When small employers offer health coverage, most tend to purchase insurance rather than self-fund. Only about 12 percent of the establishments at firms with fewer than 50 employees that offered coverage in 2001 had a self-funded plan, compared with about 58 percent of the establishments at firms with 50 or more employees. Moreover, about 76 percent of the establishments at the largest firms—those with 500 or more employees—offered at least one self-funded plan. States regulate the insurance products that many employers purchase. Each state’s insurance department enforces the state’s insurance statutes and rules. Among the functions state insurance departments typically perform are licensing insurance companies, managed care plans, and agents who sell these products; regulating insurers’ financial operations to ensure that funds are adequate to pay policyholders’ claims; reviewing premium rates; reviewing and approving policies and marketing materials to ensure that they are not vague or misleading; and implementing consumer protections such as those relating to appeals of denied claims. The federal government regulates most private employer-sponsored pension and welfare benefit plans (including health benefit plans) as required by the Employee Retirement Income Security Act of 1974 (ERISA). 
These plans include those provided by an employer, an employee organization (such as a union), or multiple employers through a multiple employer welfare arrangement (MEWA). DOL is primarily responsible for administering Title I of ERISA. Among other requirements, ERISA establishes plan reporting and disclosure requirements and sets fiduciary standards for the persons who manage and administer the plans. These requirements generally apply to all ERISA-covered employer-sponsored health plans, but certain requirements vary depending on the size of the employer or whether the coverage is through an insurance policy or a self-funded plan. In addition, ERISA generally preempts states from directly regulating employer-sponsored health plans (while maintaining states’ ability to regulate insurers and insurance policies). Therefore, under ERISA, self-funded employer group health plans generally are not subject to the state oversight that applies to insurance companies and health insurance policies. Prior to 1983, a number of states attempted to subject MEWAs to state insurance law requirements, but MEWA sponsors often claimed ERISA-plan status and federal preemption. A 1983 amendment to ERISA made it clear that health and welfare benefits provided through MEWAs were subject to both federal and state oversight. The federal and state governments now coordinate the regulation of MEWAs, with states having the primary responsibility to regulate the fiscal soundness of MEWAs and to license their operators, and with DOL enforcing ERISA’s requirements. DOL and the states identified 144 unauthorized entities from 2000 through 2002. Many of these entities marketed their products in more than one state, and some operated under more than one name or with more than one affiliated entity. These entities operated most often in southern states. The number of such entities newly identified each year grew from 31 in 2000 to 60 in 2002. 
About 80 percent of these entities characterized themselves as one of four arrangements or some combination of the four. In addition, some states reported that discount plans misrepresented their products as health insurance. DOL and 42 states identified 144 unique unauthorized entities from 2000 through 2002. Many of these entities marketed their products in more than one state, and some operated under more than one name or with more than one affiliated entity. This likely represents the minimum number of unauthorized entities operating from 2000 through 2002 because some states did not report on entities that they were still investigating. Of the 144 unique entities, the states identified 77 entities that DOL did not, DOL identified 40 that the states did not, and both the states and DOL identified another 27. Unauthorized entities identified by DOL and the states from 2000 through 2002 operated in every state, ranging from 5 entities in Delaware and Vermont to 31 in Texas. (See fig. 1.) Some of the unauthorized entities operated in more than one state, so the total number of entities identified by DOL and the states exceeds the total of 144 unique entities. Unauthorized entities were concentrated in certain states and regions. Seven states had 25 or more entities that operated during this period; 5 of these states were located in the South. In addition to the 31 entities in Texas, there were 30 in Florida, 29 each in Illinois and North Carolina, 28 in New Jersey, 27 in Alabama, and 25 in Georgia. The number of unauthorized entities newly identified by DOL and the states each year almost doubled from 2000 through 2002. The number increased significantly from 2000 to 2001, and it continued to increase from 2001 to 2002. (See fig. 2.) Several DOL officials, state officials, and experts pointed to rapidly increasing health care costs and the weak economy as two factors contributing to the recent growth in the number of identified unauthorized entities. 
They suggested that the pressure of rising premiums and decreasing revenues may have increased employers’ demand for more affordable employee health benefits, particularly among small employers, and thereby created an environment where unauthorized entities could spread. From 2000 through 2002, firms with fewer than 50 workers experienced an average annual increase of about 13.3 percent in the cost of their workers’ health benefits, whereas firms with 50 or more workers experienced an average annual increase of 10.9 percent. The United States economy also showed signs of weakness in the third quarter of 2000, when it grew only 0.6 percent, and it suffered a recession in 2001. The economy’s subsequent recovery in 2002 was marked by moderate economic growth but rising unemployment. Negative or weak growth in employers’ revenues, compounded by rising premiums, particularly for small employers, created an attractive environment for unauthorized entities, as small employers and others sought cheaper employee health benefit options. About 80 percent of the unauthorized entities identified by DOL and the states characterized themselves as associations, professional employer organizations, unions, single-employer ERISA plans, or some combination of these arrangements. The operators of these entities often characterized the entities as one of these common types to give the appearance of being exempt from state regulation, but states often found that they actually were subject to state regulation as insurance arrangements or MEWAs. Under ERISA, both states and the federal government regulate MEWAs, with states focusing on regulating the fiscal soundness of MEWAs and licensing their operators and DOL enforcing ERISA’s requirements. 
Specifically, as shown in table 1, 27 percent of the entities identified by the states and DOL characterized themselves as associations in which employers or individuals bought health benefits through existing associations, or through newly created associations established by the unauthorized entities. For example, Employers Mutual, LLC, an entity that operated in 2001, sold coverage through an existing association. Employers Mutual also created 16 associations as vehicles for selling its products. (See app. II for a more detailed discussion of Employers Mutual, LLC.) In addition, 26 percent of the entities identified were professional employer organizations, also known as employee leasing firms, which contracted with employers to administer employee benefits and perform other administrative services for contract employees. Another 9 percent of the entities identified claimed to be union arrangements that would be exempt from state regulation. However, they lacked legitimate collective bargaining agreements and were therefore subject to state oversight. Eight percent of the entities identified characterized themselves as single-employer ERISA plans and claimed to be administering a self-funded plan for a single employer. Such plans, when administered with funds from one employer for the benefit of that employer’s workers, are exempt from state insurance regulation under ERISA. However, assets from several employers were commingled in these entities, making them MEWAs subject to state regulation. Some discount plans, in which the purchaser receives a discount from the full cost of certain health care services from participating providers, were misrepresented as insurance. Unlike legitimate insurance, discount plans do not assume any financial risk, nor do they pay any health care claims. Instead, for a fee they provide a list of health care providers that have agreed to provide their services at a discounted rate to participants. 
In response to our survey, 40 states reported that they were aware that discount plans were marketed in their state, and 14 states reported that some discount plans were inappropriately marketed as health insurance products in some manner. Among these 14 states, 8 reported that the inappropriately marketed discount plans targeted small employers. While discount plans are not problematic as long as purchasers clearly understand the plans, these 14 states reported that some discount plans were marketed as health insurance with terms or phrases such as “medical plan,” “health benefits,” or “pre-existing conditions immediately accepted.” (See app. III for more information on discount plans.) At least 15,000 employers, including many small employers, purchased coverage from unauthorized entities, affecting more than 200,000 policyholders from 2000 through 2002. The states reported that more than half of the entities they identified frequently targeted their health benefits to small employers. At the time of our 2003 survey, DOL and states reported that the 144 entities had not paid at least $252 million in medical claims, and only about 21 percent of these claims, about $52 million, had been recovered on behalf of those covered by these entities. Ten of the 144 entities covered the majority of employers and policyholders and accounted for almost half of unpaid claims. Based on our survey of states and information from DOL, we estimate that unauthorized entities sold coverage to at least 15,158 employers. The states reported that more than half of the entities they identified targeted their health benefits to small employers. Furthermore, unauthorized entities covered at least 201,949 policyholders across the United States from 2000 through 2002. The number of individuals covered by unauthorized entities was even greater than the number of policyholders because a policyholder could be an employer or an individual with dependents. 
Therefore, any one policyholder could represent more than one individual. At the time of our 2003 survey, DOL and state officials reported that unauthorized entities had not paid at least $252 million in medical claims. This represents the minimum amount of unpaid claims associated with these entities identified from 2000 through 2002 because in some cases DOL and the states did not have complete information on unpaid claims for the entities they reported to us. Federal and state governments reported that about 21 percent of unpaid claims had been recovered from entities identified from 2000 through 2002—$52 million of $252 million. These recoveries could include assets seized from unauthorized entities that had been shut down or whose assets had been frozen from other uses. Licensed insurance agents have also settled unpaid claims voluntarily or through state or court action. However, the amount of unpaid claims recovered could grow over time as ongoing investigations are resolved. Investigations of unauthorized entities are complex and require significant resources and time because operators often maintain poor records and hide assets, sometimes offshore. DOL and state officials explained that by the time they become aware of an unauthorized entity—often when medical claims are not being paid—the entity is sometimes on the verge of bankruptcy and may have few remaining assets with which to pay claims. Thus, while some additional assets may be recovered from the entities identified from 2000 through 2002, it is likely that many of the assets will remain unrecovered. Ten large entities identified by DOL and the states covered a majority of employers and policyholders and accounted for nearly half of unpaid claims. Of the 144 unique entities, 10 covered about 64 percent of the employers and about 56 percent of the policyholders. They also accounted for 46 percent of the unpaid claims. (See table 2.) 
Some of these large entities grew rapidly and existed for short periods. For example, from January through October 2001, Employers Mutual enrolled over 22,000 policyholders; covered about 1,100 employers; and amassed over $24 million in unpaid claims, none of which had been paid. States and DOL took generally similar actions to identify unauthorized entities and prevent them from operating, but they followed different approaches to stop these entities’ activities. States and DOL often relied on the same method to learn of the entities’ operations—through consumer complaints. In addition, NAIC played an important role in the identification process by helping to coordinate and distribute state and federal information on these entities. To stop the operations of these entities, state agencies issued cease and desist orders, while DOL took action through the federal courts. Both state and DOL officials said that increased public awareness was important to help prevent such entities from continuing to operate. States and DOL identified unauthorized entities through similar methods. While states reported that they most often became aware of the entities’ operations through consumer complaints, they also received complaints about these entities from several other sources, such as agents, employers, and providers. DOL also often learned of these entities through consumer complaints. In addition to information obtained through NAIC, state insurance departments and EBSA regional offices relied on each other to learn of the entities’ activities. States identified entities operating within their borders through several different methods, including complaints from consumers, information coordinated by NAIC, information from DOL, and a combination of these and other methods. States most often identified unauthorized entities operating within their borders through consumer complaints. (See table 3.) 
In addition to consumer complaints, states relied on other sources to help identify the unauthorized entities, with NAIC being the second most frequent source of information. In December 2000, NAIC started to share information from state and federal investigators on these entities with all states and DOL. In about 71 percent of the 98 cases in which states reported using the NAIC information to identify unauthorized entities, they also reported using information from one or more other sources—most often consumer complaints. In addition, DOL and insurance agents, either alone or in combination with other identification methods, helped states identify the entities. For example, DOL submitted quarterly reports to NAIC that identified all open civil investigations, the individuals being investigated, and the EBSA office conducting the investigations. NAIC shared this and other information from EBSA regional offices with state investigators throughout the country. Federal investigators also often identified unauthorized entities through consumer complaints. According to EBSA officials, consumers call DOL’s customer service lines when they have complaints or questions and speak with benefits advisers about the employer-based health benefit plans in which they are enrolled. Regional directors in EBSA’s Atlanta, Dallas, and San Francisco offices said they open investigations when benefits advisers cannot resolve the complaints. Federal investigators also relied on states to help identify unauthorized entities. An EBSA headquarters official told us that states usually alerted federal investigators to the entities operating within their regions. The directors of the three EBSA regional offices we interviewed said they had received referrals from state insurance department officials within their regions. States generally issued cease and desist orders to stop the activities of unauthorized entities. 
In contrast, DOL obtained injunctive relief through the federal courts by obtaining temporary restraining orders (TRO) or preliminary or permanent injunctions to stop unauthorized entities’ activities. DOL often relied on states to stop unauthorized entities through cease and desist orders while it conducted investigations, usually in multiple states, to obtain the evidence needed to stop these entities’ activities nationwide through the courts. After identifying the unauthorized entities, the primary mechanism states used to stop them from continuing to operate was the issuance of cease and desist orders. Generally, these cease and desist orders told the operators of the entities, and affiliated parties, to stop marketing and selling health insurance in that state and in some cases explicitly established their continuing responsibility for the payment of claims and other obligations previously incurred. About 71 percent of the states (30 of 42 states) that reported unauthorized entities operating within their borders from 2000 through 2002 issued at least one cease and desist order to stop an entity’s activities during that time. The number of cease and desist orders issued by each of the 30 states ranged from 1 to 11, averaging about 4 per state. Alabama, Illinois, and Texas, three states in which more than 25 unauthorized entities operated, reported issuing the most cease and desist orders. A cease and desist order applies to activities only within the state that issues the order. Therefore, in several cases, more than one state issued a cease and desist order against the same entity. For example, 14 states reported that they each issued a cease and desist order to stop Employers Mutual’s operations within their borders. States issued a total of 108 cease and desist orders that affected 41 of the 144 unique entities nationwide. About 58 percent of policyholders and nearly half of unpaid claims were associated with these 41 entities. 
State insurance departments generally had the authority to issue cease and desist orders. The insurance department officials we interviewed in Colorado, Florida, Georgia, and Texas said that the insurance commissioner or holder of an equivalent position could issue a cease and desist order when there was enough evidence to support the need. These four states told us that from 2000 through 2002 they issued 25 cease and desist orders against about 58 percent of the entities they identified. According to these insurance department officials, the time needed to obtain a cease and desist order varied depending on such factors as the complexity of the entity to be stopped, a state’s resources for conducting investigations, and whether others had already conducted investigations. States typically shared information on the cease and desist orders they issued with NAIC. NAIC has developed a system to capture information on various state insurance regulatory actions, including cease and desist orders issued. States have access to the information reported through this system. States took other actions against the entities, sometimes in conjunction with issuing cease and desist orders. For example, in 48 instances states responding to our survey reported that they took actions against or sought relief from the agents who sold the entities’ products, including fining them, revoking their licenses, or ordering them to pay outstanding claims. States also reported that they took actions against the entity operators in 25 instances and filed cases in court in 14 instances. DOL can take enforcement action to stop an unauthorized entity’s activities through the federal courts—that is, by seeking injunctive relief and, in some cases, pursuing civil and criminal penalties. An injunction is a court order requiring a party to do or refrain from doing specified acts. 
Injunctive relief sought by DOL against unauthorized entities includes TROs, which may be issued without notice to the affected party and are effective for up to 10 days; preliminary injunctions, which may be issued only with notice to the affected party and the opportunity for a hearing; and permanent injunctions, which are granted after a final determination of the facts. DOL’s enforcement actions apply to all states affected by the entity. To obtain a TRO, DOL must offer sufficient evidence to support its claim that an ERISA violation has occurred and that the government will likely prevail on the merits of the case. Documenting that a fiduciary breach took place can be difficult, time-consuming, and labor-intensive because DOL investigators often must work with poor or nonexistent records, uncooperative parties, and multiple trusts and third-party administrators. As of December 2003, DOL had obtained TROs against three entities for which investigations were opened from 2000 through 2002. In two of these cases, DOL also obtained preliminary injunctions and in one case a permanent injunction. (See table 4.) Each of these actions affected people in at least 41 states. These three entities combined affected an estimated 25,000 policyholders and accounted for about $39 million in unpaid claims. DOL and state officials told us that they coordinate their investigations and other efforts. For example, one EBSA regional director said his office has met with the states in the region and, when needed, provides information to help states obtain cease and desist orders to stop unauthorized entities. Furthermore, DOL officials said that they rely on the states to obtain cease and desist orders to stop these entities’ activities in individual states while conducting the federal investigations. 
For example, DOL and states coordinated and cooperated extensively during the investigation of Employers Mutual and provided mutual support in obtaining cease and desist orders and the TRO. Several states issued cease and desist orders against this entity before DOL obtained the TRO. In addition, DOL officials said DOL does not take enforcement action in some cases where (1) states have successfully issued cease and desist orders to protect consumers because no more action is needed to prevent additional harm, (2) the entity was expected to pay claims, or (3) the entity ceased operations. From 2000 through 2002, EBSA opened investigations of 69 entities. These investigations involved 13 entities in 2000, 31 in 2001, and 25 in 2002. Overall, EBSA reported 67 civil and 17 criminal investigations opened from 2000 through 2002 involving the 69 entities. Civil investigations of these entities focused on ERISA violations, particularly breaches of ERISA’s fiduciary requirements, while criminal investigations focused on such crimes as theft and embezzlement. In some cases, unauthorized entities can face simultaneous civil and criminal investigations. As of August 2003, EBSA was continuing to investigate 51 of these entities. As a result, further federal actions remain possible. For example, in addition to the three investigations that had yielded TROs or injunctions, EBSA had referred four other case investigations to the DOL Solicitor’s Office for potential enforcement action and obtained subpoenas in five cases. To help prevent unauthorized entities from continuing to operate, officials in the insurance departments we interviewed in four states—Colorado, Florida, Georgia, and Texas—took various actions to alert the public and to inform insurance agents about these entities. NAIC developed model consumer and agent alerts to help states increase public awareness. DOL primarily targeted its prevention efforts to employer groups and small employers. 
The states and DOL emphasized the need for consumers and employers to check the legitimacy of health insurers before purchasing coverage, thus helping to prevent unauthorized entities from continuing to operate. Insurance department officials we interviewed in four states took various actions to prevent unauthorized entities from continuing to operate. Each of these states issued news releases to alert the public about these entities in general and to publicize the enforcement actions they took against specific entities. To help states increase public awareness, NAIC developed a model consumer alert in the fall of 2001, which it distributed to all the states and has available on its Web site. (See app. IV.) The four states’ insurance departments also maintained Web sites that allow the public to search for those companies authorized to conduct insurance business within their borders. These states have also taken other actions to increase public awareness. For example, in April 2002, Florida released a public service announcement to television news markets throughout the state to warn about these entities. In addition, in the spring of 2003, Florida placed billboards throughout the state to warn the public through its “Verify Before You Buy” campaign. (See fig. 3.) In addition to increasing public awareness, the four state insurance departments alerted insurance agents about unauthorized entities. Using bulletins, newsletters, and other methods, these states warned agents about these entities, the implications associated with selling their products, and the need to verify the legitimacy of all entities. Georgia, for example, sent a warning to insurance agents in May 2002, which highlighted the characteristics of these entities, reminded agents that they could lose their licenses and be held liable for paying claims when the entities do not pay, and noted that the state insurance department Web site contained a list of all licensed entities. 
NAIC also developed a model agent alert to help agents identify these entities. A national association representing agents and brokers and many state insurance departments distributed this alert. The Web sites for the four states’ insurance departments contained information on the enforcement actions they took against agents. The Texas insurance department’s Web site, for example, provided the disciplinary actions that the state took as of August 2003 against individuals who acted as agents for unauthorized insurers. These agents were fined, ordered to make restitution, lost their licenses, or faced a combination of some or all of these actions. DOL primarily focused its efforts to prevent unauthorized entities from continuing to operate on employer groups, small employers, and the states. To help increase public awareness about these entities, on August 6, 2002, the Secretary of Labor notified over 70 business leaders and associations, including the U.S. Chamber of Commerce and the National Federation of Independent Business, about insurance tips that the department had developed and asked them to distribute the tips to small employers. Consistent with the advice states provided, among other things, the tips advised small employers to verify with a state insurance department whether any unfamiliar companies or agents were licensed to sell health benefits coverage. (See app. V.) Also, the three EBSA regional offices we reviewed had initiated various activities within the states in their regions. For example, EBSA’s Atlanta regional office sponsored conferences that representatives from 10 states and NAIC attended. Federal and state representatives discussed ERISA-related issues and their investigations at these conferences. Furthermore, since 2000, DOL initiated several technical assistance efforts to help states and others better understand ERISA-related issues. These efforts are intended to help prevent unauthorized entities from avoiding state regulation. 
We provided a draft of this report to DOL, NAIC, and the four state insurance departments (Colorado, Florida, Georgia, and Texas) whose officials we interviewed. DOL, NAIC, Florida, and Texas provided written comments on the draft. Colorado and Georgia did not provide comments on the draft. DOL identified initiatives it has taken to improve coordination with states and law enforcement agencies and highlighted its criminal enforcement actions. We modified the report to include additional examples of this coordination, such as the Atlanta EBSA regional office’s meetings with states and coordination on investigation and enforcement actions. We recognize other activities are underway, such as making available electronic information that MEWAs are required to report to EBSA and sharing information with law enforcement agencies, but it was not the purpose of this report to identify the full range of DOL activities related to MEWAs and coordination with states on employer benefit and insurance issues. Although DOL also provided additional information on its criminal enforcement actions, we did not include these data in the report because these enforcement actions did not all relate to the investigations of the 69 entities DOL opened from 2000 through 2002 that were the focus of our analysis. DOL’s comments are reprinted in appendix VI. NAIC’s written comments provided additional information on efforts it has taken to increase awareness of unauthorized insurance and acknowledged the difficulties associated with determining the number of unique unauthorized entities. 
NAIC noted that it began a national media campaign on unauthorized insurance that will run from January through June 2004 and, as part of the campaign, it developed a new brochure for consumers entitled “Make Sure Before You Insure.” In addition, NAIC is updating its ERISA Handbook, which contains basic information about ERISA and its interaction with state law, to highlight different types of unauthorized entities and to provide guidance to state regulators on recognizing and shutting down these entities. Because NAIC recently initiated its media campaign and its scope was continuing to develop at the time we completed our work, we did not incorporate this information in the body of the report. In addition to the report’s description of consumer and agent alerts that NAIC had distributed, NAIC also noted that in June 2003 it distributed a model regulatory alert to all its members that emphasized the need for third-party administrators and others to ensure that they do not become unwitting supporters of these entities. NAIC also suggested that the report include a more comprehensive list of state insurance regulation and laws. While the draft report included key functions that state insurance departments perform in regulating health insurance, it was beyond the scope of this report to comprehensively address the extent and variety of state insurance requirements affecting health insurance. We did, however, add a reference in the final report to consumer protection laws that states are responsible for enforcing. Finally, NAIC commented that many entities may be operating under multiple names, which makes it difficult to precisely count the number of such entities. As discussed in the draft report, our estimates of the number of unique unauthorized entities attempted to account for this complexity by consolidating information from multiple states or DOL where there was information to link entities. 
We added additional information to the report’s methodology to highlight the steps we took to determine the number of these entities. Written comments from the Florida Department of Financial Services noted that there has been cooperation among the federal and state governments in addressing the problems associated with unauthorized entities, stating that no state or federal agency effort could succeed without regulators sharing information. In addition, Florida stressed how unauthorized entities rely on associated entities and persons to succeed and proliferate. For example, unauthorized entities used licensed and unlicensed reinsurers, third-party administrators, and agents to help defraud the public. Florida indicated that these structures made it difficult for states to detect the entities. In its written comments, the Texas Department of Insurance suggested that we further elaborate on legal actions states have taken against unauthorized entities. In addition to issuing cease and desist orders, Texas stressed that states have (1) used restraining orders and injunctions, similar to DOL, to stop unauthorized entities, (2) assessed penalties against operators of these entities, and (3) taken actions against agents who sold unauthorized products. For example, in 2002, Texas placed a major entity into receivership, seized its assets, and initiated actions to recover more assets. In 2003, Texas finalized penalties against the operators of Employers Mutual. In addition, Texas explained that states have devoted significant resources to penalizing agents who have accepted commissions from unauthorized entities. In addition to actions we reported, the Texas Department of Insurance indicated that it has taken other steps to increase consumer awareness of these entities. 
For example, Texas said that it had issued a bulletin to all health insurance companies and claims administrators warning about unauthorized entities and provided public information to various news organizations, assisting them with their reporting on these entities. Texas also highlighted the criminal investigations the state has conducted and wrote that its insurance fraud division has referred cases to DOL and others. While the report includes illustrative examples of key legal actions, including actions against agents involved with unauthorized entities, and public awareness efforts taken by the states, we primarily focused on the more common actions taken by states as reported in response to our survey. DOL and the other reviewers also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies to the Secretary of Labor, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-7118 or John E. Dicken at (202) 512-7043 if you have additional questions. Joseph A. Petko, Matthew L. Puglisi, Rashmi Agarwal, George Bogart, and Paul Desaulniers were major contributors to this report. To identify the number of unique unauthorized entities nationwide from 2000 through 2002 and to obtain information, such as the number of employers covered and unpaid claims, pertaining to each of these entities, we obtained and analyzed data from state and federal sources. 
We obtained state-level data through a survey we sent to officials located in insurance departments or equivalent offices in all 50 states and the District of Columbia and federal-level data from the Department of Labor's (DOL) Employee Benefits Security Administration (EBSA). We also obtained information from the states on a related type of problematic arrangement—discount plans that sometimes are misrepresented as health insurance. To obtain data on unauthorized entities and other types of problematic plans in each state, we e-mailed a survey to individuals identified by the National Association of Insurance Commissioners (NAIC) as each state insurance department's multiple employer welfare arrangement (MEWA) contact. A NAIC official indicated that these individuals would be the most knowledgeable in the states on the issue of unauthorized entities. All the states responded to our survey. Part I of the survey asked for selected data elements on the entities. We asked the states to use the following definition: "an unauthorized health benefits plan is defined as an entity that sold health benefits, collected premiums, and did not pay or was likely not to pay some or all covered claims. These entities are also known as health insurance scams." First, we asked officials in each state to tell us how many of these entities covering individuals in the state they had identified during each of 3 calendar years—2000, 2001, and 2002. For each entity the state identified during the 3-year period, we requested information such as the (1) number of employers covered, (2) number of policyholders covered, (3) total amount of unpaid claims in the state, and (4) amount of unpaid claims recovered. We also obtained information on the type of the entity, how the state identified the entity, and what actions the state took regarding the entity. 
Part II of the survey collected information on other types of problematic plans—including discount plans—and whether these other types of plans targeted small employers. To determine the number of entities states identified in each calendar year, we relied on states to determine at what stage of their investigative process they would deem an entity to be unauthorized. Therefore, states could have reported both those entities they determined were unauthorized after completing an investigation and against which they took formal action and those entities still being investigated and for which no formal action had been taken. To obtain federal-level data on unauthorized entities, we asked EBSA to provide data from the civil and criminal case investigations it opened from 2000 through 2002 involving these entities. To identify which of its civil and criminal investigations of employer-based health benefits plans fell within the scope of our research, we asked EBSA to use a definition of unauthorized entities similar to the one included on our state survey. For each of the civil and criminal investigations of these entities EBSA opened during the 3-year period, we asked EBSA to provide the same type of data about unauthorized entities that we requested on the survey we sent to all the states. In addition, we asked EBSA to identify all the states that were affected by each entity it was investigating—information that states could not easily provide. Furthermore, where EBSA was conducting both civil and criminal investigations of an entity, we asked it to report that entity only one time. Because EBSA and states provided the names of entities that were still under investigation at the time of our survey, we agreed not to report the names of any of these entities unless the investigation had already been made public. 
Therefore, we report only the names of three unauthorized entities for which DOL had issued media releases when it obtained temporary restraining orders (TRO) or injunctions to stop their activities. To determine the number of unauthorized entities that operated from 2000 through 2002, we analyzed information on the entities identified by the states and investigated by EBSA. Specifically, we analyzed the names of 288 entities that states identified and 69 entities that EBSA investigated. In many cases, two or more states or EBSA reported the name of the same entity. We compared the entity names and, using several data sources—for example, copies of the cease and desist orders states provided to NAIC, interviews of state officials, survey responses that included multiple names for the same entity, and media reports—and our judgment regarding similar names, consolidated them into a count of unique entities. Based on this analysis, we consolidated the 357 entity names identified or investigated by the states and EBSA to 144 unique unauthorized entities nationwide, including 77 entities identified only by the states; 40 entities investigated only by EBSA; and 27 entities identified by one or more states and also investigated by EBSA. To identify the total number of employers covered, policyholders covered, amount of unpaid claims, and recoveries on the claims for the 144 unique unauthorized entities identified nationwide from 2000 through 2002, we consolidated the data provided by the states and EBSA. To develop unduplicated counts for each of the data elements, we developed a data protocol. We matched the names of the states that reported each of these 27 entities to the names of the states in which EBSA reported that these entities operated. 
Because the EBSA data generally were more consistent and comprehensive—particularly since not all states reported on some of the multistate entities reported by EBSA—we used the EBSA-reported data rather than the state-reported data for each element. However, if a state reported an entity to us and EBSA did not report that it was aware that the entity operated in that state, we included that state’s data. Also, where EBSA data were missing for a data element, we included state-reported data in our totals when provided. To identify the year that each of the 144 unauthorized entities was identified, we used the earliest year either EBSA or a state reported for when each of the 144 entities was identified. To determine how many entities operated in each state, we combined the EBSA data and the data reported by the states. Because some of the entities EBSA investigated were nationwide or were in multiple states, the number of entities we report as operating in each state is greater than the number of entities states directly identified on our survey. For example, while nine states reported to us that they did not identify any entities from 2000 through 2002, EBSA indicated that several of the entities it was investigating operated in these states. The data we report for each of the elements—the number of employers covered, policyholders covered, amount of unpaid claims, and recoveries on the claims—may be underestimated. EBSA and some states reported that some of the data were unknown for each of these elements. In addition, while the states provided most of the requested data, they did not provide some of the data for some entities. Furthermore, in several cases, EBSA and the states provided a range in response to our request for data. When they did this, we used the lowest number in the range. For example, whereas EBSA reported unpaid claims for one of these entities from $13 million to $20 million, we reported unpaid claims as $13 million. 
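The consolidation steps described above can be sketched as a small merge routine. The example below is an illustrative simplification, with all entity names, aliases, and dollar figures hypothetical; it preserves three rules from the protocol: names linked by supporting evidence collapse to one canonical entity, EBSA-reported figures are preferred over state-reported figures, and a reported range is reduced to its lowest number.

```python
# Illustrative sketch of the consolidation protocol; entity names,
# aliases, and dollar figures are hypothetical. The actual analysis
# also retained state data for states EBSA did not know an entity
# operated in, a refinement omitted here for brevity.

def low_end(value):
    """Reduce a reported range such as (13_000_000, 20_000_000)
    to its lowest number, per the report's data protocol."""
    return min(value) if isinstance(value, tuple) else value

def consolidate(state_reports, ebsa_reports, aliases):
    """Merge state- and EBSA-reported records into unique entities.

    aliases maps each reported name to a canonical entity name.
    State figures for the same entity are summed across states;
    EBSA figures, when reported, replace the state totals.
    """
    merged = {}
    for name, fields in state_reports:
        key = aliases.get(name, name)
        entity = merged.setdefault(key, {})
        for field, value in fields.items():
            if value is not None:  # skip data states reported as unknown
                entity[field] = entity.get(field, 0) + low_end(value)
    for name, fields in ebsa_reports:
        key = aliases.get(name, name)
        entity = merged.setdefault(key, {})
        for field, value in fields.items():
            if value is not None:  # prefer EBSA data when reported
                entity[field] = low_end(value)
    return merged

# Two state reports and one EBSA report that all refer to the same
# (hypothetical) entity collapse to a single record: the EBSA figure
# is used, and the state-reported range is reduced to its low end.
states = [("Acme Health Plan", {"unpaid_claims": 2_000_000}),
          ("ACME Health", {"unpaid_claims": (13_000_000, 20_000_000)})]
ebsa = [("Acme Health Plan", {"unpaid_claims": 13_000_000})]
aliases = {"ACME Health": "Acme Health Plan"}
print(consolidate(states, ebsa, aliases))
```

In the same spirit, the analysis reduced 357 reported names to 144 unique entities by applying alias links drawn from cease and desist orders, interviews, survey responses, and media reports.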
In some cases, EBSA and the states reported that the data they provided were estimated. Employers Mutual, LLC was one of the most widespread unauthorized entities operating in recent years, covering a significant number of employers and policyholders and accounting for millions of dollars in unpaid claims during a 10-month period in 2001. According to court documents and DOL, four of the entity’s principals were associated with the collection of approximately $16 million in premiums from over 22,000 people and with the entity’s nonpayment of more than $24 million in medical claims. DOL and states took actions to terminate Employers Mutual’s operations and an independent fiduciary was appointed by a U.S. district court in December 2001 to administer the entity and, if necessary, implement its orderly termination. In September 2003, the court ordered the principals to pay $7.3 million for their breach of fiduciary responsibilities. Employers Mutual was established in Nevada in July 2000 and began operations in January 2001. The name Employers Mutual is similar to Employers Mutual Casualty Company, a long-established Iowa-based insurance company marketed throughout the United States, which had no affiliation with Employers Mutual. By February 2001, Employers Mutual had established 16 associations covering a wide array of industries and professions, such as the American Coalition of Consumers and the National Association of Transportation Workers, that created employee health benefit plans for association members to join. Employers Mutual was responsible for managing the plans offered through these 16 associations, which claimed to be fully funded and were created to cover certain medical expenses of enrolled participants. 
Employers Mutual ultimately claimed that its association structure did not require it to register or to seek licensure from states, and that this structure also exempted the entity from DOL regulation under the Employee Retirement Income Security Act of 1974 (ERISA). Employers Mutual's principals contracted with legitimate firms to market the plans and process the claims, and with their own companies purportedly to provide health care and investment services. Licensed insurance agents marketed the 16 plans nationwide. Employers Mutual hired a firm to process the claims from members of its associations' employee health benefits plans and to handle other administrative tasks from January 2001 until the firm terminated its services in October 2001 for, among other reasons, nonpayment of a bill. According to court filings, Employers Mutual also contracted with four firms, purportedly health care provider networks and investment firms, established and owned by Employers Mutual principals. A district court later cited evidence that the provider networks were paid despite the fact that one of them had no employees and provided no services to plan members. Furthermore, the district court noted that no contracts between the investment firms and Employers Mutual were entered into evidence and no information was introduced concerning the services these firms performed for this entity. From the time Employers Mutual commenced operations in January 2001 through October 2001, more than 22,000 policyholders in all 50 states and the District of Columbia paid approximately $16.1 million in premiums. According to court documents and the independent fiduciary appointed to administer Employers Mutual, one of this entity's principals allegedly set the premiums for the 16 plans after he calculated the average of sample rates posted by other insurance companies on the Internet and reduced them to ensure that Employers Mutual would offer competitive prices. 
DOL has determined that of the $16.1 million collected in premiums, Employers Mutual paid about $4.8 million in medical claims. According to DOL, the principals made payments for other purposes besides the payment of claims, including about $2.1 million in marketing, about $0.6 million in claims processing, and about $1.9 million to themselves or their companies. From his appointment in December 2001 through February 2004, the independent fiduciary recovered approximately $1.9 million in Employers Mutual's assets. The independent fiduciary and DOL reported that they were prevented from fully accounting for the money collected and paid out by Employers Mutual, its principals, and contracted companies because of the scope of its operations and the disarray and incompleteness of the records they were able to recover. The independent fiduciary reported that insurance claims totaling over $24 million remained unpaid as of February 2004. He paid $134,000 to a prescription service provider immediately after his appointment, and no additional medical claims had been paid. In March 2003, the fiduciary filed suit in federal court to recover the unpaid claims from the insurance agents who marketed Employers Mutual plans. When Nevada insurance regulators became aware of Employers Mutual, they found that it was transacting insurance business without a certificate of authority as required by Nevada law. Nevada therefore issued a cease and desist order against Employers Mutual in June 2001. In August 2001, Florida insurance regulators found that Employers Mutual was engaged in the business of insurance, including operating as a MEWA, without a certificate of authority as required by Florida law. Florida ordered Employers Mutual to stop selling insurance within Florida's borders pending an appeal by the entity, although at the time the state did not find evidence of delays or failures to pay medical claims. 
Other states, including Alabama, Colorado, Oklahoma, Texas, and Washington, filed cease and desist orders against Employers Mutual by December 2001. On November 21, 2001, the Nevada Commissioner of Insurance signed an Order of Seizure and Supervision seizing and taking possession of Employers Mutual funds held in Nevada bank accounts and granting the Nevada Commissioner supervision over the assets of Employers Mutual in Nevada. Nevada also reported that it engaged in a discussion involving 26 state insurance departments that led to an agreement with Employers Mutual to facilitate payments of claims nationwide. On December 13, 2001, the U.S. District Court for the District of Nevada granted a TRO against Employers Mutual and its four principals, and on December 20, 2001, the Nevada Commissioner surrendered all of Employers Mutual’s assets that she had recently seized to the independent fiduciary. In the TRO, DOL alleged that the principals used plan assets to benefit themselves; failed to discharge their obligations as fiduciaries with the loyalty, care, skill, and prudence required by ERISA; and paid excessive compensation for services provided to Employers Mutual. The TRO temporarily froze the assets of all the principals involved in this entity and prohibited them from conducting further activities related to the business. It also appointed an independent fiduciary to administer Employers Mutual and associated entities and, if necessary, implement their orderly termination. After a subsequent hearing, the U.S. District Court for the District of Nevada issued a preliminary injunction on February 1, 2002, leading to the interim shutdown of Employers Mutual nationwide. On April 30, 2002, the same court issued a quasi-bankruptcy order establishing a procedure for the orderly dissolution of the plans and payment of claims with assets recovered by DOL and the independent fiduciary. 
On September 10, 2003, the court issued a default judgment granting a permanent injunction against the principals and ordered them to pay $7.3 million in losses suffered as a result of their breach of fiduciary obligations to beneficiaries. In March 2003, the independent fiduciary filed suit in Nevada on behalf of the participants against Employers Mutual's principals alleging, among other things, that they participated in racketeering, fraud, and conspiracy. The independent fiduciary also sued the insurance agents, who either marketed or sold the plans, for malpractice as part of that action. The fiduciary has requested damages and relief for unpaid or unreimbursed claims. In October 2003, the court ordered the suit into mediation, scheduled for February 2004. Before mediation began, the fiduciary and some agents reached a proposed settlement that was awaiting court approval as of February 2004. Figure 4 contains a chronology of events from Employers Mutual's establishment to state and federal actions to shut it down. Plans that provide reduced rates for selected medical services rather than comprehensive health insurance benefits are known as discount plans. These plans are not health insurance because they do not assume any financial risk. Discount plans were marketed in most states. However, in some states, discount plans were inappropriately marketed using health insurance terms, and these misrepresented plans were targeted to small employers. Discount plans charge consumers a monthly membership fee in exchange for a list of health care professionals and others who will provide their services at a discounted rate. Because they do not assume any financial risk or pay any health care claims, discount plans are not health insurance. Most often, these plans provide discounts for such services as physician visits, dental care, vision care, or pharmacy. 
Some may also provide discounts for services provided by hospitals, ambulances, chiropractors, and other types of specialty medical care. The discounts offered and monthly fees vary by plan. For example, a consumer may pay $10 per month to a discount plan for access to lower cost dental services. A dentist participating in the discount plan may charge plan members 20 percent less than nonmembers. Therefore, if the fee is typically $60 for a dentist to perform certain procedures that help prevent disease—for example, removing plaque and tartar deposits from teeth—the plan member will pay a discounted fee of $48 to the dentist. Most state insurance departments do not regulate discount plans because they are not considered to be health insurance. None of the insurance departments in the states that we reviewed—Colorado, Florida, Georgia, and Texas—regulated discount plans. Thus, according to a state official, while state insurance departments might be aware that discount plans operated within their borders, they would not necessarily be able to quantify the extent to which they exist. When consumers complain about discount plans in Colorado, for example, the insurance department refers the complaints to the Attorney General. State officials indicated that discount plans are not problematic as long as companies market and advertise these plans accurately and consumers understand that these products are not health insurance. Advertisements for discount plans can be found on the Internet, through infomercials on television, on the radio, in local newspapers, on signs posted along roadways, in unsolicited “spam” e-mails or faxes, and in direct marketing and mailings. According to state officials, discount plans have positive and negative aspects. They said that discount plans can save some money for people who do not have health insurance and who know they will be using health care services. 
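The arithmetic behind such examples is simple; the sketch below uses the dental figures from the text plus a hypothetical $50,000 average surgical charge (not stated in the report) to compute what a member pays the provider under a discount plan.

```python
def discounted_charge(full_charge, discount_rate):
    """Amount a discount-plan member pays the provider:
    the full charge less the plan's negotiated discount.
    (The plan's monthly membership fee is paid separately.)"""
    return full_charge * (1 - discount_rate)

# Dental example from the text: a 20 percent discount on a $60
# cleaning leaves the member paying $48 to the dentist, on top of
# the $10 monthly membership fee.
print(discounted_charge(60, 0.20))      # 48.0

# A 20 percent discount on a hypothetical $50,000 average surgical
# charge still leaves roughly $40,000 out-of-pocket, illustrating
# why discount plans are no substitute for insurance against
# catastrophic costs.
print(discounted_charge(50_000, 0.20))  # 40000.0
```

The second computation makes concrete the report's point that, because discount plans pay no claims, the member bears the entire discounted charge.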
In addition, they said consumers can use these plans to augment health insurance policies providing only catastrophic coverage. However, they said that consumers needed to understand that using discount plans can result in higher out-of-pocket costs than typical health insurance. For example, getting a 20 percent discount on heart-bypass surgery at the average U.S. charge could still cost an individual about $40,000 out-of-pocket. Furthermore, it can be difficult for consumers to determine if providers are actually giving them a discount, as most providers do not list their charges. Discount plans were sold in most states. About 78 percent of the states responding to our survey (40 of 51 states) reported that discount plans were sold within their borders from 2000 through 2002. (See table 5.) Most states that reported discount plans were sold within their borders also reported that these plans were not marketed as health insurance. Most of the states that reported discount plans from 2000 through 2002 did not indicate any problems with how they were advertised. Fourteen states reported that discount plans were misrepresented as health insurance to some degree. For example, the Texas insurance department reported that it reviewed discount plans’ advertising materials that consumers and insurance agents brought to its attention. According to a state insurance department official, one issue that repeatedly arose with the marketing materials that the state reviewed was that some discount plans were inappropriately advertised as “health plans,” as “health benefits,” or with some other phrase similar to insurance. Furthermore, this official said that many discount plans had been marketed in Texas. Connecticut officials, however, were aware of only one discount plan, an out-of-state entity, which inappropriately advertised in the state as a “medical plan” providing affordable health care to families and individuals. 
The state officials reported that they did not know whether any Connecticut residents had subscribed. Utah officials reported that insurance terms were inappropriately used—for example, all preexisting conditions were immediately accepted and everyone was accepted regardless of medical history. According to Utah officials, advertisements did not usually state that they were discount plans and not health insurance, but when they did, the print was small and hard to read. As with unauthorized entities, small employers may be particularly vulnerable to discount plans that are misrepresented as insurance. Officials in 8 of the 14 states that reported discount plans were misrepresented as insurance also reported that the discount plans were marketed to small employers. These eight states were Maine, Nebraska, Oklahoma, Tennessee, Texas, Utah, Washington, and Wyoming. In the fall of 2001, NAIC developed a consumer alert to help prevent unauthorized entities from operating. This alert is intended to be a model states can use to help inform the public about these entities. NAIC distributed the consumer alert to all the states and also made it available on its Web site. The alert provides tips that consumers can follow to help protect themselves from the entities and sources to contact for additional information about these entities. (See fig. 5.) On August 6, 2002, the Secretary of Labor sent a memorandum to over 70 business leaders and associations asking them to distribute insurance tips for small employers to follow when they purchased health insurance for their employees. Because, according to the Secretary, "scam artists" were aggressively targeting small employers and their employees, the Secretary advised small employers to take extra precautions when obtaining health care coverage. 
The tips, entitled "How to Protect Your Employees When Purchasing Health Insurance," informed small employers that, among other things, they should verify with a state insurance department whether any unfamiliar companies or agents were licensed to sell health benefits coverage. DOL has updated these tips and makes them available on its Web site. Figure 6 includes the current version of DOL's tips.
Health insurance premiums have increased at double-digit rates over the past few years. While searching for affordable options, some employers and individuals have purchased coverage from certain entities that are not authorized by state insurance departments to sell this coverage. Such unauthorized entities--also sometimes referred to as bogus entities or scams--may collect premiums and not pay some or all of the legitimate medical claims filed by policyholders. GAO was asked to identify the number of these entities that operated from 2000 through 2002, the number of employers and policyholders covered, the amount of unpaid claims, and the methods state and federal governments employed to identify such entities and to stop and prevent them from operating. GAO analyzed information on these entities obtained from the Department of Labor (DOL) and from a survey of the 50 states and the District of Columbia. GAO also interviewed officials at DOL headquarters, at three regional offices, and at state insurance departments responsible for investigating these entities in four states--Colorado, Florida, Georgia, and Texas. DOL and the states identified 144 unique entities not authorized to sell health benefits coverage from 2000 through 2002. The number of entities newly identified increased each year, almost doubling from 31 in 2000 to 60 in 2002. Many of these entities targeted employers and policyholders in multiple states, and, of the seven states with 25 or more entities, five were located in the South. DOL and the states reported that the 144 unique entities (1) sold coverage to at least 15,000 employers, including many small employers; (2) covered more than 200,000 policyholders; and (3) left at least $252 million in unpaid medical claims, only about 21 percent of which had been recovered at the time of GAO's 2003 survey. States and DOL often identified these entities based on consumer complaints. 
DOL often relied on states to stop these entities within their borders while DOL focused its investigations on larger entities operating in multiple states and, in three cases, obtained court orders to stop these entities nationwide. Most of the states' prevention activities were geared to increasing public awareness and notifying the agents who sold this coverage, while DOL focused its efforts on alerting employer groups and small employers. In commenting on a draft of this report, DOL, the National Association of Insurance Commissioners, Florida, and Texas highlighted their efforts to increase public awareness, coordinate investigations, and take enforcement actions regarding these entities.